From patchwork Wed Nov  9 20:14:38 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13038019
From: Aaron Lewis
Date: Wed, 9 Nov 2022 20:14:38 +0000
In-Reply-To: <20221109201444.3399736-1-aaronlewis@google.com>
Message-ID: <20221109201444.3399736-2-aaronlewis@google.com>
Subject: [PATCH v7 1/7] kvm: x86/pmu: Correct the mask used in a pmu event filter lookup
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis
When checking if a pmu event the guest is attempting to program should be
filtered, only consider the event select + unit mask in that decision. Use
an architecture specific mask to mask out all other bits, including bits
35:32 on Intel.  Those bits are not part of the event select and should
not be considered in that decision.

Fixes: 66bb8a065f5a ("KVM: x86: PMU Event Filter")
Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/pmu.c           | 3 ++-
 arch/x86/kvm/pmu.h           | 2 ++
 arch/x86/kvm/svm/pmu.c       | 1 +
 arch/x86/kvm/vmx/pmu_intel.c | 1 +
 4 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 935c9d80ab50..5cf687196ce8 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -277,7 +277,8 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 		goto out;
 
 	if (pmc_is_gp(pmc)) {
-		key = pmc->eventsel & AMD64_RAW_EVENT_MASK_NB;
+		key = pmc->eventsel & (kvm_pmu_ops.EVENTSEL_EVENT |
+				       ARCH_PERFMON_EVENTSEL_UMASK);
 		if (bsearch(&key, filter->events, filter->nevents,
 			    sizeof(__u64), cmp_u64))
 			allow_event = filter->action == KVM_PMU_EVENT_ALLOW;
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 85ff3c0588ba..5b070c563a97 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -40,6 +40,8 @@ struct kvm_pmu_ops {
 	void (*reset)(struct kvm_vcpu *vcpu);
 	void (*deliver_pmi)(struct kvm_vcpu *vcpu);
 	void (*cleanup)(struct kvm_vcpu *vcpu);
+
+	const u64 EVENTSEL_EVENT;
 };
 
 void kvm_pmu_ops_update(const struct kvm_pmu_ops *pmu_ops);
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 2ec420b85d6a..560ddee1e0a9 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -228,4 +228,5 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.refresh = amd_pmu_refresh,
 	.init = amd_pmu_init,
 	.reset = amd_pmu_reset,
+	.EVENTSEL_EVENT = AMD64_EVENTSEL_EVENT,
 };
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index e4cd595ee221..a4b2caf0f85a 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -810,4 +810,5 @@ struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.reset = intel_pmu_reset,
 	.deliver_pmi = intel_pmu_deliver_pmi,
 	.cleanup = intel_pmu_cleanup,
+	.EVENTSEL_EVENT = ARCH_PERFMON_EVENTSEL_EVENT,
 };
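For readers skimming the series, a minimal sketch of the lookup key this
patch produces may help.  It is not part of the patch: the helper name is
invented, and the constant values are reproduced from
arch/x86/include/asm/perf_event.h purely for illustration.

#include <stdint.h>

/* Values assumed from perf_event.h; shown here only for illustration. */
#define ARCH_PERFMON_EVENTSEL_EVENT	0x00000000000000FFULL	/* bits 7:0   */
#define ARCH_PERFMON_EVENTSEL_UMASK	0x000000000000FF00ULL	/* bits 15:8  */
#define AMD64_EVENTSEL_EVENT		0x0000000F000000FFULL	/* bits 35:32 and 7:0 */

/* Only the event select + unit mask participate in the filter lookup. */
static uint64_t filter_lookup_key(uint64_t eventsel, int is_amd)
{
	uint64_t event_mask = is_amd ? AMD64_EVENTSEL_EVENT :
				       ARCH_PERFMON_EVENTSEL_EVENT;

	return eventsel & (event_mask | ARCH_PERFMON_EVENTSEL_UMASK);
}

On Intel, bits 35:32 of the guest's eventsel are now dropped from the key,
which is exactly what replacing AMD64_RAW_EVENT_MASK_NB accomplishes.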
From patchwork Wed Nov  9 20:14:39 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13038020
From: Aaron Lewis
Date: Wed, 9 Nov 2022 20:14:39 +0000
In-Reply-To: <20221109201444.3399736-1-aaronlewis@google.com>
Message-ID: <20221109201444.3399736-3-aaronlewis@google.com>
Subject: [PATCH v7 2/7] kvm: x86/pmu: Remove impossible events from the pmu event filter
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

If it's not possible for an event in the pmu event filter to match a pmu
event being programmed by the guest, it's pointless to have it in the
list.  Opt for a shorter list by removing those events.

Because this is established uAPI the pmu event filter can't outright
reject these events as garbage and return an error.  Instead, play nice
and remove them from the list.

Also, opportunistically rewrite the comment when the filter is set to
clarify that it guards against *all* TOCTOU attacks on the verified data.
Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/pmu.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 5cf687196ce8..0a6ad955fc21 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -592,6 +592,21 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
 }
 EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event);
 
+static void remove_impossible_events(struct kvm_pmu_event_filter *filter)
+{
+	int i, j;
+
+	for (i = 0, j = 0; i < filter->nevents; i++) {
+		if (filter->events[i] & ~(kvm_pmu_ops.EVENTSEL_EVENT |
+					  ARCH_PERFMON_EVENTSEL_UMASK))
+			continue;
+
+		filter->events[j++] = filter->events[i];
+	}
+
+	filter->nevents = j;
+}
+
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_pmu_event_filter tmp, *filter;
@@ -622,9 +637,11 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	if (copy_from_user(filter, argp, size))
 		goto cleanup;
 
-	/* Ensure nevents can't be changed between the user copies. */
+	/* Restore the verified state to guard against TOCTOU attacks. */
 	*filter = tmp;
 
+	remove_impossible_events(filter);
+
 	/*
 	 * Sort the in-kernel list so that we can search it with bsearch.
 	 */
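As a concrete illustration of what "impossible" means here, the sketch
below restates the check used by remove_impossible_events().  The helper
and mask names are invented for this sketch; the mask shown is the AMD
case, and on Intel bits 35:32 are dropped as well.

#include <stdbool.h>
#include <stdint.h>

/* Event select (bits 35:32 and 7:0 on AMD) plus unit mask (bits 15:8). */
#define FILTER_USABLE_BITS	0x0000000F0000FFFFULL

static bool filter_event_is_possible(uint64_t event)
{
	/*
	 * An entry such as 0x10000 sets only bits that a guest event is
	 * never compared against, so it can never match and gets dropped.
	 */
	return !(event & ~FILTER_USABLE_BITS);
}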
From patchwork Wed Nov  9 20:14:40 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13038021
From: Aaron Lewis
Date: Wed, 9 Nov 2022 20:14:40 +0000
In-Reply-To: <20221109201444.3399736-1-aaronlewis@google.com>
Message-ID: <20221109201444.3399736-4-aaronlewis@google.com>
Subject: [PATCH v7 3/7] kvm: x86/pmu: prepare the pmu event filter for masked events
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

Refactor check_pmu_event_filter() in preparation for masked events.

No functional changes intended.

Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/pmu.c | 56 +++++++++++++++++++++++++++-------------------
 1 file changed, 33 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 0a6ad955fc21..a98013b939e3 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -261,41 +261,51 @@ static int cmp_u64(const void *pa, const void *pb)
 	return (a > b) - (a < b);
 }
 
+static u64 *find_filter_entry(struct kvm_pmu_event_filter *filter, u64 key)
+{
+	return bsearch(&key, filter->events, filter->nevents,
+		       sizeof(filter->events[0]), cmp_u64);
+}
+
+static bool is_gp_event_allowed(struct kvm_pmu_event_filter *filter, u64 eventsel)
+{
+	if (find_filter_entry(filter, eventsel & (kvm_pmu_ops.EVENTSEL_EVENT |
+						  ARCH_PERFMON_EVENTSEL_UMASK)))
+		return filter->action == KVM_PMU_EVENT_ALLOW;
+
+	return filter->action == KVM_PMU_EVENT_DENY;
+}
+
+static bool is_fixed_event_allowed(struct kvm_pmu_event_filter *filter, int idx)
+{
+	int fixed_idx = idx - INTEL_PMC_IDX_FIXED;
+
+	if (filter->action == KVM_PMU_EVENT_DENY &&
+	    test_bit(fixed_idx, (ulong *)&filter->fixed_counter_bitmap))
+		return false;
+	if (filter->action == KVM_PMU_EVENT_ALLOW &&
+	    !test_bit(fixed_idx, (ulong *)&filter->fixed_counter_bitmap))
+		return false;
+
+	return true;
+}
+
 static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu_event_filter *filter;
 	struct kvm *kvm = pmc->vcpu->kvm;
-	bool allow_event = true;
-	__u64 key;
-	int idx;
 
 	if (!static_call(kvm_x86_pmu_hw_event_available)(pmc))
 		return false;
 
 	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
 	if (!filter)
-		goto out;
+		return true;
 
-	if (pmc_is_gp(pmc)) {
-		key = pmc->eventsel & (kvm_pmu_ops.EVENTSEL_EVENT |
-				       ARCH_PERFMON_EVENTSEL_UMASK);
-		if (bsearch(&key, filter->events, filter->nevents,
-			    sizeof(__u64), cmp_u64))
-			allow_event = filter->action == KVM_PMU_EVENT_ALLOW;
-		else
-			allow_event = filter->action == KVM_PMU_EVENT_DENY;
-	} else {
-		idx = pmc->idx - INTEL_PMC_IDX_FIXED;
-		if (filter->action == KVM_PMU_EVENT_DENY &&
-		    test_bit(idx, (ulong *)&filter->fixed_counter_bitmap))
-			allow_event = false;
-		if (filter->action == KVM_PMU_EVENT_ALLOW &&
-		    !test_bit(idx, (ulong *)&filter->fixed_counter_bitmap))
-			allow_event = false;
-	}
+	if (pmc_is_gp(pmc))
+		return is_gp_event_allowed(filter, pmc->eventsel);
 
-out:
-	return allow_event;
+	return is_fixed_event_allowed(filter, pmc->idx);
 }
 
 static void reprogram_counter(struct kvm_pmc *pmc)
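For completeness, the fixed-counter policy that is_fixed_event_allowed()
now encapsulates can be restated as a small sketch (helper name invented;
the logic mirrors the code above):

#include <stdbool.h>
#include <stdint.h>

#define KVM_PMU_EVENT_ALLOW 0
#define KVM_PMU_EVENT_DENY  1

/* A fixed counter is permitted iff its bitmap bit agrees with the action. */
static bool fixed_counter_allowed(uint32_t action, uint32_t bitmap, int fixed_idx)
{
	bool listed = bitmap & (1u << fixed_idx);

	return action == KVM_PMU_EVENT_ALLOW ? listed : !listed;
}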
From patchwork Wed Nov  9 20:14:41 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13038022
From: Aaron Lewis
Date: Wed, 9 Nov 2022 20:14:41 +0000
In-Reply-To: <20221109201444.3399736-1-aaronlewis@google.com>
Message-ID: <20221109201444.3399736-5-aaronlewis@google.com>
Subject: [PATCH v7 4/7] kvm: x86/pmu: Introduce masked events to the pmu event filter
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

When building a list of filter events, it can sometimes be a challenge to
fit all the events needed to adequately restrict the guest into the
limited space available in the pmu event filter.  This stems from the fact
that the pmu event filter requires each event (i.e. event select + unit
mask) be listed, when the intention might be to restrict the event select
altogether, regardless of its unit mask.  Instead of increasing the number
of filter events in the pmu event filter, add a new encoding that is able
to do a more generalized match on the unit mask.

Introduce masked events as another encoding the pmu event filter
understands.  Masked events have the fields: mask, match, and exclude.
When filtering based on these events, the mask is applied to the guest's
unit mask to see if it matches the match value (i.e. umask & mask ==
match).  The exclude bit can then be used to exclude events from that
match.  E.g. for a given event select, if it's easier to say which unit
mask values shouldn't be filtered, a masked event can be set up to match
all possible unit mask values, then another masked event can be set up to
match the unit mask values that shouldn't be filtered.

Userspace can query to see if this feature exists by looking for the
capability, KVM_CAP_PMU_EVENT_MASKED_EVENTS.

This feature is enabled by setting the flags field in the pmu event
filter to KVM_PMU_EVENT_FLAG_MASKED_EVENTS.

Events can be encoded by using KVM_PMU_ENCODE_MASKED_ENTRY().

It is an error to have a bit set outside the valid bits for a masked
event, and calls to KVM_SET_PMU_EVENT_FILTER will return -EINVAL in such
cases, including the high bits of the event select (35:32) if called on
Intel.

With these updates the filter matching code has been updated to match on
a common event.  Masked events were flexible enough to handle both event
types, so they were used as the common event.  This changes how guest
events get filtered because regardless of the type of event used in the
uAPI, they will be converted to masked events.  Because of this there
could be a slight performance hit because instead of matching the filter
event with a lookup on event select + unit mask, it does a lookup on event
select then walks the unit masks to find the match.  This shouldn't be a
big problem because I would expect the set of common event selects to be
small, and if they aren't the set can likely be reduced by using masked
events to generalize the unit mask.  Using one type of event when
filtering guest events allows for a common code path to be used.

Signed-off-by: Aaron Lewis
---
 Documentation/virt/kvm/api.rst  |  77 +++++++++++--
 arch/x86/include/asm/kvm_host.h |  14 ++-
 arch/x86/include/uapi/asm/kvm.h |  29 +++++
 arch/x86/kvm/pmu.c              | 197 +++++++++++++++++++++++++++-----
 arch/x86/kvm/x86.c              |   1 +
 include/uapi/linux/kvm.h        |   1 +
 6 files changed, 281 insertions(+), 38 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index eee9f857a986..0cf07fbe3d78 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -5031,6 +5031,15 @@ using this ioctl.
 :Parameters: struct kvm_pmu_event_filter (in)
 :Returns: 0 on success, -1 on error
 
+Errors:
+
+  ======     ============================================================
+  EFAULT     args[0] cannot be accessed
+  EINVAL     args[0] contains invalid data in the filter or filter events
+  E2BIG      nevents is too large
+  EBUSY      not enough memory to allocate the filter
+  ======     ============================================================
+
 ::
 
   struct kvm_pmu_event_filter {
@@ -5042,14 +5051,68 @@ using this ioctl.
 	__u64 events[0];
   };
 
-This ioctl restricts the set of PMU events that the guest can program.
-The argument holds a list of events which will be allowed or denied.
-The eventsel+umask of each event the guest attempts to program is compared
-against the events field to determine whether the guest should have access.
-The events field only controls general purpose counters; fixed purpose
-counters are controlled by the fixed_counter_bitmap.
+This ioctl restricts the set of PMU events the guest can program by limiting
+which event select and unit mask combinations are permitted.
+
+The argument holds a list of filter events which will be allowed or denied.
+
+Filter events only control general purpose counters; fixed purpose counters
+are controlled by the fixed_counter_bitmap.
+
+Valid values for 'flags'::
+
+``0``
+
+To use this mode, clear the 'flags' field.
+
+In this mode each event will contain an event select + unit mask.
+
+When the guest attempts to program the PMU the guest's event select +
+unit mask is compared against the filter events to determine whether the
+guest should have access.
+
+``KVM_PMU_EVENT_FLAG_MASKED_EVENTS``
+:Capability: KVM_CAP_PMU_EVENT_MASKED_EVENTS
+
+In this mode each filter event will contain an event select, mask, match, and
+exclude value.  To encode a masked event use::
+
+  KVM_PMU_ENCODE_MASKED_ENTRY()
+
+An encoded event will follow this layout::
+
+  Bits   Description
+  ----   -----------
+  7:0    event select (low bits)
+  15:8   umask match
+  31:16  unused
+  35:32  event select (high bits)
+  36:54  unused
+  55     exclude bit
+  63:56  umask mask
+
+When the guest attempts to program the PMU, these steps are followed in
+determining if the guest should have access:
+ 1. Match the event select from the guest against the filter events.
+ 2. If a match is found, match the guest's unit mask to the mask and match
+    values of the included filter events.
+    I.e. (unit mask & mask) == match && !exclude.
+ 3. If a match is found, match the guest's unit mask to the mask and match
+    values of the excluded filter events.
+    I.e. (unit mask & mask) == match && exclude.
+ 4.
+   a. If an included match is found and an excluded match is not found, filter
+      the event.
+   b. For everything else, do not filter the event.
+ 5.
+   a. If the event is filtered and it's an allow list, allow the guest to
+      program the event.
+   b. If the event is filtered and it's a deny list, do not allow the guest to
+      program the event.
 
-No flags are defined yet, the field must be zero.
+When setting a new pmu event filter, -EINVAL will be returned if any of the
+unused fields are set or if any of the high bits (35:32) in the event
+select are set when called on Intel.
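To make the masked-event mode concrete, here is a hedged userspace sketch,
not part of the patch, that assumes headers carrying the new
KVM_PMU_ENCODE_MASKED_ENTRY() macro and flag from this series.  It allows
every unit mask of event select 0xD0 except unit mask 0x83, using one
include entry and one exclude entry:

#include <linux/kvm.h>
#include <stdbool.h>
#include <sys/ioctl.h>

static int set_masked_filter(int vm_fd)
{
	struct {
		struct kvm_pmu_event_filter filter;
		__u64 events[2];
	} f = {};

	f.filter.action = KVM_PMU_EVENT_ALLOW;
	f.filter.flags = KVM_PMU_EVENT_FLAG_MASKED_EVENTS;
	f.filter.nevents = 2;

	/* Include: event select 0xD0; mask 0x00/match 0x00 matches any umask. */
	f.events[0] = KVM_PMU_ENCODE_MASKED_ENTRY(0xD0, 0x00, 0x00, false);
	/* Exclude: event select 0xD0 with a unit mask of exactly 0x83. */
	f.events[1] = KVM_PMU_ENCODE_MASKED_ENTRY(0xD0, 0xFF, 0x83, true);

	return ioctl(vm_fd, KVM_SET_PMU_EVENT_FILTER, &f.filter);
}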
Valid values for 'action':: diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index d2e6f0ddc21c..2398074349d1 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1086,6 +1086,18 @@ struct kvm_x86_msr_filter { struct msr_bitmap_range ranges[16]; }; +struct kvm_x86_pmu_event_filter { + __u32 action; + __u32 nevents; + __u32 fixed_counter_bitmap; + __u32 flags; + __u32 nr_includes; + __u32 nr_excludes; + __u64 *includes; + __u64 *excludes; + __u64 events[]; +}; + enum kvm_apicv_inhibit { /********************************************************************/ @@ -1291,7 +1303,7 @@ struct kvm_arch { /* Guest can access the SGX PROVISIONKEY. */ bool sgx_provisioning_allowed; - struct kvm_pmu_event_filter __rcu *pmu_event_filter; + struct kvm_x86_pmu_event_filter __rcu *pmu_event_filter; struct task_struct *nx_huge_page_recovery_thread; #ifdef CONFIG_X86_64 diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h index c6df6b16a088..23104b189111 100644 --- a/arch/x86/include/uapi/asm/kvm.h +++ b/arch/x86/include/uapi/asm/kvm.h @@ -533,6 +533,35 @@ struct kvm_pmu_event_filter { #define KVM_PMU_EVENT_ALLOW 0 #define KVM_PMU_EVENT_DENY 1 +#define KVM_PMU_EVENT_FLAG_MASKED_EVENTS BIT(0) +#define KVM_PMU_EVENT_FLAGS_VALID_MASK (KVM_PMU_EVENT_FLAG_MASKED_EVENTS) + +/* + * Masked event layout. + * Bits Description + * ---- ----------- + * 7:0 event select (low bits) + * 15:8 umask match + * 31:16 unused + * 35:32 event select (high bits) + * 36:54 unused + * 55 exclude bit + * 63:56 umask mask + */ + +#define KVM_PMU_ENCODE_MASKED_ENTRY(event_select, mask, match, exclude) \ + (((event_select) & 0xFFULL) | (((event_select) & 0XF00ULL) << 24) | \ + (((mask) & 0xFFULL) << 56) | \ + (((match) & 0xFFULL) << 8) | \ + ((__u64)(!!(exclude)) << 55)) + +#define KVM_PMU_MASKED_ENTRY_EVENT_SELECT \ + (GENMASK_ULL(7, 0) | GENMASK_ULL(35, 32)) +#define KVM_PMU_MASKED_ENTRY_UMASK_MASK (GENMASK_ULL(63, 56)) +#define KVM_PMU_MASKED_ENTRY_UMASK_MATCH (GENMASK_ULL(15, 8)) +#define KVM_PMU_MASKED_ENTRY_EXCLUDE (BIT_ULL(55)) +#define KVM_PMU_MASKED_ENTRY_UMASK_MASK_SHIFT (56) + /* for KVM_{GET,SET,HAS}_DEVICE_ATTR */ #define KVM_VCPU_TSC_CTRL 0 /* control group for the timestamp counter (TSC) */ #define KVM_VCPU_TSC_OFFSET 0 /* attribute for the TSC offset */ diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index a98013b939e3..0ad2bcec25b2 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -253,30 +253,99 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc) return true; } -static int cmp_u64(const void *pa, const void *pb) +static int filter_cmp(const void *pa, const void *pb, u64 mask) { - u64 a = *(u64 *)pa; - u64 b = *(u64 *)pb; + u64 a = *(u64 *)pa & mask; + u64 b = *(u64 *)pb & mask; return (a > b) - (a < b); } -static u64 *find_filter_entry(struct kvm_pmu_event_filter *filter, u64 key) + +static int filter_sort_cmp(const void *pa, const void *pb) +{ + return filter_cmp(pa, pb, (KVM_PMU_MASKED_ENTRY_EVENT_SELECT | + KVM_PMU_MASKED_ENTRY_EXCLUDE)); +} + +/* + * For the event filter, searching is done on the 'includes' list and + * 'excludes' list separately rather than on the 'events' list (which + * has both). As a result the exclude bit can be ignored. 
+ */ +static int filter_event_cmp(const void *pa, const void *pb) +{ + return filter_cmp(pa, pb, (KVM_PMU_MASKED_ENTRY_EVENT_SELECT)); +} + +static int find_filter_index(u64 *events, u64 nevents, u64 key) +{ + u64 *fe = bsearch(&key, events, nevents, sizeof(events[0]), + filter_event_cmp); + + if (!fe) + return -1; + + return fe - events; +} + +static bool is_filter_entry_match(u64 filter_event, u64 umask) +{ + u64 mask = filter_event >> (KVM_PMU_MASKED_ENTRY_UMASK_MASK_SHIFT - 8); + u64 match = filter_event & KVM_PMU_MASKED_ENTRY_UMASK_MATCH; + + BUILD_BUG_ON((KVM_PMU_ENCODE_MASKED_ENTRY(0, 0xff, 0, false) >> + (KVM_PMU_MASKED_ENTRY_UMASK_MASK_SHIFT - 8)) != + ARCH_PERFMON_EVENTSEL_UMASK); + + return (umask & mask) == match; +} + +static bool filter_contains_match(u64 *events, u64 nevents, u64 eventsel) { - return bsearch(&key, filter->events, filter->nevents, - sizeof(filter->events[0]), cmp_u64); + u64 event_select = eventsel & kvm_pmu_ops.EVENTSEL_EVENT; + u64 umask = eventsel & ARCH_PERFMON_EVENTSEL_UMASK; + int i, index; + + index = find_filter_index(events, nevents, event_select); + if (index < 0) + return false; + + /* + * Entries are sorted by the event select. Walk the list in both + * directions to process all entries with the targeted event select. + */ + for (i = index; i < nevents; i++) { + if (filter_event_cmp(&events[i], &event_select)) + break; + + if (is_filter_entry_match(events[i], umask)) + return true; + } + + for (i = index - 1; i >= 0; i--) { + if (filter_event_cmp(&events[i], &event_select)) + break; + + if (is_filter_entry_match(events[i], umask)) + return true; + } + + return false; } -static bool is_gp_event_allowed(struct kvm_pmu_event_filter *filter, u64 eventsel) +static bool is_gp_event_allowed(struct kvm_x86_pmu_event_filter *f, + u64 eventsel) { - if (find_filter_entry(filter, eventsel & (kvm_pmu_ops.EVENTSEL_EVENT | - ARCH_PERFMON_EVENTSEL_UMASK))) - return filter->action == KVM_PMU_EVENT_ALLOW; + if (filter_contains_match(f->includes, f->nr_includes, eventsel) && + !filter_contains_match(f->excludes, f->nr_excludes, eventsel)) + return f->action == KVM_PMU_EVENT_ALLOW; - return filter->action == KVM_PMU_EVENT_DENY; + return f->action == KVM_PMU_EVENT_DENY; } -static bool is_fixed_event_allowed(struct kvm_pmu_event_filter *filter, int idx) +static bool is_fixed_event_allowed(struct kvm_x86_pmu_event_filter *filter, + int idx) { int fixed_idx = idx - INTEL_PMC_IDX_FIXED; @@ -292,7 +361,7 @@ static bool is_fixed_event_allowed(struct kvm_pmu_event_filter *filter, int idx) static bool check_pmu_event_filter(struct kvm_pmc *pmc) { - struct kvm_pmu_event_filter *filter; + struct kvm_x86_pmu_event_filter *filter; struct kvm *kvm = pmc->vcpu->kvm; if (!static_call(kvm_x86_pmu_hw_event_available)(pmc)) @@ -602,60 +671,128 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id) } EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event); -static void remove_impossible_events(struct kvm_pmu_event_filter *filter) +static bool is_masked_filter_valid(const struct kvm_x86_pmu_event_filter *filter) +{ + u64 mask = kvm_pmu_ops.EVENTSEL_EVENT | + KVM_PMU_MASKED_ENTRY_UMASK_MASK | + KVM_PMU_MASKED_ENTRY_UMASK_MATCH | + KVM_PMU_MASKED_ENTRY_EXCLUDE; + int i; + + for (i = 0; i < filter->nevents; i++) { + if (filter->events[i] & ~mask) + return false; + } + + return true; +} + +static void convert_to_masked_filter(struct kvm_x86_pmu_event_filter *filter) { int i, j; for (i = 0, j = 0; i < filter->nevents; i++) { + /* + * Skip events that are impossible to match against a guest + * 
event. When filtering, only the event select + unit mask + * of the guest event is used. To maintain backwards + * compatibility, impossible filters can't be rejected :-( + */ if (filter->events[i] & ~(kvm_pmu_ops.EVENTSEL_EVENT | ARCH_PERFMON_EVENTSEL_UMASK)) continue; - - filter->events[j++] = filter->events[i]; + /* + * Convert userspace events to a common in-kernel event so + * only one code path is needed to support both events. For + * the in-kernel events use masked events because they are + * flexible enough to handle both cases. To convert to masked + * events all that's needed is to add an "all ones" umask_mask, + * (unmasked filter events don't support EXCLUDE). + */ + filter->events[j++] = filter->events[i] | + (0xFFULL << KVM_PMU_MASKED_ENTRY_UMASK_MASK_SHIFT); } filter->nevents = j; } +static int prepare_filter_lists(struct kvm_x86_pmu_event_filter *filter) +{ + int i; + + if (!(filter->flags & KVM_PMU_EVENT_FLAG_MASKED_EVENTS)) + convert_to_masked_filter(filter); + else if (!is_masked_filter_valid(filter)) + return -EINVAL; + + /* + * Sort entries by event select and includes vs. excludes so that all + * entries for a given event select can be processed efficiently during + * filtering. The EXCLUDE flag uses a more significant bit than the + * event select, and so the sorted list is also effectively split into + * includes and excludes sub-lists. + */ + sort(&filter->events, filter->nevents, sizeof(filter->events[0]), + filter_sort_cmp, NULL); + + i = filter->nevents; + /* Find the first EXCLUDE event (only supported for masked events). */ + if (filter->flags & KVM_PMU_EVENT_FLAG_MASKED_EVENTS) { + for (i = 0; i < filter->nevents; i++) { + if (filter->events[i] & KVM_PMU_MASKED_ENTRY_EXCLUDE) + break; + } + } + + filter->nr_includes = i; + filter->nr_excludes = filter->nevents - filter->nr_includes; + filter->includes = filter->events; + filter->excludes = filter->events + filter->nr_includes; + + return 0; +} + int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp) { - struct kvm_pmu_event_filter tmp, *filter; + struct kvm_pmu_event_filter __user *user_filter = argp; + struct kvm_x86_pmu_event_filter *filter; + struct kvm_pmu_event_filter tmp; struct kvm_vcpu *vcpu; unsigned long i; size_t size; int r; - if (copy_from_user(&tmp, argp, sizeof(tmp))) + if (copy_from_user(&tmp, user_filter, sizeof(tmp))) return -EFAULT; if (tmp.action != KVM_PMU_EVENT_ALLOW && tmp.action != KVM_PMU_EVENT_DENY) return -EINVAL; - if (tmp.flags != 0) + if (tmp.flags & ~KVM_PMU_EVENT_FLAGS_VALID_MASK) return -EINVAL; if (tmp.nevents > KVM_PMU_EVENT_FILTER_MAX_EVENTS) return -E2BIG; size = struct_size(filter, events, tmp.nevents); - filter = kmalloc(size, GFP_KERNEL_ACCOUNT); + filter = kzalloc(size, GFP_KERNEL_ACCOUNT); if (!filter) return -ENOMEM; + filter->action = tmp.action; + filter->nevents = tmp.nevents; + filter->fixed_counter_bitmap = tmp.fixed_counter_bitmap; + filter->flags = tmp.flags; + r = -EFAULT; - if (copy_from_user(filter, argp, size)) + if (copy_from_user(filter->events, user_filter->events, + sizeof(filter->events[0]) * filter->nevents)) goto cleanup; - /* Restore the verified state to guard against TOCTOU attacks. */ - *filter = tmp; - - remove_impossible_events(filter); - - /* - * Sort the in-kernel list so that we can search it with bsearch. 
- */ - sort(&filter->events, filter->nevents, sizeof(__u64), cmp_u64, NULL); + r = prepare_filter_lists(filter); + if (r) + goto cleanup; mutex_lock(&kvm->lock); filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter, diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 19099c413363..507874829b13 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -4385,6 +4385,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_SPLIT_IRQCHIP: case KVM_CAP_IMMEDIATE_EXIT: case KVM_CAP_PMU_EVENT_FILTER: + case KVM_CAP_PMU_EVENT_MASKED_EVENTS: case KVM_CAP_GET_MSR_FEATURES: case KVM_CAP_MSR_PLATFORM_INFO: case KVM_CAP_EXCEPTION_PAYLOAD: diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 7fea12369245..ed7fa2f40774 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1181,6 +1181,7 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_S390_ZPCI_OP 221 #define KVM_CAP_S390_CPU_TOPOLOGY 222 #define KVM_CAP_DIRTY_LOG_RING_ACQ_REL 223 +#define KVM_CAP_PMU_EVENT_MASKED_EVENTS 224 #ifdef KVM_CAP_IRQ_ROUTING From patchwork Wed Nov 9 20:14:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Aaron Lewis X-Patchwork-Id: 13038023 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2A3DBC4332F for ; Wed, 9 Nov 2022 20:15:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231664AbiKIUPA (ORCPT ); Wed, 9 Nov 2022 15:15:00 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58632 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231685AbiKIUO7 (ORCPT ); Wed, 9 Nov 2022 15:14:59 -0500 Received: from mail-pj1-x1049.google.com (mail-pj1-x1049.google.com [IPv6:2607:f8b0:4864:20::1049]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 140EE1DA67 for ; Wed, 9 Nov 2022 12:14:58 -0800 (PST) Received: by mail-pj1-x1049.google.com with SMTP id nl16-20020a17090b385000b002138288fd51so1970890pjb.6 for ; Wed, 09 Nov 2022 12:14:58 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=1Ems4DuloFYGcSsRCUY4EFa8uX8Hx7Kg+wd/blVe0fM=; b=koB4Rx0OHBFORPj6y/NBm5RACPCkj4aCdpMEtzl9GIVPgm7O9vkID9H1BySckneY80 TqgtCrvxYN8Koe+HhSueqzdJ0q8IFxzLnv9RDAyZIbVQp59sa0/1IRDSE+xtSStq0USv 12Pf8nFdqe+s/1eIqZXfV6AzWXIVWkbek1h41ioGamKUTWGv4OASM7yDARhTr3Kcbrmj UmN1W3GU12aoo/2wotPrSFgSNBl+kfUeqhtVc/e0ZpjfEzXWE0Bqm0LlowGX9qKceuoS bU9gMYw3NiAMtdcOolWzY2SV4qsHr1l3sb4CJWh4+xYEhCWiBhNDuKqngWaAmg6pQQZf cAng== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=1Ems4DuloFYGcSsRCUY4EFa8uX8Hx7Kg+wd/blVe0fM=; b=S9ISDfsC6HgP7q35p/OKxNIkV8nTsmEvtanqAolpFQdTId/D5iISpMdSTUYtaalybw pRZL2U4LAViO/DXboSgUSyYlBzCUDFD3fq/7OMwx5mU6sxAb7+J5gHNR2gVD1Vu3ODze PLD04vFxjvnAdW0PIK7zmUVVgc4NKHejS4z6Te7EdVM3WnoBqPcWdMuNOAA9G9YjI0ci mVwRDhy1/H6akbZAoonh86rwzwoKFrzJOujt51wLUaeoDzjibqU2SROGi3hcq4tkXq3w 3qGqhR7fEj4PFXJINDJMJiJ+sIfLeK7w+dSEd1OFS0QTMIqDv1VK6pnMb48AVUoMr5zg 5yRg== X-Gm-Message-State: 
Date: Wed, 9 Nov 2022 20:14:42 +0000
In-Reply-To: <20221109201444.3399736-1-aaronlewis@google.com>
Message-ID: <20221109201444.3399736-6-aaronlewis@google.com>
Subject: [PATCH v7 5/7] selftests: kvm/x86: Add flags when creating a pmu event filter
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

Now that the flags field can be non-zero, pass it in when creating a pmu
event filter.

This is needed in preparation for testing masked events.

No functional change intended.

Signed-off-by: Aaron Lewis
---
 .../testing/selftests/kvm/x86_64/pmu_event_filter_test.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index ea4e259a1e2e..bd7054a53981 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -221,14 +221,15 @@ static struct kvm_pmu_event_filter *alloc_pmu_event_filter(uint32_t nevents)
 
 static struct kvm_pmu_event_filter *
-create_pmu_event_filter(const uint64_t event_list[],
-			int nevents, uint32_t action)
+create_pmu_event_filter(const uint64_t event_list[], int nevents,
+			uint32_t action, uint32_t flags)
 {
 	struct kvm_pmu_event_filter *f;
 	int i;
 
 	f = alloc_pmu_event_filter(nevents);
 	f->action = action;
+	f->flags = flags;
 	for (i = 0; i < nevents; i++)
 		f->events[i] = event_list[i];
 
@@ -239,7 +240,7 @@ static struct kvm_pmu_event_filter *event_filter(uint32_t action)
 {
 	return create_pmu_event_filter(event_list,
 				       ARRAY_SIZE(event_list),
-				       action);
+				       action, 0);
 }
 
 /*
@@ -286,7 +287,7 @@ static void test_amd_deny_list(struct kvm_vcpu *vcpu)
 	struct kvm_pmu_event_filter *f;
 	uint64_t count;
 
-	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY);
+	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0);
 	count = test_with_filter(vcpu, f);
 	free(f);
From patchwork Wed Nov  9 20:14:43 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13038024
From: Aaron Lewis
Date: Wed, 9 Nov 2022 20:14:43 +0000
In-Reply-To: <20221109201444.3399736-1-aaronlewis@google.com>
Message-ID: <20221109201444.3399736-7-aaronlewis@google.com>
Subject: [PATCH v7 6/7] selftests: kvm/x86: Add testing for KVM_SET_PMU_EVENT_FILTER
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

Test that masked events are not using invalid bits, and if they are,
ensure the pmu event filter is not accepted by KVM_SET_PMU_EVENT_FILTER.
The only valid bits that can be used for masked events are set when using
KVM_PMU_ENCODE_MASKED_ENTRY() with one exception: If any of the high bits
(35:32) of the event select are set when using Intel, the pmu event filter
will fail.

Also, because validation was not being done prior to the introduction of
masked events, only expect validation to fail when masked events are used.
E.g. in the first test a filter event with all its bits set is accepted by
KVM_SET_PMU_EVENT_FILTER when flags = 0.
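The validity rule described above can be sketched as follows (hypothetical
helper that mirrors the kernel's is_masked_filter_valid(); on Intel the
event select mask covers only bits 7:0, so setting bits 35:32 makes the
entry invalid):

#include <stdbool.h>
#include <stdint.h>

static bool masked_entry_is_valid(uint64_t entry, uint64_t eventsel_event_mask)
{
	uint64_t valid = eventsel_event_mask |	/* event select */
			 (0xFFULL << 56) |	/* umask mask   */
			 (0xFFULL << 8)  |	/* umask match  */
			 (1ULL << 55);		/* exclude bit  */

	return !(entry & ~valid);
}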
Signed-off-by: Aaron Lewis --- .../kvm/x86_64/pmu_event_filter_test.c | 36 +++++++++++++++++++ 1 file changed, 36 insertions(+) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c index bd7054a53981..0750e2fa7a38 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c @@ -442,6 +442,39 @@ static bool use_amd_pmu(void) is_zen3(entry->eax)); } +static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events, + int nevents, uint32_t flags) +{ + struct kvm_pmu_event_filter *f; + int r; + + f = create_pmu_event_filter(events, nevents, KVM_PMU_EVENT_ALLOW, flags); + r = __vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f); + free(f); + + return r; +} + +static void test_filter_ioctl(struct kvm_vcpu *vcpu) +{ + uint64_t e = ~0ul; + int r; + + /* + * Unfortunately having invalid bits set in event data is expected to + * pass when flags == 0 (bits other than eventsel+umask). + */ + r = run_filter_test(vcpu, &e, 1, 0); + TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing"); + + r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS); + TEST_ASSERT(r != 0, "Invalid PMU Event Filter is expected to fail"); + + e = KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(0xff, 0xff, 0xff, 0xf); + r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS); + TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing"); +} + int main(int argc, char *argv[]) { void (*guest_code)(void); @@ -452,6 +485,7 @@ int main(int argc, char *argv[]) setbuf(stdout, NULL); TEST_REQUIRE(kvm_has_cap(KVM_CAP_PMU_EVENT_FILTER)); + TEST_REQUIRE(kvm_has_cap(KVM_CAP_PMU_EVENT_MASKED_EVENTS)); TEST_REQUIRE(use_intel_pmu() || use_amd_pmu()); guest_code = use_intel_pmu() ? 
intel_guest_code : amd_guest_code; @@ -472,6 +506,8 @@ int main(int argc, char *argv[]) test_not_member_deny_list(vcpu); test_not_member_allow_list(vcpu); + test_filter_ioctl(vcpu); + kvm_vm_free(vm); test_pmu_config_disable(guest_code); From patchwork Wed Nov 9 20:14:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Aaron Lewis X-Patchwork-Id: 13038025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 05D29C4332F for ; Wed, 9 Nov 2022 20:15:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229447AbiKIUPE (ORCPT ); Wed, 9 Nov 2022 15:15:04 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58772 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229781AbiKIUPD (ORCPT ); Wed, 9 Nov 2022 15:15:03 -0500 Received: from mail-pl1-x649.google.com (mail-pl1-x649.google.com [IPv6:2607:f8b0:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6E43825D0 for ; Wed, 9 Nov 2022 12:15:01 -0800 (PST) Received: by mail-pl1-x649.google.com with SMTP id e13-20020a17090301cd00b001871e6f8714so14055632plh.14 for ; Wed, 09 Nov 2022 12:15:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=z2CdJd7Q9NowRMj6wBDiEUkRafY2LBF54oqtjmQdaOE=; b=rEUQ5C1T60jwm7nedO6pED9lL5hLWyjNrqoCSmmZ9mDcYNbVKkyqW5pIeJEwkUM4wi rgxL2W+y/TJGyVATcMzbvVhTRhkCfodlo957qJHBOAuo4loaV7tv0IxSc8U9wk/6ybvI SIq2KCiXEZOuTu6FnRvq0Y9+QM6oPCAqJsJbPAGsA5kGZSx4h5JpuE3sKCiRIXEjlqPP Pqh4eU5cdRxgIyoIlJb5793+73BTUfnr+v56tpyKgfxrdNDUo3luZEwl9cQb/dakvsQU YbYOchz9QQb3js+zHPoc+c9OKg3w/o14jQ8HfT+Z7Zdjc49FKTUkj1Vl/uEKfYpNe4Pk kLbw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=z2CdJd7Q9NowRMj6wBDiEUkRafY2LBF54oqtjmQdaOE=; b=bU+L3WLagg6gDX980/VxVRGI5VYHXMPaHs5GSEXRMao1xx5rAS6MUq37sIXbdltH6E N/vPP4d3TvGnJm6Qi59OMWxbebMGojsqHr/zqZAs8n+9pKW36jgR8y4RZfSdgfraK89P g2pnZcCI/+yUyQCs6/CdkpHfvcK0yPuASzy742RVIjHHCVwn+FKV4ibUIShmhR30+Xh0 lihA7NNgBLeTq96gCrRs4gCTK6ah8Im7v8N+KF3dTJNxwboxtmBsBceBWVViU6nHGYrW /2dykB7Z0oqnJQFh0FJbJQQ121q4Xd+N+Qqq9IL5qmnRqYMNqimSDZ0olqQNVDHxA4uP X6oQ== X-Gm-Message-State: ANoB5plpG3S9ABavCF6TaLRWJ+SRxPENXsM76sb+lzY36B/oqMBbrwz5 D9QfavVM92CkshH1D+rQ7bdFRwDHEIyi6fElgQeuMjLYdXXbRwhoTkjAIDpgA2Rql5PjlVv5Xap KsMqfVxyDHK/xEu6/mLB1bkhOnjIVkYms/xoIzr8j2rYN6NqHNinbDWVwzuXzV0MhAE1u X-Google-Smtp-Source: AA0mqf5I9Or53ubQ0X/DS1ACOluM1BHZi7KdTwLomLhA/DrnI3QpXcHZX5yprwj3FC7+uM7y0QpdCY6W0I+bB9Lu X-Received: from aaronlewis.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:2675]) (user=aaronlewis job=sendgmr) by 2002:a17:902:ca0d:b0:188:9806:2e05 with SMTP id w13-20020a170902ca0d00b0018898062e05mr1386917pld.112.1668024900845; Wed, 09 Nov 2022 12:15:00 -0800 (PST) Date: Wed, 9 Nov 2022 20:14:44 +0000 In-Reply-To: <20221109201444.3399736-1-aaronlewis@google.com> Mime-Version: 1.0 References: <20221109201444.3399736-1-aaronlewis@google.com> X-Mailer: git-send-email 2.38.1.431.g37b22c650d-goog Message-ID: <20221109201444.3399736-8-aaronlewis@google.com> Subject: [PATCH v7 
7/7] selftests: kvm/x86: Test masked events From: Aaron Lewis To: kvm@vger.kernel.org Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add testing to show that a pmu event can be filtered with a generalized match on it's unit mask. These tests set up test cases to demonstrate various ways of filtering a pmu event that has multiple unit mask values. It does this by setting up the filter in KVM with the masked events provided, then enabling three pmu counters in the guest. The test then verifies that the pmu counters agree with which counters should be counting and which counters should be filtered for both a sparse filter list and a dense filter list. Signed-off-by: Aaron Lewis --- .../kvm/x86_64/pmu_event_filter_test.c | 349 +++++++++++++++++- 1 file changed, 347 insertions(+), 2 deletions(-) r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS); TEST_ASSERT(r != 0, "Invalid PMU Event Filter is expected to fail"); - e = KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(0xff, 0xff, 0xff, 0xf); + e = KVM_PMU_ENCODE_MASKED_ENTRY(0xff, 0xff, 0xff, 0xf); r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS); TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing"); } @@ -478,7 +814,7 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu) int main(int argc, char *argv[]) { void (*guest_code)(void); - struct kvm_vcpu *vcpu; + struct kvm_vcpu *vcpu, *vcpu2 = NULL; struct kvm_vm *vm; /* Tell stdout not to buffer its content */ @@ -506,6 +842,15 @@ int main(int argc, char *argv[]) test_not_member_deny_list(vcpu); test_not_member_allow_list(vcpu); + if (use_intel_pmu() && + supports_event_mem_inst_retired() && + num_gp_counters() >= 3) + vcpu2 = vm_vcpu_add(vm, 2, intel_masked_events_guest_code); + else if (use_amd_pmu()) + vcpu2 = vm_vcpu_add(vm, 2, amd_masked_events_guest_code); + + if (vcpu2) + test_masked_events(vcpu2); test_filter_ioctl(vcpu); kvm_vm_free(vm); diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c index 0750e2fa7a38..38578950e692 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c @@ -442,6 +442,342 @@ static bool use_amd_pmu(void) is_zen3(entry->eax)); } +/* + * "MEM_INST_RETIRED.ALL_LOADS", "MEM_INST_RETIRED.ALL_STORES", and + * "MEM_INST_RETIRED.ANY" from https://perfmon-events.intel.com/ + * supported on Intel Xeon processors: + * - Sapphire Rapids, Ice Lake, Cascade Lake, Skylake. 
+ */ +#define MEM_INST_RETIRED 0xD0 +#define MEM_INST_RETIRED_LOAD EVENT(MEM_INST_RETIRED, 0x81) +#define MEM_INST_RETIRED_STORE EVENT(MEM_INST_RETIRED, 0x82) +#define MEM_INST_RETIRED_LOAD_STORE EVENT(MEM_INST_RETIRED, 0x83) + +static bool supports_event_mem_inst_retired(void) +{ + uint32_t eax, ebx, ecx, edx; + + cpuid(1, &eax, &ebx, &ecx, &edx); + if (x86_family(eax) == 0x6) { + switch (x86_model(eax)) { + /* Sapphire Rapids */ + case 0x8F: + /* Ice Lake */ + case 0x6A: + /* Skylake */ + /* Cascade Lake */ + case 0x55: + return true; + } + } + + return false; +} + +static int num_gp_counters(void) +{ + const struct kvm_cpuid_entry2 *entry; + + entry = kvm_get_supported_cpuid_entry(0xa); + union cpuid10_eax eax = { .full = entry->eax }; + + return eax.split.num_counters; +} + +/* + * "LS Dispatch", from Processor Programming Reference + * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors, + * Preliminary Processor Programming Reference (PPR) for AMD Family + * 17h Model 31h, Revision B0 Processors, and Preliminary Processor + * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision + * B1 Processors Volume 1 of 2. + */ +#define LS_DISPATCH 0x29 +#define LS_DISPATCH_LOAD EVENT(LS_DISPATCH, BIT(0)) +#define LS_DISPATCH_STORE EVENT(LS_DISPATCH, BIT(1)) +#define LS_DISPATCH_LOAD_STORE EVENT(LS_DISPATCH, BIT(2)) + +#define INCLUDE_MASKED_ENTRY(event_select, mask, match) \ + KVM_PMU_ENCODE_MASKED_ENTRY(event_select, mask, match, false) +#define EXCLUDE_MASKED_ENTRY(event_select, mask, match) \ + KVM_PMU_ENCODE_MASKED_ENTRY(event_select, mask, match, true) + +struct perf_counter { + union { + uint64_t raw; + struct { + uint64_t loads:22; + uint64_t stores:22; + uint64_t loads_stores:20; + }; + }; +}; + +static uint64_t masked_events_guest_test(uint32_t msr_base) +{ + uint64_t ld0, ld1, st0, st1, ls0, ls1; + struct perf_counter c; + int val; + + /* + * The acutal value of the counters don't determine the outcome of + * the test. Only that they are zero or non-zero. 
+ */ + ld0 = rdmsr(msr_base + 0); + st0 = rdmsr(msr_base + 1); + ls0 = rdmsr(msr_base + 2); + + __asm__ __volatile__("movl $0, %[v];" + "movl %[v], %%eax;" + "incl %[v];" + : [v]"+m"(val) :: "eax"); + + ld1 = rdmsr(msr_base + 0); + st1 = rdmsr(msr_base + 1); + ls1 = rdmsr(msr_base + 2); + + c.loads = ld1 - ld0; + c.stores = st1 - st0; + c.loads_stores = ls1 - ls0; + + return c.raw; +} + +static void intel_masked_events_guest_code(void) +{ + uint64_t r; + + for (;;) { + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); + + wrmsr(MSR_P6_EVNTSEL0 + 0, ARCH_PERFMON_EVENTSEL_ENABLE | + ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_LOAD); + wrmsr(MSR_P6_EVNTSEL0 + 1, ARCH_PERFMON_EVENTSEL_ENABLE | + ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_STORE); + wrmsr(MSR_P6_EVNTSEL0 + 2, ARCH_PERFMON_EVENTSEL_ENABLE | + ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_LOAD_STORE); + + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x7); + + r = masked_events_guest_test(MSR_IA32_PMC0); + + GUEST_SYNC(r); + } +} + +static void amd_masked_events_guest_code(void) +{ + uint64_t r; + + for (;;) { + wrmsr(MSR_K7_EVNTSEL0, 0); + wrmsr(MSR_K7_EVNTSEL1, 0); + wrmsr(MSR_K7_EVNTSEL2, 0); + + wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE | + ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_LOAD); + wrmsr(MSR_K7_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE | + ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_STORE); + wrmsr(MSR_K7_EVNTSEL2, ARCH_PERFMON_EVENTSEL_ENABLE | + ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_LOAD_STORE); + + r = masked_events_guest_test(MSR_K7_PERFCTR0); + + GUEST_SYNC(r); + } +} + +static struct perf_counter run_masked_events_test(struct kvm_vcpu *vcpu, + const uint64_t masked_events[], + const int nmasked_events) +{ + struct kvm_pmu_event_filter *f; + struct perf_counter r; + + f = create_pmu_event_filter(masked_events, nmasked_events, + KVM_PMU_EVENT_ALLOW, + KVM_PMU_EVENT_FLAG_MASKED_EVENTS); + r.raw = test_with_filter(vcpu, f); + free(f); + + return r; +} + +/* Matches KVM_PMU_EVENT_FILTER_MAX_EVENTS in pmu.c */ +#define MAX_FILTER_EVENTS 300 +#define MAX_TEST_EVENTS 10 + +#define ALLOW_LOADS BIT(0) +#define ALLOW_STORES BIT(1) +#define ALLOW_LOADS_STORES BIT(2) + +struct masked_events_test { + uint64_t intel_events[MAX_TEST_EVENTS]; + uint64_t intel_event_end; + uint64_t amd_events[MAX_TEST_EVENTS]; + uint64_t amd_event_end; + const char *msg; + uint32_t flags; +}; + +/* + * These are the test cases for the masked events tests. + * + * For each test, the guest enables 3 PMU counters (loads, stores, + * loads + stores). The filter is then set in KVM with the masked events + * provided. The test then verifies that the counters agree with which + * ones should be counting and which ones should be filtered. 
+ */ +const struct masked_events_test test_cases[] = { + { + .intel_events = { + INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x81), + }, + .amd_events = { + INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(0)), + }, + .msg = "Only allow loads.", + .flags = ALLOW_LOADS, + }, { + .intel_events = { + INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x82), + }, + .amd_events = { + INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(1)), + }, + .msg = "Only allow stores.", + .flags = ALLOW_STORES, + }, { + .intel_events = { + INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x83), + }, + .amd_events = { + INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(2)), + }, + .msg = "Only allow loads + stores.", + .flags = ALLOW_LOADS_STORES, + }, { + .intel_events = { + INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0), + EXCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x83), + }, + .amd_events = { + INCLUDE_MASKED_ENTRY(LS_DISPATCH, ~(BIT(0) | BIT(1)), 0), + }, + .msg = "Only allow loads and stores.", + .flags = ALLOW_LOADS | ALLOW_STORES, + }, { + .intel_events = { + INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0), + EXCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x82), + }, + .amd_events = { + INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0), + EXCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(1)), + }, + .msg = "Only allow loads and loads + stores.", + .flags = ALLOW_LOADS | ALLOW_LOADS_STORES + }, { + .intel_events = { + INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFE, 0x82), + }, + .amd_events = { + INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0), + EXCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(0)), + }, + .msg = "Only allow stores and loads + stores.", + .flags = ALLOW_STORES | ALLOW_LOADS_STORES + }, { + .intel_events = { + INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0), + }, + .amd_events = { + INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0), + }, + .msg = "Only allow loads, stores, and loads + stores.", + .flags = ALLOW_LOADS | ALLOW_STORES | ALLOW_LOADS_STORES + }, +}; + +static int append_test_events(const struct masked_events_test *test, + uint64_t *events, int nevents) +{ + const uint64_t *evts; + int i; + + evts = use_intel_pmu() ? test->intel_events : test->amd_events; + for (i = 0; i < MAX_TEST_EVENTS; i++) { + if (evts[i] == 0) + break; + + events[nevents + i] = evts[i]; + } + + return nevents + i; +} + +static bool bool_eq(bool a, bool b) +{ + return a == b; +} + +static void run_masked_events_tests(struct kvm_vcpu *vcpu, uint64_t *events, + int nevents) +{ + int ntests = ARRAY_SIZE(test_cases); + struct perf_counter c; + int i, n; + + for (i = 0; i < ntests; i++) { + const struct masked_events_test *test = &test_cases[i]; + + /* Do any test case events overflow MAX_TEST_EVENTS? 
*/ + assert(test->intel_event_end == 0); + assert(test->amd_event_end == 0); + + n = append_test_events(test, events, nevents); + + c = run_masked_events_test(vcpu, events, n); + TEST_ASSERT(bool_eq(c.loads, test->flags & ALLOW_LOADS) && + bool_eq(c.stores, test->flags & ALLOW_STORES) && + bool_eq(c.loads_stores, + test->flags & ALLOW_LOADS_STORES), + "%s loads: %u, stores: %u, loads + stores: %u", + test->msg, c.loads, c.stores, c.loads_stores); + } +} + +static void add_dummy_events(uint64_t *events, int nevents) +{ + int i; + + for (i = 0; i < nevents; i++) { + int event_select = i % 0xFF; + bool exclude = ((i % 4) == 0); + + if (event_select == MEM_INST_RETIRED || + event_select == LS_DISPATCH) + event_select++; + + events[i] = KVM_PMU_ENCODE_MASKED_ENTRY(event_select, 0, + 0, exclude); + } +} + +static void test_masked_events(struct kvm_vcpu *vcpu) +{ + int nevents = MAX_FILTER_EVENTS - MAX_TEST_EVENTS; + uint64_t events[MAX_FILTER_EVENTS]; + + /* Run the test cases against a sparse PMU event filter. */ + run_masked_events_tests(vcpu, events, 0); + + /* Run the test cases against a dense PMU event filter. */ + add_dummy_events(events, MAX_FILTER_EVENTS); + run_masked_events_tests(vcpu, events, nevents); +} + static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events, int nevents, uint32_t flags) { @@ -470,7 +806,7 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
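Note for readers reviewing this patch without the rest of the series in front of them: the test cases above rely on the masked-event matching rules introduced earlier in the series. The sketch below is a stand-alone, userspace-only approximation of those rules as I understand them, not KVM code and not the real KVM_PMU_ENCODE_MASKED_ENTRY bit layout (the struct fields and function names here are purely illustrative). It reproduces the "Only allow loads and stores" case, i.e. include(mask=0x7C, match=0) plus exclude(mask=0xFF, match=0x83) on MEM_INST_RETIRED, to show why unit masks 0x81 and 0x82 match the filter while 0x83 does not.

/* Illustrative sketch of masked-event matching; not the KVM uAPI encoding. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct masked_event {
	uint16_t event_select;	/* event select this entry applies to */
	uint8_t mask;		/* which unit mask bits to compare */
	uint8_t match;		/* value the masked unit mask must equal */
	bool exclude;		/* true: entry removes events from the match */
};

static bool entry_matches(const struct masked_event *e,
			  uint16_t event_select, uint8_t unit_mask)
{
	return e->event_select == event_select &&
	       (unit_mask & e->mask) == e->match;
}

/* An event matches the filter if an include entry matches and no exclude entry does. */
static bool filter_matches(const struct masked_event *events, int nevents,
			   uint16_t event_select, uint8_t unit_mask)
{
	bool included = false;
	int i;

	for (i = 0; i < nevents; i++) {
		if (!entry_matches(&events[i], event_select, unit_mask))
			continue;
		if (events[i].exclude)
			return false;
		included = true;
	}

	return included;
}

int main(void)
{
	/* "Only allow loads and stores": include umask bits 0/1 only, then exclude 0x83. */
	const struct masked_event filter[] = {
		{ .event_select = 0xD0, .mask = 0x7C, .match = 0x00 },
		{ .event_select = 0xD0, .mask = 0xFF, .match = 0x83, .exclude = true },
	};

	printf("0x81 -> %d\n", filter_matches(filter, 2, 0xD0, 0x81)); /* 1: loads allowed */
	printf("0x82 -> %d\n", filter_matches(filter, 2, 0xD0, 0x82)); /* 1: stores allowed */
	printf("0x83 -> %d\n", filter_matches(filter, 2, 0xD0, 0x83)); /* 0: loads+stores excluded */
	return 0;
}

With an allow-list filter, a matching event is permitted to count and a non-matching event is filtered, which is exactly what the loads/stores/loads+stores counters in the guest code above are checking.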