From patchwork Thu Aug 1 15:22:45 2019
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 11070899
From: Robin Murphy
To: will.deacon@arm.com, mark.rutland@arm.com
Subject: [PATCH v2 2/2] perf/smmuv3: Validate groups for global filtering
Date: Thu, 1 Aug 2019 16:22:45 +0100
Message-Id: 
In-Reply-To: <85f2db579f90f0a89d6c6bb4e53191e9ad172805.1564672242.git.robin.murphy@arm.com>
References: <85f2db579f90f0a89d6c6bb4e53191e9ad172805.1564672242.git.robin.murphy@arm.com>
X-Mailer: git-send-email 2.21.0.dirty
Cc: linux-arm-kernel@lists.infradead.org, shameerali.kolothum.thodi@huawei.com

With global filtering, it becomes possible for users to construct
self-contradictory groups with conflicting filters. Make sure we cover
that when initially validating events.

Signed-off-by: Robin Murphy
---
v2:
 - generalise to reusable event-checking helpers
 - don't forget the group leader (again)

This seems like a reasonable compromise for now vs. rewriting half the
driver just to make a 'fake PMU' approach viable.

 drivers/perf/arm_smmuv3_pmu.c | 47 +++++++++++++++++++++++++----------
 1 file changed, 34 insertions(+), 13 deletions(-)

diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c
index c65c197b52a7..9110d7d86b81 100644
--- a/drivers/perf/arm_smmuv3_pmu.c
+++ b/drivers/perf/arm_smmuv3_pmu.c
@@ -113,8 +113,6 @@ struct smmu_pmu {
 	u64 counter_mask;
 	u32 options;
 	bool global_filter;
-	u32 global_filter_span;
-	u32 global_filter_sid;
 };
 
 #define to_smmu_pmu(p) (container_of(p, struct smmu_pmu, pmu))
@@ -260,6 +258,19 @@ static void smmu_pmu_set_event_filter(struct perf_event *event,
 	smmu_pmu_set_smr(smmu_pmu, idx, sid);
 }
 
+static bool smmu_pmu_check_global_filter(struct perf_event *curr,
+					 struct perf_event *new)
+{
+	if (get_filter_enable(new) != get_filter_enable(curr))
+		return false;
+
+	if (!get_filter_enable(new))
+		return true;
+
+	return get_filter_span(new) == get_filter_span(curr) &&
+	       get_filter_stream_id(new) == get_filter_stream_id(curr);
+}
+
 static int smmu_pmu_apply_event_filter(struct smmu_pmu *smmu_pmu,
 				       struct perf_event *event, int idx)
 {
@@ -279,17 +290,14 @@ static int smmu_pmu_apply_event_filter(struct smmu_pmu *smmu_pmu,
 	}
 
 	/* Requested settings same as current global settings*/
-	if (span == smmu_pmu->global_filter_span &&
-	    sid == smmu_pmu->global_filter_sid)
+	idx = find_first_bit(smmu_pmu->used_counters, num_ctrs);
+	if (idx == num_ctrs ||
+	    smmu_pmu_check_global_filter(smmu_pmu->events[idx], event)) {
+		smmu_pmu_set_event_filter(event, 0, span, sid);
 		return 0;
+	}
 
-	if (!bitmap_empty(smmu_pmu->used_counters, num_ctrs))
-		return -EAGAIN;
-
-	smmu_pmu_set_event_filter(event, 0, span, sid);
-	smmu_pmu->global_filter_span = span;
-	smmu_pmu->global_filter_sid = sid;
-	return 0;
+	return -EAGAIN;
 }
 
 static int smmu_pmu_get_event_idx(struct smmu_pmu *smmu_pmu,
@@ -312,6 +320,19 @@ static int smmu_pmu_get_event_idx(struct smmu_pmu *smmu_pmu,
 	return idx;
 }
 
+static bool smmu_pmu_events_compatible(struct perf_event *curr,
+				       struct perf_event *new)
+{
+	if (new->pmu != curr->pmu)
+		return false;
+
+	if (to_smmu_pmu(new->pmu)->global_filter &&
+	    !smmu_pmu_check_global_filter(curr, new))
+		return false;
+
+	return true;
+}
+
 /*
  * Implementation of abstract pmu functionality required by
  * the core perf events code.
@@ -349,7 +370,7 @@ static int smmu_pmu_event_init(struct perf_event *event)
 
 	/* Don't allow groups with mixed PMUs, except for s/w events */
 	if (!is_software_event(event->group_leader)) {
-		if (event->group_leader->pmu != event->pmu)
+		if (!smmu_pmu_events_compatible(event->group_leader, event))
 			return -EINVAL;
 
 		if (++group_num_events > smmu_pmu->num_counters)
@@ -360,7 +381,7 @@ static int smmu_pmu_event_init(struct perf_event *event)
 		if (is_software_event(sibling))
 			continue;
 
-		if (sibling->pmu != event->pmu)
+		if (!smmu_pmu_events_compatible(sibling, event))
 			return -EINVAL;
 
 		if (++group_num_events > smmu_pmu->num_counters)
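
Not part of the patch, but for reference, a rough userspace sketch of the kind of
self-contradictory group this change now rejects with EINVAL at event_init time
(previously the conflict would only surface later, when the group failed to
schedule with -EAGAIN). The PMU type number, event ID and the config1 bit layout
used here (filter_enable at bit 33, filter_stream_id in bits [31:0]) are
assumptions for illustration; a real tool would read them from the PMCG's sysfs
type, events/ and format/ entries.

/*
 * Illustration only: open two SMMUv3 PMCG events in one group with
 * conflicting stream ID filters. On a PMU that only supports global
 * filtering, the second perf_event_open() is expected to fail.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
			   int cpu, int group_fd, unsigned long flags)
{
	return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr = { 0 };
	int leader, sibling;

	attr.size = sizeof(attr);
	attr.type = 24;				/* assumed PMCG type; read from sysfs */
	attr.config = 0x1;			/* assumed event ID */
	attr.config1 = (1ULL << 33) | 0x10;	/* filter_enable, stream ID 0x10 */

	/* Uncore PMU: pid must be -1, cpu taken from the PMCG's cpumask (0 assumed). */
	leader = perf_event_open(&attr, -1, 0, -1, 0);
	if (leader < 0) {
		perror("leader");
		return 1;
	}

	/* Same event, but a conflicting stream ID filter (0x20). */
	attr.config1 = (1ULL << 33) | 0x20;
	sibling = perf_event_open(&attr, -1, 0, leader, 0);
	if (sibling < 0)
		perror("sibling (EINVAL expected with global filtering)");
	else
		close(sibling);

	close(leader);
	return 0;
}

The equivalent perf-tool experiment would be grouping two smmuv3_pmcg events with
filter_enable=1 but different filter_stream_id values, along the lines of the
examples in Documentation/perf/arm_smmuv3_pmu.rst.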