From patchwork Fri Mar  8 12:07:42 2019
X-Patchwork-Submitter: Andrew Murray
X-Patchwork-Id: 10844583
From: Andrew Murray
To: Christoffer Dall, Marc Zyngier, Catalin Marinas, Will Deacon,
 Mark Rutland
Subject: [PATCH v11 4/8] arm64: arm_pmu: Add !VHE support for
 exclude_host/exclude_guest attributes
Date: Fri, 8 Mar 2019 12:07:42 +0000
Message-Id: <20190308120746.56897-5-andrew.murray@arm.com>
In-Reply-To: <20190308120746.56897-1-andrew.murray@arm.com>
References: <20190308120746.56897-1-andrew.murray@arm.com>
Cc: Julien Thierry, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, Suzuki K Poulose

Add support for the :G and :H attributes in perf by handling the
exclude_host/exclude_guest event attributes.

We notify KVM of the counters that we wish to be enabled or disabled on
guest entry/exit, and thus defer the starting and stopping of those
events to the hypervisor switch code rather than acting on their event
attributes here.

With !VHE we switch the counters between host and guest at EL2. When
using :G we are able to stop counters from counting host events on the
boundaries of guest entry/exit by filtering out EL2 for exclude_host.
However, when using !exclude_hv there is a small blackout window at
guest entry/exit where host events are not captured.

Signed-off-by: Andrew Murray
---
 arch/arm64/kernel/perf_event.c | 43 ++++++++++++++++++++++++++++------
 1 file changed, 36 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 1c717965b3ff..64f02a9fd7cd 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -26,6 +26,7 @@
 
 #include <linux/acpi.h>
 #include <linux/clocksource.h>
+#include <linux/kvm_host.h>
 #include <linux/of.h>
 #include <linux/perf/arm_pmu.h>
 #include <linux/platform_device.h>
@@ -528,11 +529,21 @@ static inline int armv8pmu_enable_counter(int idx)
 
 static inline void armv8pmu_enable_event_counter(struct perf_event *event)
 {
+	struct perf_event_attr *attr = &event->attr;
 	int idx = event->hw.idx;
+	u32 counter_bits = BIT(ARMV8_IDX_TO_COUNTER(idx));
 
-	armv8pmu_enable_counter(idx);
 	if (armv8pmu_event_is_chained(event))
-		armv8pmu_enable_counter(idx - 1);
+		counter_bits |= BIT(ARMV8_IDX_TO_COUNTER(idx - 1));
+
+	kvm_set_pmu_events(counter_bits, attr);
+
+	/* We rely on the hypervisor switch code to enable guest counters */
+	if (!kvm_pmu_counter_deferred(attr)) {
+		armv8pmu_enable_counter(idx);
+		if (armv8pmu_event_is_chained(event))
+			armv8pmu_enable_counter(idx - 1);
+	}
 }
 
 static inline int armv8pmu_disable_counter(int idx)
@@ -545,11 +556,21 @@ static inline int armv8pmu_disable_counter(int idx)
 static inline void armv8pmu_disable_event_counter(struct perf_event *event)
 {
 	struct hw_perf_event *hwc = &event->hw;
+	struct perf_event_attr *attr = &event->attr;
 	int idx = hwc->idx;
+	u32 counter_bits = BIT(ARMV8_IDX_TO_COUNTER(idx));
 
 	if (armv8pmu_event_is_chained(event))
-		armv8pmu_disable_counter(idx - 1);
-	armv8pmu_disable_counter(idx);
+		counter_bits |= BIT(ARMV8_IDX_TO_COUNTER(idx - 1));
+
+	kvm_clr_pmu_events(counter_bits);
+
+	/* We rely on the hypervisor switch code to disable guest counters */
+	if (!kvm_pmu_counter_deferred(attr)) {
+		if (armv8pmu_event_is_chained(event))
+			armv8pmu_disable_counter(idx - 1);
+		armv8pmu_disable_counter(idx);
+	}
 }
 
 static inline int armv8pmu_enable_intens(int idx)
@@ -829,11 +850,16 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
 		if (!attr->exclude_kernel)
 			config_base |= ARMV8_PMU_INCLUDE_EL2;
 	} else {
-		if (attr->exclude_kernel)
-			config_base |= ARMV8_PMU_EXCLUDE_EL1;
-		if (!attr->exclude_hv)
+		if (!attr->exclude_hv && !attr->exclude_host)
 			config_base |= ARMV8_PMU_INCLUDE_EL2;
 	}
+
+	/*
+	 * Filter out !VHE kernels and guest kernels
+	 */
+	if (attr->exclude_kernel)
+		config_base |= ARMV8_PMU_EXCLUDE_EL1;
+
 	if (attr->exclude_user)
 		config_base |= ARMV8_PMU_EXCLUDE_EL0;
 
@@ -863,6 +889,9 @@ static void armv8pmu_reset(void *info)
 		armv8pmu_disable_intens(idx);
 	}
 
+	/* Clear the counters we flip at guest entry/exit */
+	kvm_clr_pmu_events(U32_MAX);
+
 	/*
 	 * Initialize & Reset PMNC. Request overflow interrupt for
 	 * 64 bit cycle counter but cheat in armv8pmu_write_counter().
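
For reference, kvm_pmu_counter_deferred() used above is provided by an
earlier patch in this series rather than by this diff. A minimal sketch
of the accessor this patch relies on (its exact body lives elsewhere in
the series and is an assumption here, not part of this patch):

	/*
	 * Sketch only: deferral test assumed from earlier in this series.
	 * On !VHE, events that exclude the host are left for the EL2
	 * switch code to enable/disable at guest entry/exit.
	 */
	static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
	{
		return (!has_vhe() && attr->exclude_host);
	}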
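
The hypervisor switch code that consumes the masks recorded by
kvm_set_pmu_events()/kvm_clr_pmu_events() also arrives later in this
series. A sketch of how the !VHE world switch could act on those masks
at guest entry (names such as struct kvm_pmu_events, events_host and
events_guest are assumptions based on the rest of the series, not part
of this patch):

	/* Sketch only: flip deferred counters when entering the guest at EL2 */
	static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt)
	{
		struct kvm_host_data *host;
		struct kvm_pmu_events *pmu;

		host = container_of(host_ctxt, struct kvm_host_data, host_ctxt);
		pmu = &host->pmu_events;

		/* Stop counters that must not count guest events (:H) */
		if (pmu->events_host)
			write_sysreg(pmu->events_host, pmcntenclr_el0);

		/* Start counters that must only count guest events (:G) */
		if (pmu->events_guest)
			write_sysreg(pmu->events_guest, pmcntenset_el0);

		return (pmu->events_host || pmu->events_guest);
	}

With the full series applied, an event opened as e.g. cycles:G would
then count only while the guest runs, modulo the small blackout window
at guest entry/exit described above.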