From patchwork Thu Aug 22 14:42:19 2019
From: Raphael Gault <raphael.gault@arm.com>
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 6/7] arm64: perf: Enable pmu counter direct access for
 perf event on armv8
Date: Thu, 22 Aug 2019 15:42:19 +0100
Message-Id: <20190822144220.27860-7-raphael.gault@arm.com>
In-Reply-To: <20190822144220.27860-1-raphael.gault@arm.com>
References: <20190822144220.27860-1-raphael.gault@arm.com>
Cc: mark.rutland@arm.com, raph.gault+kdev@gmail.com, peterz@infradead.org,
 catalin.marinas@arm.com, will.deacon@arm.com, acme@kernel.org,
 Raphael Gault <raphael.gault@arm.com>, mingo@redhat.com

Keep track of events opened with direct access to the hardware counters
and modify the EL0 access permissions while they are open.

The strategy used here is the same as the one used on x86: every time an
event is mapped, the permissions are set if required. The atomic field
added to the mm_context keeps track of the number of such events mapped
and lets us de-activate the permissions once all of them are unmapped.

We also need to update the permissions in the context-switch code so that
tasks keep the right permissions.

Signed-off-by: Raphael Gault <raphael.gault@arm.com>
---
(A sketch of the expected userspace usage is appended after the diff.)

 arch/arm64/include/asm/mmu.h         |  6 ++++
 arch/arm64/include/asm/mmu_context.h |  2 ++
 arch/arm64/include/asm/perf_event.h  | 14 ++++++++
 arch/arm64/kernel/perf_event.c       |  1 +
 drivers/perf/arm_pmu.c               | 54 ++++++++++++++++++++++++++++
 5 files changed, 77 insertions(+)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index fd6161336653..88ed4466bd06 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -18,6 +18,12 @@
 typedef struct {
         atomic64_t      id;
+
+       /*
+        * non-zero if userspace has direct access to the
+        * hardware counters.
+        */
+       atomic_t        pmu_direct_access;
         void            *vdso;
         unsigned long   flags;
 } mm_context_t;
 
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 7ed0adb187a8..6e66ff940494 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -224,6 +225,7 @@
         }
 
         check_and_switch_context(next, cpu);
+        perf_switch_user_access(next);
 }
 
 static inline void
diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h
index 2bdbc79bbd01..ba58fa726631 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -8,6 +8,7 @@
 
 #include
 #include
+#include
 
 #define ARMV8_PMU_MAX_COUNTERS  32
 #define ARMV8_PMU_COUNTER_MASK  (ARMV8_PMU_MAX_COUNTERS - 1)
@@ -223,4 +224,17 @@ extern unsigned long perf_misc_flags(struct pt_regs *regs);
         (regs)->pstate = PSR_MODE_EL1h; \
 }
 
+static inline void perf_switch_user_access(struct mm_struct *mm)
+{
+        if (!IS_ENABLED(CONFIG_PERF_EVENTS))
+                return;
+
+        if (atomic_read(&mm->context.pmu_direct_access)) {
+                write_sysreg(ARMV8_PMU_USERENR_ER|ARMV8_PMU_USERENR_CR,
+                             pmuserenr_el0);
+        } else {
+                write_sysreg(0, pmuserenr_el0);
+        }
+}
+
 #endif
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index de9b001e8b7c..7de56f22d038 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -1285,6 +1285,7 @@ void arch_perf_update_userpage(struct perf_event *event,
          */
         freq = arch_timer_get_rate();
         userpg->cap_user_time = 1;
+        userpg->cap_user_rdpmc = !!(event->hw.flags & ARMPMU_EL0_RD_CNTR);
 
         clocks_calc_mult_shift(&userpg->time_mult, &shift, freq,
                                 NSEC_PER_SEC, 0);
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 2d06b8095a19..d0d3e523a4c4 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -25,6 +25,7 @@
 
 #include
 #include
+#include
 
 static DEFINE_PER_CPU(struct arm_pmu *, cpu_armpmu);
 static DEFINE_PER_CPU(int, cpu_irq);
@@ -778,6 +779,57 @@ static void cpu_pmu_destroy(struct arm_pmu *cpu_pmu)
                             &cpu_pmu->node);
 }
 
+static void refresh_pmuserenr(void *mm)
+{
+        perf_switch_user_access(mm);
+}
+
+static int check_homogeneous_cap(struct perf_event *event, struct mm_struct *mm)
+{
+        pr_info("checking HAS_HOMOGENEOUS_PMU");
+        if (!cpus_have_cap(ARM64_HAS_HOMOGENEOUS_PMU)) {
+                pr_info("Disable direct access (!HAS_HOMOGENEOUS_PMU)");
+                atomic_set(&mm->context.pmu_direct_access, 0);
+                on_each_cpu(refresh_pmuserenr, mm, 1);
+                event->hw.flags &= ~ARMPMU_EL0_RD_CNTR;
+                return 0;
+        }
+
+        return 1;
+}
+
+static void armpmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
+{
+        if (!(event->hw.flags & ARMPMU_EL0_RD_CNTR))
+                return;
+
+        /*
+         * This function relies on not being called concurrently in two
+         * tasks in the same mm. Otherwise one task could observe
+         * pmu_direct_access > 1 and return all the way back to
+         * userspace with user access disabled while another task is still
+         * doing on_each_cpu_mask() to enable user access.
+         *
+         * For now, this can't happen because all callers hold mmap_sem
+         * for write. If this changes, we'll need a different solution.
+         */
+        lockdep_assert_held_write(&mm->mmap_sem);
+
+        if (check_homogeneous_cap(event, mm) &&
+            atomic_inc_return(&mm->context.pmu_direct_access) == 1)
+                on_each_cpu(refresh_pmuserenr, mm, 1);
+}
+
+static void armpmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
+{
+        if (!(event->hw.flags & ARMPMU_EL0_RD_CNTR))
+                return;
+
+        if (check_homogeneous_cap(event, mm) &&
+            atomic_dec_and_test(&mm->context.pmu_direct_access))
+                on_each_cpu_mask(mm_cpumask(mm), refresh_pmuserenr, NULL, 1);
+}
+
 static struct arm_pmu *__armpmu_alloc(gfp_t flags)
 {
         struct arm_pmu *pmu;
@@ -799,6 +851,8 @@ static struct arm_pmu *__armpmu_alloc(gfp_t flags)
                 .pmu_enable     = armpmu_enable,
                 .pmu_disable    = armpmu_disable,
                 .event_init     = armpmu_event_init,
+                .event_mapped   = armpmu_event_mapped,
+                .event_unmapped = armpmu_event_unmapped,
                 .add            = armpmu_add,
                 .del            = armpmu_del,
                 .start          = armpmu_start,
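
Not part of the patch: a minimal userspace sketch of how a self-monitoring
task could consume cap_user_rdpmc once this is applied, following the
seqlock protocol documented for perf_event_mmap_page. The event attributes,
the "index - 1" counter numbering and the PMSELR_EL0/PMXEVCNTR_EL0 read
path are illustrative assumptions rather than anything this patch mandates;
error handling and the cycle-counter special case are omitted.

#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/*
 * EL0 read of a programmable event counter; only legal while the kernel
 * has set PMUSERENR_EL0.ER for this mm.
 */
static uint64_t read_evcntr(uint32_t hwidx)
{
        uint64_t val;

        asm volatile("msr pmselr_el0, %0" : : "r" ((uint64_t)hwidx));
        asm volatile("isb" : : : "memory");
        asm volatile("mrs %0, pmxevcntr_el0" : "=r" (val));
        return val;
}

int main(void)
{
        struct perf_event_attr attr = {
                .type = PERF_TYPE_HARDWARE,
                .size = sizeof(attr),
                .config = PERF_COUNT_HW_INSTRUCTIONS,
                .exclude_kernel = 1,
        };
        struct perf_event_mmap_page *pc;
        uint64_t count, offset;
        uint32_t seq, idx;
        int fd;

        fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0)
                return 1;

        /* Mapping the event page is what drives armpmu_event_mapped(). */
        pc = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, MAP_SHARED, fd, 0);
        if (pc == MAP_FAILED)
                return 1;

        /* Seqlock-style read, as documented in include/uapi/linux/perf_event.h. */
        do {
                seq = pc->lock;
                asm volatile("" : : : "memory");
                idx = pc->index;
                offset = pc->offset;
                count = 0;
                if (pc->cap_user_rdpmc && idx)
                        count = read_evcntr(idx - 1);   /* assumed numbering */
                asm volatile("" : : : "memory");
        } while (pc->lock != seq);

        printf("instructions: %llu\n", (unsigned long long)(count + offset));
        return 0;
}

mmap()/munmap() of the event page are what increment and decrement
mm->context.pmu_direct_access via the new event_mapped/event_unmapped
callbacks, so PMUSERENR_EL0.ER/CR are only set while at least one such
mapping exists in the mm.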