From patchwork Fri Mar 22 16:23:57 2019
X-Patchwork-Submitter: Julien Thierry
X-Patchwork-Id: 10866223
From: Julien Thierry
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 2/9] arm64: perf: Remove PMU locking
Date: Fri, 22 Mar 2019 16:23:57 +0000
Message-Id: <1553271844-49003-3-git-send-email-julien.thierry@arm.com>
In-Reply-To: <1553271844-49003-1-git-send-email-julien.thierry@arm.com>
References: <1553271844-49003-1-git-send-email-julien.thierry@arm.com>
Cc: mark.rutland@arm.com, Julien Thierry, peterz@infradead.org,
 Catalin Marinas, will.deacon@arm.com, acme@kernel.org,
 alexander.shishkin@linux.intel.com, mingo@redhat.com,
 namhyung@kernel.org, jolsa@redhat.com

Since the PMU driver uses direct register accesses for counter
setup/manipulation, locking around these operations is no longer needed.

For operations that can be called with interrupts enabled, preemption
still needs to be disabled to ensure the PMU is programmed on the
expected CPU and the task is not migrated mid-programming.

Signed-off-by: Julien Thierry
Cc: Will Deacon
Cc: Mark Rutland
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Arnaldo Carvalho de Melo
Cc: Alexander Shishkin
Cc: Jiri Olsa
Cc: Namhyung Kim
Cc: Catalin Marinas
---
 arch/arm64/kernel/perf_event.c | 32 ++++----------------------------
 1 file changed, 4 insertions(+), 28 deletions(-)

-- 
1.9.1

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 63d45e2..bae4242 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -659,15 +659,10 @@ static inline u32 armv8pmu_getreset_flags(void)
 
 static void armv8pmu_enable_event(struct perf_event *event)
 {
-	unsigned long flags;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
 	/*
 	 * Enable counter and interrupt, and set the counter to count
 	 * the event that we're interested in.
 	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
 
 	/*
 	 * Disable counter
@@ -688,21 +683,10 @@ static void armv8pmu_enable_event(struct perf_event *event)
 	 * Enable counter
 	 */
 	armv8pmu_enable_event_counter(event);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static void armv8pmu_disable_event(struct perf_event *event)
 {
-	unsigned long flags;
-	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	/*
-	 * Disable counter and interrupt
-	 */
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
-
 	/*
 	 * Disable counter
 	 */
@@ -712,30 +696,22 @@ static void armv8pmu_disable_event(struct perf_event *event)
 	 * Disable interrupt for this counter
 	 */
 	armv8pmu_disable_event_irq(event);
-
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
 static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 {
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	preempt_disable();
 	/* Enable all counters */
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+	preempt_enable();
 }
 
 static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 {
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	preempt_disable();
 	/* Disable all counters */
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+	preempt_enable();
 }
 
 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
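
As a minimal sketch of the pattern the commit message describes (not part of
the patch, and assuming a kernel build context): the only real kernel API
relied on below is preempt_disable()/preempt_enable(); the hyp_pmcr_read()/
hyp_pmcr_write() helpers, the HYP_PMCR_E bit and the fake register variable
are hypothetical stand-ins for the driver's armv8pmu_pmcr_read()/
armv8pmu_pmcr_write() system-register accessors and ARMV8_PMU_PMCR_E.

/*
 * Sketch only. Programming the current CPU's PMU control register needs no
 * lock once nothing touches it cross-CPU, but a caller that may run with
 * interrupts (and thus preemption) enabled must still pin itself to one CPU
 * for the read-modify-write sequence.
 */
#include <linux/preempt.h>
#include <linux/types.h>

#define HYP_PMCR_E	0x1		/* hypothetical "enable" bit, cf. ARMV8_PMU_PMCR_E */

static u32 hyp_fake_pmcr;		/* stands in for the per-CPU PMCR register */

static u32 hyp_pmcr_read(void)		/* hypothetical accessor, cf. armv8pmu_pmcr_read() */
{
	return hyp_fake_pmcr;
}

static void hyp_pmcr_write(u32 val)	/* hypothetical accessor, cf. armv8pmu_pmcr_write() */
{
	hyp_fake_pmcr = val;
}

static void hyp_pmu_start(void)
{
	preempt_disable();		/* no migration between the read and the write */
	hyp_pmcr_write(hyp_pmcr_read() | HYP_PMCR_E);
	preempt_enable();
}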