From patchwork Wed Aug 28 21:24:45 2013
X-Patchwork-Submitter: Stephen Boyd
X-Patchwork-Id: 2851049
From: Stephen Boyd
To: Viresh Kumar
Cc: linux-kernel@vger.kernel.org, cpufreq@vger.kernel.org,
	linux-pm@vger.kernel.org, "Rafael J . Wysocki"
Subject: [PATCH v2] cpufreq: Don't use smp_processor_id() in preemptible context
Date: Wed, 28 Aug 2013 14:24:45 -0700
Message-Id: <1377725085-16798-1-git-send-email-sboyd@codeaurora.org>
In-Reply-To: <521E24CE.9090407@codeaurora.org>
X-Mailer: git-send-email 1.8.4
X-Mailing-List: linux-pm@vger.kernel.org

Workqueues are preemptible even if works are queued on them with
queue_work_on(). Let's use raw_smp_processor_id() here to silence the
warning.
 BUG: using smp_processor_id() in preemptible [00000000] code: kworker/3:2/674
 caller is gov_queue_work+0x28/0xb0
 CPU: 0 PID: 674 Comm: kworker/3:2 Tainted: G W 3.10.0 #30
 Workqueue: events od_dbs_timer
 [] (unwind_backtrace+0x0/0x11c) from [] (show_stack+0x10/0x14)
 [] (show_stack+0x10/0x14) from [] (debug_smp_processor_id+0xbc/0xf0)
 [] (debug_smp_processor_id+0xbc/0xf0) from [] (gov_queue_work+0x28/0xb0)
 [] (gov_queue_work+0x28/0xb0) from [] (od_dbs_timer+0x108/0x134)
 [] (od_dbs_timer+0x108/0x134) from [] (process_one_work+0x25c/0x444)
 [] (process_one_work+0x25c/0x444) from [] (worker_thread+0x200/0x344)
 [] (worker_thread+0x200/0x344) from [] (kthread+0xa0/0xb0)
 [] (kthread+0xa0/0xb0) from [] (ret_from_fork+0x14/0x3c)

Signed-off-by: Stephen Boyd
Acked-by: Viresh Kumar
---
 drivers/cpufreq/cpufreq_governor.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index b9b20fd..bfbcf9a 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -137,7 +137,14 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
 		return;
 
 	if (!all_cpus) {
-		__gov_queue_work(smp_processor_id(), dbs_data, delay);
+		/*
+		 * Use raw_smp_processor_id() to avoid preemptible warnings.
+		 * We know that this is only called with all_cpus == false from
+		 * works that have been queued with *_work_on() functions and
+		 * those works are canceled during CPU_DOWN_PREPARE so they
+		 * can't possibly run on any other CPU.
+		 */
+		__gov_queue_work(raw_smp_processor_id(), dbs_data, delay);
 	} else {
 		for_each_cpu(i, policy->cpus)
 			__gov_queue_work(i, dbs_data, delay);
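
For context (not part of the patch), here is a minimal sketch of the pattern
the new comment relies on: a work item queued with queue_work_on() is bound to
a specific CPU's worker pool, but the worker thread still runs with preemption
enabled, so smp_processor_id() inside the work function trips the
CONFIG_DEBUG_PREEMPT check even though the work cannot actually migrate. The
names below (example_work_fn, example_queue, target_cpu) are illustrative only.

	/* Illustrative sketch only -- not from cpufreq_governor.c. */
	#include <linux/workqueue.h>
	#include <linux/smp.h>
	#include <linux/printk.h>

	static void example_work_fn(struct work_struct *work)
	{
		/*
		 * smp_processor_id() here would warn under CONFIG_DEBUG_PREEMPT
		 * because worker threads are preemptible.  raw_smp_processor_id()
		 * returns the same value without the check; it is only safe
		 * because the work was pinned with queue_work_on() and is
		 * canceled before its CPU is taken down.
		 */
		int cpu = raw_smp_processor_id();

		pr_info("example work running on CPU %d\n", cpu);
	}

	static DECLARE_WORK(example_work, example_work_fn);

	static void example_queue(int target_cpu)
	{
		/* Queue the work on target_cpu's per-CPU worker pool. */
		queue_work_on(target_cpu, system_wq, &example_work);
	}

The same reasoning applies to gov_queue_work() above: when all_cpus is false it
is only reached from per-CPU works queued with *_work_on(), so the raw variant
reports the correct CPU without the preemption check.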