From patchwork Tue May 22 22:55:53 2018
X-Patchwork-Submitter: Joel Fernandes
X-Patchwork-Id: 10419707
X-Patchwork-Delegate: rjw@sisk.pl
From: Joel Fernandes
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", Viresh Kumar, "Rafael J.
Wysocki", Peter Zijlstra, Ingo Molnar, Patrick Bellasi, Juri Lelli, Luca Abeni, Todd Kjos, claudio@evidence.eu.com, kernel-team@android.com, linux-pm@vger.kernel.org
Subject: [PATCH v3] schedutil: Allow cpufreq requests to be made even when kthread kicked
Date: Tue, 22 May 2018 15:55:53 -0700
Message-Id: <20180522225553.69483-1-joel@joelfernandes.org>
X-Mailer: git-send-email 2.17.0.441.gb46fe60e1d-goog

From: "Joel Fernandes (Google)"

Currently there is a chance that a schedutil cpufreq update request is
dropped if a previous request is still pending. The pending request can
be delayed by scheduling delays in the irq_work and in the wakeup of the
schedutil governor kthread.

A particularly bad scenario: a schedutil request was just made, say to
reduce the CPU frequency, and then a newer request to increase the
frequency (even an urgent SCHED_DEADLINE frequency-increase request) is
dropped, even though the rate limits say it is OK to process a request.
This happens because of the way the work_in_progress flag is used.

This patch improves the situation by allowing new requests to be made
even while an old one is still being processed. Note that with this
approach, if an irq_work has already been issued, we just update
next_freq and do not queue another request, so no extra work is done to
make this happen.

Acked-by: Viresh Kumar
Acked-by: Juri Lelli
CC: Viresh Kumar
CC: Rafael J.
Wysocki
CC: Peter Zijlstra
CC: Ingo Molnar
CC: Patrick Bellasi
CC: Juri Lelli
CC: Luca Abeni
CC: Todd Kjos
CC: claudio@evidence.eu.com
CC: kernel-team@android.com
CC: linux-pm@vger.kernel.org
Signed-off-by: Joel Fernandes (Google)
---
Only commit log update, no code change in v2->v3.

 kernel/sched/cpufreq_schedutil.c | 34 ++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index e13df951aca7..5c482ec38610 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -92,9 +92,6 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
 	    !cpufreq_can_do_remote_dvfs(sg_policy->policy))
 		return false;
 
-	if (sg_policy->work_in_progress)
-		return false;
-
 	if (unlikely(sg_policy->need_freq_update)) {
 		sg_policy->need_freq_update = false;
 		/*
@@ -128,7 +125,7 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
 		policy->cur = next_freq;
 		trace_cpu_frequency(next_freq, smp_processor_id());
-	} else {
+	} else if (!sg_policy->work_in_progress) {
 		sg_policy->work_in_progress = true;
 		irq_work_queue(&sg_policy->irq_work);
 	}
@@ -291,6 +288,13 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 
 	ignore_dl_rate_limit(sg_cpu, sg_policy);
 
+	/*
+	 * For slow-switch systems, single policy requests can't run at the
+	 * moment if update is in progress, unless we acquire update_lock.
+	 */
+	if (sg_policy->work_in_progress)
+		return;
+
 	if (!sugov_should_update_freq(sg_policy, time))
 		return;
 
@@ -382,13 +386,27 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
 static void sugov_work(struct kthread_work *work)
 {
 	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
+	unsigned int freq;
+	unsigned long flags;
+
+	/*
+	 * Hold sg_policy->update_lock briefly to handle the case where:
+	 * if sg_policy->next_freq is read here and then updated by
+	 * sugov_update_shared() just before work_in_progress is set to false
+	 * here, we may miss queueing the new update.
+	 *
+	 * Note: If a work was queued after the update_lock is released,
+	 * sugov_work() will just be called again by the kthread_work code;
+	 * the request will be processed before the sugov thread sleeps.
+	 */
+	raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
+	freq = sg_policy->next_freq;
+	sg_policy->work_in_progress = false;
+	raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
 
 	mutex_lock(&sg_policy->work_lock);
-	__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
-				CPUFREQ_RELATION_L);
+	__cpufreq_driver_target(sg_policy->policy, freq, CPUFREQ_RELATION_L);
 	mutex_unlock(&sg_policy->work_lock);
-
-	sg_policy->work_in_progress = false;
 }
 
 static void sugov_irq_work(struct irq_work *irq_work)