From patchwork Mon Feb 5 22:07:09 2018
X-Patchwork-Submitter: Andy Lutomirski
X-Patchwork-Id: 10201935
From: Andy Lutomirski <luto@kernel.org>
To: intel-gfx@lists.freedesktop.org, DRI <dri-devel@lists.freedesktop.org>
Cc: Dhinakaran Pandiyan, Andy Lutomirski
Subject: [PATCH] drm/i915: Improve PSR activation timing
Date: Mon, 5 Feb 2018 14:07:09 -0800
Message-Id:
X-Mailer: git-send-email 2.14.3

The current PSR code has two call sites that each schedule delayed work
to activate PSR. As far as I can tell, each call site intends to keep
PSR inactive for the given amount of time and then allow it to be
activated.

The call sites are:

 - intel_psr_enable(), which explicitly states in a comment that it's
   trying to keep PSR off for a short time after the display is
   initialized, as a workaround.

 - intel_psr_flush(). There isn't an explicit explanation, but the
   intent is presumably to keep PSR off until the display has been idle
   for 100ms.

The current code doesn't actually accomplish either of these goals.
Rather than keeping PSR inactive for the given amount of time, it will
schedule PSR for activation after the given time, with the earliest
target time among such requests winning.
In other words, if intel_psr_enable() is immediately followed by
intel_psr_flush(), then PSR will be activated after 100ms even if
intel_psr_enable() wanted a longer delay. And if the screen is being
constantly updated, so that intel_psr_flush() is called once per frame
at 60Hz, PSR will still be activated once every 100ms.

Rewrite the code so that it does what was intended. This adds a new
function, intel_psr_schedule(), which will enable PSR after the
requested time but no sooner.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 drivers/gpu/drm/i915/i915_debugfs.c |  9 +++--
 drivers/gpu/drm/i915/i915_drv.h     |  4 ++-
 drivers/gpu/drm/i915/intel_psr.c    | 69 ++++++++++++++++++++++++++++++++-----
 3 files changed, 71 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index c65e381b85f3..b67db93f905d 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -2663,8 +2663,13 @@ static int i915_edp_psr_status(struct seq_file *m, void *data)
 	seq_printf(m, "Active: %s\n", yesno(dev_priv->psr.active));
 	seq_printf(m, "Busy frontbuffer bits: 0x%03x\n",
 		   dev_priv->psr.busy_frontbuffer_bits);
-	seq_printf(m, "Re-enable work scheduled: %s\n",
-		   yesno(work_busy(&dev_priv->psr.work.work)));
+
+	if (timer_pending(&dev_priv->psr.activate_timer))
+		seq_printf(m, "Activate scheduled: yes, in %ldms\n",
+			   (long)(dev_priv->psr.earliest_activate - jiffies) *
+			   1000 / HZ);
+	else
+		seq_printf(m, "Re-enable scheduled: no\n");
 
 	if (HAS_DDI(dev_priv)) {
 		if (dev_priv->psr.psr2_support) {
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 46eb729b367d..c0fb7d65cda6 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1192,7 +1192,9 @@ struct i915_psr {
 	bool source_ok;
 	struct intel_dp *enabled;
 	bool active;
-	struct delayed_work work;
+	struct timer_list activate_timer;
+	struct work_struct activate_work;
+	unsigned long earliest_activate;
 	unsigned busy_frontbuffer_bits;
 	bool psr2_support;
 	bool aux_frame_sync;
diff --git a/drivers/gpu/drm/i915/intel_psr.c b/drivers/gpu/drm/i915/intel_psr.c
index 55ea5eb3b7df..333d90d4e5af 100644
--- a/drivers/gpu/drm/i915/intel_psr.c
+++ b/drivers/gpu/drm/i915/intel_psr.c
@@ -461,6 +461,30 @@ static void intel_psr_activate(struct intel_dp *intel_dp)
 	dev_priv->psr.active = true;
 }
 
+static void intel_psr_schedule(struct drm_i915_private *dev_priv,
+			       unsigned long min_wait_ms)
+{
+	unsigned long next;
+
+	lockdep_assert_held(&dev_priv->psr.lock);
+
+	/*
+	 * We update earliest_activate *and* call mod_timer() because it's
+	 * possible that intel_psr_work() has already been called and is
+	 * waiting for psr.lock. If that's the case, we don't want it
+	 * to immediately enable PSR.
+	 *
+	 * We also need to make sure that PSR is never activated earlier
+	 * than requested to avoid breaking intel_psr_enable()'s workaround
+	 * for pre-gen9 hardware.
+	 */
+	next = jiffies + msecs_to_jiffies(min_wait_ms);
+	if (time_after(next, dev_priv->psr.earliest_activate)) {
+		dev_priv->psr.earliest_activate = next;
+		mod_timer(&dev_priv->psr.activate_timer, next);
+	}
+}
+
 static void hsw_psr_enable_source(struct intel_dp *intel_dp,
 				  const struct intel_crtc_state *crtc_state)
 {
@@ -544,8 +568,7 @@ void intel_psr_enable(struct intel_dp *intel_dp,
 		 * - On HSW/BDW we get a recoverable frozen screen until
 		 *   next exit-activate sequence.
 		 */
-		schedule_delayed_work(&dev_priv->psr.work,
-				      msecs_to_jiffies(intel_dp->panel_power_cycle_delay * 5));
+		intel_psr_schedule(dev_priv, intel_dp->panel_power_cycle_delay * 5);
 	}
 
 unlock:
@@ -660,13 +683,14 @@ void intel_psr_disable(struct intel_dp *intel_dp,
 	dev_priv->psr.enabled = NULL;
 	mutex_unlock(&dev_priv->psr.lock);
 
-	cancel_delayed_work_sync(&dev_priv->psr.work);
+	cancel_work_sync(&dev_priv->psr.activate_work);
+	del_timer_sync(&dev_priv->psr.activate_timer);
 }
 
 static void intel_psr_work(struct work_struct *work)
 {
 	struct drm_i915_private *dev_priv =
-		container_of(work, typeof(*dev_priv), psr.work.work);
+		container_of(work, typeof(*dev_priv), psr.activate_work);
 	struct intel_dp *intel_dp = dev_priv->psr.enabled;
 	struct drm_crtc *crtc = dp_to_dig_port(intel_dp)->base.base.crtc;
 	enum pipe pipe = to_intel_crtc(crtc)->pipe;
@@ -676,6 +700,7 @@ static void intel_psr_work(struct work_struct *work)
 	 * PSR might take some time to get fully disabled
 	 * and be ready for re-enable.
 	 */
+
 	if (HAS_DDI(dev_priv)) {
 		if (dev_priv->psr.psr2_support) {
 			if (intel_wait_for_register(dev_priv,
@@ -712,6 +737,18 @@ static void intel_psr_work(struct work_struct *work)
 	if (!intel_dp)
 		goto unlock;
 
+
+	if (!time_after_eq(jiffies, dev_priv->psr.earliest_activate)) {
+		/*
+		 * We raced: intel_psr_schedule() tried to delay us, but
+		 * we were already in intel_psr_timer_fn() or already in
+		 * the workqueue. We can safely return -- the
+		 * intel_psr_schedule() call that put earliest_activate
+		 * in the future will have called mod_timer().
+		 */
+		goto unlock;
+	}
+
 	/*
 	 * The delayed work can race with an invalidate hence we need to
 	 * recheck. Since psr_flush first clears this and then reschedules we
@@ -725,6 +762,20 @@ static void intel_psr_work(struct work_struct *work)
 	mutex_unlock(&dev_priv->psr.lock);
 }
 
+static void intel_psr_timer_fn(struct timer_list *timer)
+{
+	struct drm_i915_private *dev_priv =
+		container_of(timer, typeof(*dev_priv), psr.activate_timer);
+
+	/*
+	 * We need a non-atomic context to activate PSR. Using
+	 * delayed_work wouldn't be an improvement -- delayed_work is
+	 * just a timer that schedules work when it fires, but there's
+	 * no equivalent of mod_timer() for delayed_work.
+	 */
+	schedule_work(&dev_priv->psr.activate_work);
+}
+
 static void intel_psr_exit(struct drm_i915_private *dev_priv)
 {
 	struct intel_dp *intel_dp = dev_priv->psr.enabled;
@@ -905,9 +956,8 @@ void intel_psr_flush(struct drm_i915_private *dev_priv,
 		intel_psr_exit(dev_priv);
 
 	if (!dev_priv->psr.active && !dev_priv->psr.busy_frontbuffer_bits)
-		if (!work_busy(&dev_priv->psr.work.work))
-			schedule_delayed_work(&dev_priv->psr.work,
-					      msecs_to_jiffies(100));
+		intel_psr_schedule(dev_priv, 100);
+
 	mutex_unlock(&dev_priv->psr.lock);
 }
 
@@ -951,7 +1001,8 @@ void intel_psr_init(struct drm_i915_private *dev_priv)
 		dev_priv->psr.link_standby = false;
 	}
 
-	INIT_DELAYED_WORK(&dev_priv->psr.work, intel_psr_work);
+	timer_setup(&dev_priv->psr.activate_timer, intel_psr_timer_fn, 0);
+	INIT_WORK(&dev_priv->psr.activate_work, intel_psr_work);
 	mutex_init(&dev_priv->psr.lock);
 
 	if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
@@ -967,4 +1018,6 @@ void intel_psr_init(struct drm_i915_private *dev_priv)
 		dev_priv->psr.activate = hsw_psr_activate;
 		dev_priv->psr.setup_vsc = hsw_psr_setup_vsc;
 	}
+
+	dev_priv->psr.earliest_activate = jiffies;
 }
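
A note for reviewers, not part of the patch: the heart of
intel_psr_schedule() is the rule "only ever push the activation deadline
further out, never pull it in". Below is a rough, standalone userspace
sketch of just that rule. Plain integers stand in for jiffies, and
psr_schedule()/earliest_activate here only loosely mirror the kernel
code (the real version uses time_after() to cope with jiffies wraparound
and arms a timer while holding psr.lock).

/* psr_schedule_sketch.c - illustrative only, not kernel code */
#include <stdio.h>

static long now;                /* stand-in for jiffies (starts at 0)    */
static long earliest_activate;  /* earliest time activation is permitted */

/* Ask that activation happen no sooner than min_wait ticks from now. */
static void psr_schedule(long min_wait)
{
	long next = now + min_wait;

	/* Only push the deadline further out; never pull it in. */
	if (next > earliest_activate)
		earliest_activate = next;
}

int main(void)
{
	psr_schedule(500);  /* cf. intel_psr_enable(): long workaround delay */
	psr_schedule(100);  /* cf. intel_psr_flush(): 100 ms of idle time    */

	/* The later deadline wins: prints t=500, not t=100. */
	printf("activation allowed at t=%ld\n", earliest_activate);
	return 0;
}

Compiled and run, this prints 500: the longer delay requested first is
still honored even though a shorter request arrives afterwards, which is
the behavior the commit message says the old scheduling code failed to
provide.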