From patchwork Tue Jan 19 13:35:34 2016
From: Paulo Zanoni <paulo.r.zanoni@intel.com>
To: intel-gfx@lists.freedesktop.org
Date: Tue, 19 Jan 2016 11:35:34 -0200
Message-Id: <1453210558-7875-2-git-send-email-paulo.r.zanoni@intel.com>
In-Reply-To: <1453210558-7875-1-git-send-email-paulo.r.zanoni@intel.com>
References: <1453210558-7875-1-git-send-email-paulo.r.zanoni@intel.com>
Subject: [Intel-gfx] [PATCH 01/25] drm/i915/fbc: wait for a vblank instead of 50ms when enabling

Instead of waiting for 50ms, just wait until the next vblank, since
that is the minimum requirement. The whole FBC infrastructure is based
on vblanks, so waiting for X vblanks instead of X milliseconds is the
correct way to go. Besides, 50ms may be less than a single vblank
period on super slow modes that may or may not exist.

There are some small improvements in PC state residency (due to the
fact that we now wait about 16ms on the common modes instead of 50ms),
but the biggest advantage is still the correctness of being
vblank-based instead of time-based.

v2:
 - Rebase after changing the patch order.
 - Update the commit message.

v3:
 - Fix bogus vblank_get() call where vblank_count() was meant (Ville).
 - Don't forget to call drm_crtc_vblank_{get,put} (Chris, Ville).
 - Adjust the performance details in the commit message.

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h  |  2 +-
 drivers/gpu/drm/i915/intel_fbc.c | 43 ++++++++++++++++++++++++++++------------
 2 files changed, 31 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index af30148..33217a4 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -925,9 +925,9 @@ struct i915_fbc {
 
 	struct intel_fbc_work {
 		bool scheduled;
+		u32 scheduled_vblank;
 		struct work_struct work;
 		struct drm_framebuffer *fb;
-		unsigned long enable_jiffies;
 	} work;
 
 	const char *no_fbc_reason;
diff --git a/drivers/gpu/drm/i915/intel_fbc.c b/drivers/gpu/drm/i915/intel_fbc.c
index a1988a4..6b43ec3 100644
--- a/drivers/gpu/drm/i915/intel_fbc.c
+++ b/drivers/gpu/drm/i915/intel_fbc.c
@@ -381,7 +381,15 @@ static void intel_fbc_work_fn(struct work_struct *__work)
 		container_of(__work, struct drm_i915_private, fbc.work.work);
 	struct intel_fbc_work *work = &dev_priv->fbc.work;
 	struct intel_crtc *crtc = dev_priv->fbc.crtc;
-	int delay_ms = 50;
+	struct drm_vblank_crtc *vblank = &dev_priv->dev->vblank[crtc->pipe];
+
+	mutex_lock(&dev_priv->fbc.lock);
+	if (drm_crtc_vblank_get(&crtc->base)) {
+		DRM_ERROR("vblank not available for FBC on pipe %c\n",
+			  pipe_name(crtc->pipe));
+		goto out;
+	}
+	mutex_unlock(&dev_priv->fbc.lock);
 
 retry:
 	/* Delay the actual enabling to let pageflipping cease and the
@@ -390,24 +398,25 @@ retry:
 	 * vblank to pass after disabling the FBC before we attempt
 	 * to modify the control registers.
 	 *
-	 * A more complicated solution would involve tracking vblanks
-	 * following the termination of the page-flipping sequence
-	 * and indeed performing the enable as a co-routine and not
-	 * waiting synchronously upon the vblank.
-	 *
 	 * WaFbcWaitForVBlankBeforeEnable:ilk,snb
+	 *
+	 * It is also worth mentioning that since work->scheduled_vblank can be
+	 * updated multiple times by the other threads, hitting the timeout is
+	 * not an error condition. We'll just end up hitting the "goto retry"
+	 * case below.
 	 */
-	wait_remaining_ms_from_jiffies(work->enable_jiffies, delay_ms);
+	wait_event_timeout(vblank->queue,
+			   drm_crtc_vblank_count(&crtc->base) != work->scheduled_vblank,
+			   msecs_to_jiffies(50));
 
 	mutex_lock(&dev_priv->fbc.lock);
 
 	/* Were we cancelled? */
 	if (!work->scheduled)
-		goto out;
+		goto out_put;
 
 	/* Were we delayed again while this function was sleeping? */
-	if (time_after(work->enable_jiffies + msecs_to_jiffies(delay_ms),
-		       jiffies)) {
+	if (drm_crtc_vblank_count(&crtc->base) == work->scheduled_vblank) {
 		mutex_unlock(&dev_priv->fbc.lock);
 		goto retry;
 	}
@@ -415,9 +424,10 @@ retry:
 	if (crtc->base.primary->fb == work->fb)
 		intel_fbc_activate(work->fb);
 
-	work->scheduled = false;
-
+out_put:
+	drm_crtc_vblank_put(&crtc->base);
 out:
+	work->scheduled = false;
 	mutex_unlock(&dev_priv->fbc.lock);
 }
 
@@ -434,13 +444,20 @@ static void intel_fbc_schedule_activation(struct intel_crtc *crtc)
 
 	WARN_ON(!mutex_is_locked(&dev_priv->fbc.lock));
 
+	if (drm_crtc_vblank_get(&crtc->base)) {
+		DRM_ERROR("vblank not available for FBC on pipe %c\n",
+			  pipe_name(crtc->pipe));
+		return;
+	}
+
 	/* It is useless to call intel_fbc_cancel_work() in this function since
 	 * we're not releasing fbc.lock, so it won't have an opportunity to grab
 	 * it to discover that it was cancelled. So we just update the expected
 	 * jiffy count. */
 	work->fb = crtc->base.primary->fb;
 	work->scheduled = true;
-	work->enable_jiffies = jiffies;
+	work->scheduled_vblank = drm_crtc_vblank_count(&crtc->base);
+
+	drm_crtc_vblank_put(&crtc->base);
 
 	schedule_work(&work->work);
 }
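
For readers less familiar with the DRM vblank machinery used above, here is
the waiting idiom of this patch distilled into a stand-alone sketch. It is
illustrative only and not part of the patch: the helper name
wait_for_next_vblank() is made up, and the includes assume a v4.5-era kernel,
where the drm_crtc_vblank_*() declarations still lived in drmP.h.

/*
 * Illustrative sketch only: block until the CRTC's vblank counter has moved
 * away from @scheduled_vblank, or give up after a 50ms safety timeout.
 * The helper name and its signature are hypothetical; the DRM calls are real.
 */
#include <drm/drmP.h>
#include <linux/jiffies.h>
#include <linux/wait.h>

static int wait_for_next_vblank(struct drm_device *dev, struct drm_crtc *crtc,
				u32 scheduled_vblank)
{
	struct drm_vblank_crtc *vblank = &dev->vblank[drm_crtc_index(crtc)];
	int ret;

	/* Keep the vblank interrupt (and counter) enabled while we sleep. */
	ret = drm_crtc_vblank_get(crtc);
	if (ret)
		return ret;

	/*
	 * Timing out is not an error: the caller can re-read the counter and
	 * retry, which is exactly what intel_fbc_work_fn() does above.
	 */
	wait_event_timeout(vblank->queue,
			   drm_crtc_vblank_count(crtc) != scheduled_vblank,
			   msecs_to_jiffies(50));

	drm_crtc_vblank_put(crtc);
	return 0;
}

The point of the idiom is that drm_crtc_vblank_get() keeps the vblank
interrupt and its wait queue alive while we sleep, wait_event_timeout() wakes
us as soon as the counter differs from the value sampled at scheduling time,
and the 50ms timeout is only a safety net, not a failure path. Compared to
the old wait_remaining_ms_from_jiffies() approach, the delay is tied to what
the hardware actually requires (one vblank) rather than a fixed worst case.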