From patchwork Tue Apr 5 15:53:42 2022
X-Patchwork-Submitter: "Souza, Jose"
X-Patchwork-Id: 12801744
From: José Roberto de Souza
To: intel-gfx@lists.freedesktop.org
Date: Tue, 5 Apr 2022 08:53:42 -0700
Message-Id: <20220405155344.47219-1-jose.souza@intel.com>
Subject: [Intel-gfx] [PATCH CI 1/3] drm/i915/display/psr: Set partial frame enable when forcing full frame fetch

Following up on what was done in commit 804f46885317 ("drm/i915/psr: Set
"SF Partial Frame Enable" also on full update"), also set the partial frame
enable bit when psr_force_hw_tracking_exit() is called.

Also, since PSR2_MAN_TRK_CTL is a double-buffered register, doing a RMW on
it is not a good idea, so set the man_trk_ctl_enable_bit() that is required
on TGL as well and do a plain register write instead.
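For illustration, the double-buffering concern can be reduced to a tiny
stand-alone C model: with a read-modify-write the programmed value depends on
whatever already sits in the register (for example stale selective-fetch
region bits), while a full write of every required bit is self-contained. The
register model and bit names below are made-up stand-ins, not i915
definitions.

/*
 * Toy model: programming a double-buffered tracking register.
 * A RMW carries whatever is already in the register into the next frame;
 * a full write of every required bit does not depend on current contents.
 * All names and bit positions are illustrative only.
 */
#include <stdint.h>
#include <stdio.h>

#define MAN_TRK_ENABLE        (1u << 31)
#define MAN_TRK_PARTIAL_FRAME (1u << 30)
#define MAN_TRK_SINGLE_FULL   (1u << 29)
#define MAN_TRK_REGION_MASK   0x3ffu   /* stand-in for region address bits */

static uint32_t man_trk_ctl;           /* pretend MMIO register */

static uint32_t reg_read(void)       { return man_trk_ctl; }
static void reg_write(uint32_t val)  { man_trk_ctl = val; }

int main(void)
{
        /* Selective-fetch region bits left over from a previous commit. */
        man_trk_ctl = MAN_TRK_ENABLE | MAN_TRK_PARTIAL_FRAME | 0x155;

        /* RMW: stale region bits ride along into the next frame. */
        reg_write(reg_read() | MAN_TRK_SINGLE_FULL);
        printf("rmw  : %#010x (stale region bits %#x kept)\n",
               man_trk_ctl, man_trk_ctl & MAN_TRK_REGION_MASK);

        /* Full write of every required bit, as the patch does. */
        reg_write(MAN_TRK_ENABLE | MAN_TRK_PARTIAL_FRAME | MAN_TRK_SINGLE_FULL);
        printf("write: %#010x\n", man_trk_ctl);
        return 0;
}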
v2:
 - not doing a RMW

v3:
 - removing the inline from functions that return PSR2_MAN_TRK_CTL bits

Reviewed-by: Jouni Högander
Cc: Jouni Högander
Cc: Mika Kahola
Signed-off-by: José Roberto de Souza
---
 drivers/gpu/drm/i915/display/intel_psr.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
index 80002ca6a6ebe..6e3ae2c4430c7 100644
--- a/drivers/gpu/drm/i915/display/intel_psr.c
+++ b/drivers/gpu/drm/i915/display/intel_psr.c
@@ -1436,14 +1436,19 @@ void intel_psr_resume(struct intel_dp *intel_dp)
 	mutex_unlock(&psr->lock);
 }
 
-static inline u32 man_trk_ctl_single_full_frame_bit_get(struct drm_i915_private *dev_priv)
+static u32 man_trk_ctl_enable_bit_get(struct drm_i915_private *dev_priv)
+{
+	return IS_ALDERLAKE_P(dev_priv) ? 0 : PSR2_MAN_TRK_CTL_ENABLE;
+}
+
+static u32 man_trk_ctl_single_full_frame_bit_get(struct drm_i915_private *dev_priv)
 {
 	return IS_ALDERLAKE_P(dev_priv) ?
 	       ADLP_PSR2_MAN_TRK_CTL_SF_SINGLE_FULL_FRAME :
 	       PSR2_MAN_TRK_CTL_SF_SINGLE_FULL_FRAME;
 }
 
-static inline u32 man_trk_ctl_partial_frame_bit_get(struct drm_i915_private *dev_priv)
+static u32 man_trk_ctl_partial_frame_bit_get(struct drm_i915_private *dev_priv)
 {
 	return IS_ALDERLAKE_P(dev_priv) ?
 	       ADLP_PSR2_MAN_TRK_CTL_SF_PARTIAL_FRAME_UPDATE :
@@ -1455,9 +1460,11 @@ static void psr_force_hw_tracking_exit(struct intel_dp *intel_dp)
 {
 	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
 	if (intel_dp->psr.psr2_sel_fetch_enabled)
-		intel_de_rmw(dev_priv,
-			     PSR2_MAN_TRK_CTL(intel_dp->psr.transcoder), 0,
-			     man_trk_ctl_single_full_frame_bit_get(dev_priv));
+		intel_de_write(dev_priv,
+			       PSR2_MAN_TRK_CTL(intel_dp->psr.transcoder),
+			       man_trk_ctl_enable_bit_get(dev_priv) |
+			       man_trk_ctl_partial_frame_bit_get(dev_priv) |
+			       man_trk_ctl_single_full_frame_bit_get(dev_priv));
 
 	/*
 	 * Display WA #0884: skl+
@@ -1554,10 +1561,7 @@ static void psr2_man_trk_ctl_calc(struct intel_crtc_state *crtc_state,
 {
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
-	u32 val = 0;
-
-	if (!IS_ALDERLAKE_P(dev_priv))
-		val = PSR2_MAN_TRK_CTL_ENABLE;
+	u32 val = man_trk_ctl_enable_bit_get(dev_priv);
 
 	/* SF partial frame enable has to be set even on full update */
 	val |= man_trk_ctl_partial_frame_bit_get(dev_priv);

From patchwork Tue Apr 5 15:53:43 2022
X-Patchwork-Submitter: "Souza, Jose"
X-Patchwork-Id: 12801745
From: José Roberto de Souza
To: intel-gfx@lists.freedesktop.org
Date: Tue, 5 Apr 2022 08:53:43 -0700
Message-Id: <20220405155344.47219-2-jose.souza@intel.com>
In-Reply-To: <20220405155344.47219-1-jose.souza@intel.com>
References: <20220405155344.47219-1-jose.souza@intel.com>
Subject: [Intel-gfx] [PATCH CI 2/3] drm/i915/display/psr: Lock and unlock PSR around pipe updates

Frontbuffer rendering and page flips can race with each other, and this can
potentially cause issues with PSR2 selective fetch. Because pipe/crtc updates
are time-sensitive, we can't grab the PSR lock between
intel_pipe_update_start() and intel_pipe_update_end(). So add the lock and
unlock functions and their calls here; the proper PSR2 selective fetch
handling will come in a separate patch. (A sketch of the resulting call order
follows this patch.)

v2:
 - fixed new functions documentation

Reviewed-by: Jouni Högander
Cc: Jouni Högander
Cc: Mika Kahola
Signed-off-by: José Roberto de Souza
---
 drivers/gpu/drm/i915/display/intel_crtc.c |  6 +-
 drivers/gpu/drm/i915/display/intel_psr.c  | 69 ++++++++++++++++++++---
 drivers/gpu/drm/i915/display/intel_psr.h  |  5 +-
 3 files changed, 70 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_crtc.c b/drivers/gpu/drm/i915/display/intel_crtc.c
index f655c16228776..a5439182d5ae4 100644
--- a/drivers/gpu/drm/i915/display/intel_crtc.c
+++ b/drivers/gpu/drm/i915/display/intel_crtc.c
@@ -507,6 +507,8 @@ void intel_pipe_update_start(struct intel_crtc_state *new_crtc_state)
 					      VBLANK_EVASION_TIME_US);
 	max = vblank_start - 1;
 
+	intel_psr_lock(new_crtc_state);
+
 	if (min <= 0 || max <= 0)
 		goto irq_disable;
 
@@ -518,7 +520,7 @@ void intel_pipe_update_start(struct intel_crtc_state *new_crtc_state)
 	 * VBL interrupts will start the PSR exit and prevent a PSR
 	 * re-entry as well.
 	 */
-	intel_psr_wait_for_idle(new_crtc_state);
+	intel_psr_wait_for_idle_locked(new_crtc_state);
 
 	local_irq_disable();
 
@@ -683,6 +685,8 @@ void intel_pipe_update_end(struct intel_crtc_state *new_crtc_state)
 
 	local_irq_enable();
 
+	intel_psr_unlock(new_crtc_state);
+
 	if (intel_vgpu_active(dev_priv))
 		return;
 
diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
index 6e3ae2c4430c7..9517074cd097e 100644
--- a/drivers/gpu/drm/i915/display/intel_psr.c
+++ b/drivers/gpu/drm/i915/display/intel_psr.c
@@ -1548,10 +1548,19 @@ void intel_psr2_program_plane_sel_fetch(struct intel_plane *plane,
 void intel_psr2_program_trans_man_trk_ctl(const struct intel_crtc_state *crtc_state)
 {
 	struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
+	struct intel_encoder *encoder;
 
 	if (!crtc_state->enable_psr2_sel_fetch)
 		return;
 
+	for_each_intel_encoder_mask_with_psr(&dev_priv->drm, encoder,
+					     crtc_state->uapi.encoder_mask) {
+		struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
+
+		lockdep_assert_held(&intel_dp->psr.lock);
+		break;
+	}
+
 	intel_de_write(dev_priv, PSR2_MAN_TRK_CTL(crtc_state->cpu_transcoder),
 		       crtc_state->psr2_man_track_ctl);
 }
@@ -1919,13 +1928,13 @@ static int _psr1_ready_for_pipe_update_locked(struct intel_dp *intel_dp)
 }
 
 /**
- * intel_psr_wait_for_idle - wait for PSR be ready for a pipe update
+ * intel_psr_wait_for_idle_locked - wait for PSR be ready for a pipe update
  * @new_crtc_state: new CRTC state
  *
  * This function is expected to be called from pipe_update_start() where it is
 * not expected to race with PSR enable or disable.
  */
-void intel_psr_wait_for_idle(const struct intel_crtc_state *new_crtc_state)
+void intel_psr_wait_for_idle_locked(const struct intel_crtc_state *new_crtc_state)
 {
 	struct drm_i915_private *dev_priv = to_i915(new_crtc_state->uapi.crtc->dev);
 	struct intel_encoder *encoder;
@@ -1938,12 +1947,10 @@ void intel_psr_wait_for_idle(const struct intel_crtc_state *new_crtc_state)
 		struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
 		int ret;
 
-		mutex_lock(&intel_dp->psr.lock);
+		lockdep_assert_held(&intel_dp->psr.lock);
 
-		if (!intel_dp->psr.enabled) {
-			mutex_unlock(&intel_dp->psr.lock);
+		if (!intel_dp->psr.enabled)
 			continue;
-		}
 
 		if (intel_dp->psr.psr2_enabled)
 			ret = _psr2_ready_for_pipe_update_locked(intel_dp);
@@ -1952,8 +1959,6 @@ void intel_psr_wait_for_idle(const struct intel_crtc_state *new_crtc_state)
 		if (ret)
 			drm_err(&dev_priv->drm,
 				"PSR wait timed out, atomic update may fail\n");
-
-		mutex_unlock(&intel_dp->psr.lock);
 	}
 }
 
@@ -2444,3 +2449,51 @@ bool intel_psr_enabled(struct intel_dp *intel_dp)
 
 	return ret;
 }
+
+/**
+ * intel_psr_lock - grab PSR lock
+ * @crtc_state: the crtc state
+ *
+ * This is initially meant to be used by around CRTC update, when
+ * vblank sensitive registers are updated and we need grab the lock
+ * before it to avoid vblank evasion.
+ */
+void intel_psr_lock(const struct intel_crtc_state *crtc_state)
+{
+	struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
+	struct intel_encoder *encoder;
+
+	if (!crtc_state->has_psr)
+		return;
+
+	for_each_intel_encoder_mask_with_psr(&i915->drm, encoder,
+					     crtc_state->uapi.encoder_mask) {
+		struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
+
+		mutex_lock(&intel_dp->psr.lock);
+		break;
+	}
+}
+
+/**
+ * intel_psr_unlock - release PSR lock
+ * @crtc_state: the crtc state
+ *
+ * Release the PSR lock that was held during pipe update.
+ */
+void intel_psr_unlock(const struct intel_crtc_state *crtc_state)
+{
+	struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
+	struct intel_encoder *encoder;
+
+	if (!crtc_state->has_psr)
+		return;
+
+	for_each_intel_encoder_mask_with_psr(&i915->drm, encoder,
+					     crtc_state->uapi.encoder_mask) {
+		struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
+
+		mutex_unlock(&intel_dp->psr.lock);
+		break;
+	}
+}
diff --git a/drivers/gpu/drm/i915/display/intel_psr.h b/drivers/gpu/drm/i915/display/intel_psr.h
index f6526d9ccfdc6..2ac3a46cccc50 100644
--- a/drivers/gpu/drm/i915/display/intel_psr.h
+++ b/drivers/gpu/drm/i915/display/intel_psr.h
@@ -41,7 +41,7 @@ void intel_psr_get_config(struct intel_encoder *encoder,
 			  struct intel_crtc_state *pipe_config);
 void intel_psr_irq_handler(struct intel_dp *intel_dp, u32 psr_iir);
 void intel_psr_short_pulse(struct intel_dp *intel_dp);
-void intel_psr_wait_for_idle(const struct intel_crtc_state *new_crtc_state);
+void intel_psr_wait_for_idle_locked(const struct intel_crtc_state *new_crtc_state);
 bool intel_psr_enabled(struct intel_dp *intel_dp);
 int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
 				struct intel_crtc *crtc);
@@ -55,4 +55,7 @@ void intel_psr2_disable_plane_sel_fetch(struct intel_plane *plane,
 void intel_psr_pause(struct intel_dp *intel_dp);
 void intel_psr_resume(struct intel_dp *intel_dp);
 
+void intel_psr_lock(const struct intel_crtc_state *crtc_state);
+void intel_psr_unlock(const struct intel_crtc_state *crtc_state);
+
 #endif /* __INTEL_PSR_H__ */
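The lock/unlock pair added above is taken outside the callers' time-critical
section; a stand-alone sketch of the call order the patch establishes around
a pipe update (the *_stub functions below are simplified stand-ins for
illustration, not the i915 implementations):

/*
 * Sketch of the ordering: take the PSR lock before the vblank-evasion
 * window (where sleeping is not allowed), keep it across the register
 * programming, and drop it after the update.
 */
#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t psr_lock = PTHREAD_MUTEX_INITIALIZER;

static void intel_psr_lock_stub(void)
{
        pthread_mutex_lock(&psr_lock);          /* may sleep: done before irqs go off */
        puts("psr lock taken");
}

static void intel_psr_wait_for_idle_locked_stub(void)
{
        puts("wait for PSR idle (lock already held, no per-call lock/unlock)");
}

static void intel_psr_unlock_stub(void)
{
        puts("psr lock released");
        pthread_mutex_unlock(&psr_lock);
}

static void pipe_update_start_stub(void)
{
        intel_psr_lock_stub();
        intel_psr_wait_for_idle_locked_stub();
        puts("irqs off, vblank evasion window");
}

static void pipe_update_end_stub(void)
{
        puts("irqs back on");
        intel_psr_unlock_stub();
}

int main(void)
{
        pipe_update_start_stub();
        puts("program double-buffered registers (e.g. PSR2_MAN_TRK_CTL)");
        pipe_update_end_stub();
        return 0;
}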
From patchwork Tue Apr 5 15:53:44 2022
X-Patchwork-Submitter: "Souza, Jose"
X-Patchwork-Id: 12801746
From: José Roberto de Souza
To: intel-gfx@lists.freedesktop.org
Date: Tue, 5 Apr 2022 08:53:44 -0700
Message-Id: <20220405155344.47219-3-jose.souza@intel.com>
In-Reply-To: <20220405155344.47219-1-jose.souza@intel.com>
References: <20220405155344.47219-1-jose.souza@intel.com>
Subject: [Intel-gfx] [PATCH CI 3/3] drm/i915/display/psr: Use continuous full frame to handle frontbuffer invalidations

Instead of exiting PSR when a frontbuffer invalidation happens, we can enable
the PSR2 selective fetch continuous full frame (CFF), which keeps the panel
updated as if PSR were disabled but without actually leaving PSR. Then, as
soon as the frontbuffer flush happens, we can disable continuous full frame
and go back to doing selective fetches, which is much quicker than the path
that re-enables PSR, as that one waits a few frames before actually
activating PSR again.

This approach has also proven to fix some glitches found on Alderlake-P when
there are a lot of invalidations happening together with page flips.

Some may ask why it writes to CURSURFLIVE(): that is the mechanism the
hardware team provided us to poke the display into handling PSR updates, and
it has been used since display 9. (A simplified model of the resulting
invalidate/flush flow follows the patch.)

v2:
 - handling possible race conditions between frontbuffer rendering and page
   flips

Reviewed-by: Jouni Högander
Cc: Khaled Almahallawy
Cc: Shawn C Lee
Cc: Jouni Högander
Cc: Mika Kahola
Signed-off-by: José Roberto de Souza
---
 .../drm/i915/display/intel_display_types.h |  1 +
 drivers/gpu/drm/i915/display/intel_psr.c   | 88 ++++++++++++++++---
 2 files changed, 77 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
index 96024f7d839d4..cfd042117b109 100644
--- a/drivers/gpu/drm/i915/display/intel_display_types.h
+++ b/drivers/gpu/drm/i915/display/intel_display_types.h
@@ -1525,6 +1525,7 @@ struct intel_psr {
 	bool colorimetry_support;
 	bool psr2_enabled;
 	bool psr2_sel_fetch_enabled;
+	bool psr2_sel_fetch_cff_enabled;
 	bool req_psr2_sdp_prior_scanline;
 	u8 sink_sync_latency;
 	ktime_t last_entry_attempt;
diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
index 9517074cd097e..5a55010a9b2f7 100644
--- a/drivers/gpu/drm/i915/display/intel_psr.c
+++ b/drivers/gpu/drm/i915/display/intel_psr.c
@@ -1221,6 +1221,7 @@ static void intel_psr_enable_locked(struct intel_dp *intel_dp,
 	intel_dp->psr.dc3co_exit_delay = val;
 	intel_dp->psr.dc3co_exitline = crtc_state->dc3co_exitline;
 	intel_dp->psr.psr2_sel_fetch_enabled = crtc_state->enable_psr2_sel_fetch;
+	intel_dp->psr.psr2_sel_fetch_cff_enabled = false;
 	intel_dp->psr.req_psr2_sdp_prior_scanline =
 		crtc_state->req_psr2_sdp_prior_scanline;
 
@@ -1455,6 +1456,13 @@ static u32 man_trk_ctl_partial_frame_bit_get(struct drm_i915_private *dev_priv)
 	       PSR2_MAN_TRK_CTL_SF_PARTIAL_FRAME_UPDATE;
 }
 
+static u32 man_trk_ctl_continuos_full_frame(struct drm_i915_private *dev_priv)
+{
+	return IS_ALDERLAKE_P(dev_priv) ?
+	       ADLP_PSR2_MAN_TRK_CTL_SF_CONTINUOS_FULL_FRAME :
+	       PSR2_MAN_TRK_CTL_SF_CONTINUOS_FULL_FRAME;
+}
+
 static void psr_force_hw_tracking_exit(struct intel_dp *intel_dp)
 {
 	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
@@ -1558,6 +1566,8 @@ void intel_psr2_program_trans_man_trk_ctl(const struct intel_crtc_state *crtc_st
 		struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
 
 		lockdep_assert_held(&intel_dp->psr.lock);
+		if (intel_dp->psr.psr2_sel_fetch_cff_enabled)
+			return;
 		break;
 	}
 
@@ -2135,6 +2145,27 @@ static void intel_psr_work(struct work_struct *work)
 	mutex_unlock(&intel_dp->psr.lock);
 }
 
+static void _psr_invalidate_handle(struct intel_dp *intel_dp)
+{
+	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+
+	if (intel_dp->psr.psr2_sel_fetch_enabled) {
+		u32 val;
+
+		if (intel_dp->psr.psr2_sel_fetch_cff_enabled)
+			return;
+
+		val = man_trk_ctl_enable_bit_get(dev_priv) |
+		      man_trk_ctl_partial_frame_bit_get(dev_priv) |
+		      man_trk_ctl_continuos_full_frame(dev_priv);
+		intel_de_write(dev_priv, PSR2_MAN_TRK_CTL(intel_dp->psr.transcoder), val);
+		intel_de_write(dev_priv, CURSURFLIVE(intel_dp->psr.pipe), 0);
+		intel_dp->psr.psr2_sel_fetch_cff_enabled = true;
+	} else {
+		intel_psr_exit(intel_dp);
+	}
+}
+
 /**
  * intel_psr_invalidate - Invalidade PSR
  * @dev_priv: i915 device
@@ -2171,7 +2202,7 @@ void intel_psr_invalidate(struct drm_i915_private *dev_priv,
 		intel_dp->psr.busy_frontbuffer_bits |= pipe_frontbuffer_bits;
 
 		if (pipe_frontbuffer_bits)
-			intel_psr_exit(intel_dp);
+			_psr_invalidate_handle(intel_dp);
 
 		mutex_unlock(&intel_dp->psr.lock);
 	}
@@ -2203,6 +2234,42 @@ tgl_dc3co_flush_locked(struct intel_dp *intel_dp, unsigned int frontbuffer_bits,
 			 intel_dp->psr.dc3co_exit_delay);
 }
 
+static void _psr_flush_handle(struct intel_dp *intel_dp)
+{
+	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+
+	if (intel_dp->psr.psr2_sel_fetch_enabled) {
+		if (intel_dp->psr.psr2_sel_fetch_cff_enabled) {
+			/* can we turn CFF off? */
+			if (intel_dp->psr.busy_frontbuffer_bits == 0) {
+				u32 val = man_trk_ctl_enable_bit_get(dev_priv) |
+					  man_trk_ctl_partial_frame_bit_get(dev_priv) |
+					  man_trk_ctl_single_full_frame_bit_get(dev_priv);
+
+				/*
+				 * turn continuous full frame off and do a single
+				 * full frame
+				 */
+				intel_de_write(dev_priv, PSR2_MAN_TRK_CTL(intel_dp->psr.transcoder),
+					       val);
+				intel_de_write(dev_priv, CURSURFLIVE(intel_dp->psr.pipe), 0);
+				intel_dp->psr.psr2_sel_fetch_cff_enabled = false;
+			}
+		} else {
+			/*
+			 * continuous full frame is disabled, only a single full
+			 * frame is required
+			 */
+			psr_force_hw_tracking_exit(intel_dp);
+		}
+	} else {
+		psr_force_hw_tracking_exit(intel_dp);
+
+		if (!intel_dp->psr.active && !intel_dp->psr.busy_frontbuffer_bits)
+			schedule_work(&intel_dp->psr.work);
+	}
+}
+
 /**
  * intel_psr_flush - Flush PSR
  * @dev_priv: i915 device
@@ -2240,25 +2307,22 @@ void intel_psr_flush(struct drm_i915_private *dev_priv,
 		 * we have to ensure that the PSR is not activated until
 		 * intel_psr_resume() is called.
 		 */
-		if (intel_dp->psr.paused) {
-			mutex_unlock(&intel_dp->psr.lock);
-			continue;
-		}
+		if (intel_dp->psr.paused)
+			goto unlock;
 
 		if (origin == ORIGIN_FLIP ||
 		    (origin == ORIGIN_CURSOR_UPDATE &&
 		     !intel_dp->psr.psr2_sel_fetch_enabled)) {
 			tgl_dc3co_flush_locked(intel_dp, frontbuffer_bits, origin);
-			mutex_unlock(&intel_dp->psr.lock);
-			continue;
+			goto unlock;
 		}
 
-		/* By definition flush = invalidate + flush */
-		if (pipe_frontbuffer_bits)
-			psr_force_hw_tracking_exit(intel_dp);
+		if (pipe_frontbuffer_bits == 0)
+			goto unlock;
 
-		if (!intel_dp->psr.active && !intel_dp->psr.busy_frontbuffer_bits)
-			schedule_work(&intel_dp->psr.work);
+		/* By definition flush = invalidate + flush */
+		_psr_flush_handle(intel_dp);
+unlock:
 		mutex_unlock(&intel_dp->psr.lock);
 	}
 }
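A stand-alone model of the invalidate/flush behaviour this last patch
describes, reduced to the continuous-full-frame (CFF) state machine; the
flags and helpers below are simplified stand-ins for illustration, not the
i915 code:

/*
 * Toy model: an invalidation turns CFF on (panel keeps updating without
 * leaving PSR); the last flush turns CFF off again with a single full
 * frame, after which selective fetches resume.
 */
#include <stdbool.h>
#include <stdio.h>

static bool sel_fetch_enabled = true;
static bool cff_enabled;
static unsigned int busy_frontbuffer_bits;

static void invalidate(unsigned int bits)
{
        busy_frontbuffer_bits |= bits;
        if (sel_fetch_enabled && !cff_enabled) {
                puts("write MAN_TRK_CTL: enable | partial frame | continuous full frame");
                puts("poke CURSURFLIVE so the hardware picks the change up");
                cff_enabled = true;
        }
}

static void flush(unsigned int bits)
{
        busy_frontbuffer_bits &= ~bits;
        if (!sel_fetch_enabled)
                return;
        if (cff_enabled && busy_frontbuffer_bits == 0) {
                puts("write MAN_TRK_CTL: enable | partial frame | single full frame");
                puts("poke CURSURFLIVE; selective fetches resume next frame");
                cff_enabled = false;
        }
}

int main(void)
{
        invalidate(0x1);   /* frontbuffer rendering starts: CFF turned on   */
        invalidate(0x2);   /* further invalidations: CFF already on, no-op  */
        flush(0x1);        /* still busy: CFF stays on                      */
        flush(0x2);        /* last flush: CFF off, single full frame        */
        return 0;
}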