Message ID: CABVU7+s3FP0eyBKoauVbR4aPmHOAqbS1kUQM2N0fJ4JQvNz5YQ@mail.gmail.com (mailing list archive)
State: New, archived
On Thu, 2016-02-18 at 08:56 -0800, Rodrigo Vivi wrote:
> Imre, Patrik, do you know if I'm missing something or what I'm doing
> wrong with this power domain handler for vblanks to avoid DC states
> when we need a reliable frame counter in place.

The WARN is due to the spin_lock() in drm_vblank_enable(): you can't
call power domain functions in atomic context, due to the mutex the
power domain and runtime PM framework use.

--Imre

> Do you have better ideas?
>
> Thanks,
> Rodrigo.
>
> ---------- Forwarded message ----------
> From: Rodrigo Vivi <rodrigo.vivi@gmail.com>
> Date: Wed, Feb 17, 2016 at 3:14 PM
> Subject: Re: [Intel-gfx] [PATCH] drm/i915: Avoid vblank counter for gen9+
> To: Daniel Vetter <daniel@ffwll.ch>, Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>, intel-gfx <intel-gfx@lists.freedesktop.org>
>
> On Tue, Feb 16, 2016 at 7:50 AM, Daniel Vetter <daniel@ffwll.ch> wrote:
> > On Thu, Feb 11, 2016 at 09:00:47AM -0800, Rodrigo Vivi wrote:
> > > Framecounter register is read-only so DMC cannot restore it
> > > after exiting DC5 and DC6.
> > >
> > > Easiest way to go is to avoid the counter and use vblank
> > > interruptions for this platform and for all the following
> > > ones since DMC came to stay. At least while we can't change
> > > this register to read-write.
> > >
> > > Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> >
> > Now my comments also in public:
> > - Do we still get reasonable dc5 residency with this - it means we'll
> >   keep vblank irq running forever.
> >
> > - I'm a bit unclear on what exactly this fixes - have you tested that
> >   long-lasting vblank waits are still accurate? Just want to make sure
> >   we don't just paper over the issue and desktops can still get stuck
> >   waiting for a vblank.
>
> Apparently not... so please just ignore this patch for now... after a
> while with that patch I was seeing the issue again...
>
> > Just a bit surprised that the only problem is the framecounter, and
> > not that vblanks stop happening too.
> >
> > We need to also know these details for the proper fix, which will
> > involve grabbing power well references (might need a new one for
> > vblank interrupts) to make sure.
>
> Yeap, I liked this idea... so combining a power domain reference with
> a vblank count restore, once we know the DC off is blocked, we could
> work around this case... something like:
>
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index 25a8937..2b18778 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -2743,7 +2743,10 @@ static int gen8_enable_vblank(struct drm_device *dev, unsigned int pipe)
>  	struct drm_i915_private *dev_priv = dev->dev_private;
>  	unsigned long irqflags;
>
> +	intel_display_power_get(dev_priv, POWER_DOMAIN_VBLANK);
> +
>  	spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
> +	dev->vblank[pipe].last = g4x_get_vblank_counter(dev, pipe);
>  	bdw_enable_pipe_irq(dev_priv, pipe, GEN8_PIPE_VBLANK);
>  	spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
>
> @@ -2796,6 +2799,8 @@ static void gen8_disable_vblank(struct drm_device *dev, unsigned int pipe)
>  	spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
>  	bdw_disable_pipe_irq(dev_priv, pipe, GEN8_PIPE_VBLANK);
>  	spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
> +
> +	intel_display_power_put(dev_priv, POWER_DOMAIN_VBLANK);
>  }
>
> where POWER_DOMAIN_VBLANK is part of:
> #define SKL_DISPLAY_DC_OFF_POWER_DOMAINS ( \
> 	BIT(POWER_DOMAIN_VBLANK) | \
>
> However I have my dmesg flooded by:
>
> [   69.025562] BUG: sleeping function called from invalid context at drivers/base/power/runtime.c:955
> [   69.025576] in_atomic(): 1, irqs_disabled(): 1, pid: 995, name: Xorg
> [   69.025582] Preemption disabled at: [<ffffffff815a1fee>] drm_vblank_get+0x4e/0xd0
> [   69.025619] CPU: 3 PID: 995 Comm: Xorg Tainted: G     U  W    4.5.0-rc4+ #11
> [   69.025628] Hardware name: Intel Corporation Kabylake Client platform/Skylake U DDR3L RVP7, BIOS KBLSE2R1.R00.X019.B01.1512230743 12/23/2015
> [   69.025637]  0000000000000000 ffff88003f0bfbb0 ffffffff8148e983 0000000000000000
> [   69.025653]  ffff880085b04200 ffff88003f0bfbd0 ffffffff81133ece ffffffff81d77f23
> [   69.025667]  00000000000003bb ffff88003f0bfbf8 ffffffff81133f89 ffff88016913a098
> [   69.025680] Call Trace:
> [   69.025697]  [<ffffffff8148e983>] dump_stack+0x65/0x92
> [   69.025711]  [<ffffffff81133ece>] ___might_sleep+0x10e/0x180
> [   69.025722]  [<ffffffff81133f89>] __might_sleep+0x49/0x80
> [   69.025739]  [<ffffffff815d21d9>] __pm_runtime_resume+0x79/0x80
> [   69.025841]  [<ffffffffa005ebc8>] intel_runtime_pm_get+0x28/0x90 [i915]
> [   69.025924]  [<ffffffffa005ec49>] intel_display_power_get+0x19/0x50 [i915]
> [   69.025995]  [<ffffffffa004aea4>] gen8_enable_vblank+0x34/0xc0 [i915]
> [   69.026016]  [<ffffffff815a1f46>] drm_vblank_enable+0x76/0xd0
>
> Another thing I searched the spec for was an interrupt to tell us when
> we came back from DC5 or DC6, or got the power well re-enabled, so we
> would be able to restore the drm last counter... but I couldn't find
> any...
>
> Any other idea?
>
> > Cheers, Daniel
> >
> > > ---
> > >  drivers/gpu/drm/i915/i915_irq.c | 7 +++++--
> > >  1 file changed, 5 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> > > index 25a8937..c294a4b 100644
> > > --- a/drivers/gpu/drm/i915/i915_irq.c
> > > +++ b/drivers/gpu/drm/i915/i915_irq.c
> > > @@ -4556,7 +4556,10 @@ void intel_irq_init(struct drm_i915_private *dev_priv)
> > >
> > >  	pm_qos_add_request(&dev_priv->pm_qos, PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
> > >
> > > -	if (IS_GEN2(dev_priv)) {
> > > +	if (INTEL_INFO(dev_priv)->gen >= 9) {
> > > +		dev->max_vblank_count = 0;
> > > +		dev->driver->get_vblank_counter = g4x_get_vblank_counter;
> > > +	} else if (IS_GEN2(dev_priv)) {
> > >  		dev->max_vblank_count = 0;
> > >  		dev->driver->get_vblank_counter = i8xx_get_vblank_counter;
> > >  	} else if (IS_G4X(dev_priv) || INTEL_INFO(dev_priv)->gen >= 5) {
> > > @@ -4572,7 +4575,7 @@ void intel_irq_init(struct drm_i915_private *dev_priv)
> > >  	 * Gen2 doesn't have a hardware frame counter and so depends on
> > >  	 * vblank interrupts to produce sane vblank seuquence numbers.
> > >  	 */
> > > -	if (!IS_GEN2(dev_priv))
> > > +	if (!IS_GEN2(dev_priv) && !INTEL_INFO(dev_priv)->gen >= 9)
> > >  		dev->vblank_disable_immediate = true;
> > >
> > >  	dev->driver->get_vblank_timestamp = i915_get_vblank_timestamp;
> > > --
> > > 2.4.3
> > >
> > > _______________________________________________
> > > Intel-gfx mailing list
> > > Intel-gfx@lists.freedesktop.org
> > > https://lists.freedesktop.org/mailman/listinfo/intel-gfx
> >
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch
>
> --
> Rodrigo Vivi
> Blog: http://blog.vivi.eng.br
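[Editor's note] The constraint Imre describes above (a power-domain get may end up sleeping on a mutex via runtime PM, so it must not run while drm_vblank_enable() holds its spinlock) can be sketched with a toy userspace model of the kernel's might_sleep() debug check. Nothing below is real i915 or DRM code; all names are illustrative:

```c
#include <stdbool.h>

/* Toy model: while a spinlock is held, preemption is disabled (atomic
 * context), and any function that might sleep -- such as the mutex taken
 * by the power domain / runtime PM framework -- would trigger the
 * "BUG: sleeping function called from invalid context" warning. */
static int preempt_count; /* > 0 means atomic context */

static void toy_spin_lock(void)   { preempt_count++; }
static void toy_spin_unlock(void) { preempt_count--; }

/* Mirrors might_sleep(): returns false when sleeping would be a bug. */
static bool toy_may_sleep(void)
{
	return preempt_count == 0;
}

/* A power_get that may sleep is only legal outside the lock. */
static bool toy_power_get(void)
{
	return toy_may_sleep(); /* would WARN if called under toy_spin_lock() */
}
```

This is why the sketch in the patch above still WARNs even though intel_display_power_get() is placed before spin_lock_irqsave(): the caller, drm_vblank_enable(), already holds a spinlock around the ->enable_vblank() hook.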
On Thu, 2016-02-18 at 08:56 -0800, Rodrigo Vivi wrote:
> Imre, Patrik, do you know if I'm missing something or what I'm doing
> wrong with this power domain handler for vblanks to avoid DC states
> when we need a reliable frame counter in place.
>
> Do you have better ideas?

Would it be possible to check the DC5/6 entry counters whenever you try
to access the vblank counter, and consider it corrupted if they
increased since the last access? In this case you could adjust the
counter based on some timestamp difference.

--Imre

> Thanks,
> Rodrigo.
>
> ---------- Forwarded message ----------
> From: Rodrigo Vivi <rodrigo.vivi@gmail.com>
> Date: Wed, Feb 17, 2016 at 3:14 PM
> Subject: Re: [Intel-gfx] [PATCH] drm/i915: Avoid vblank counter for gen9+
> [...]
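[Editor's note] Imre's suggestion (distrust the hardware frame counter once a DC5/6 entry is observed, and fall back to a timestamp-based estimate) could be sketched as below. The struct and function names are hypothetical, not i915 symbols; in the driver the frame counter, timestamp and DC entry counts would come from hardware:

```c
#include <stdint.h>

/* Last trusted sample of the HW frame counter, with enough context to
 * detect corruption and to estimate elapsed frames from wall time. */
struct vblank_state {
	uint32_t last_frame;    /* last trusted HW frame counter value */
	uint64_t last_ts_us;    /* timestamp of that sample, microseconds */
	uint32_t last_dc_count; /* DC5+DC6 entry count at that sample */
};

/* Best guess for the number of frames elapsed since the last sample. */
static uint32_t frames_since(const struct vblank_state *st,
			     uint32_t hw_frame, uint64_t now_us,
			     uint32_t dc_count, uint32_t frame_us)
{
	/* No DC5/6 entry since the sample: the counter is still trusted
	 * (unsigned subtraction handles wraparound). */
	if (dc_count == st->last_dc_count)
		return hw_frame - st->last_frame;

	/* Counter possibly reset across DC5/6: estimate from elapsed
	 * time instead, rounding to the nearest whole frame. */
	return (uint32_t)((now_us - st->last_ts_us + frame_us / 2) / frame_us);
}
```

As Rodrigo notes in his reply, the entry counters alone can't reconstruct the lost frame count, which is why the fallback here switches entirely to the timestamp delta.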
On Mon, Feb 22, 2016 at 6:33 AM, Imre Deak <imre.deak@intel.com> wrote:
> On Thu, 2016-02-18 at 08:56 -0800, Rodrigo Vivi wrote:
>> Imre, Patrik, do you know if I'm missing something or what I'm doing
>> wrong with this power domain handler for vblanks to avoid DC states
>> when we need a reliable frame counter in place.
>>
>> Do you have better ideas?
>
> Would it be possible to check the DC5/6 entry counters whenever you try
> to access the vblank counter, and consider it corrupted if they
> increased since the last access? In this case you could adjust the
> counter based on some timestamp difference.

Imre, thanks a lot for the idea and for reviving this topic. I'd like to
know your opinion on the approach with patches 1, 2 and 3 that I
describe below.

Unfortunately we cannot use those counters for anything. They are just
entry counters, and every entry takes the vblank number back to zero:
if we waited 100 vblanks and DC entered, 100 -> 0; if we waited 1 vblank
and DC entered, 1 -> 0. So it seems unpredictable...

Well, I have this tree:
https://cgit.freedesktop.org/~vivijim/drm-intel/log/?h=rpm-domains-psr-vblank-counter-full
with mainly:

1 - vblank domain on pre-enable/post-disable vblank hooks, as Ville
    had suggested
2 - PSR domain, so we only enable DC states when the screen is really
    idle.
3 - restore the counter on vblank enable.

From what I understood so far of this problem, patch 1 alone should be
enough, but with only that one, although I don't get the screen frozen,
typing is so slow that it's visible we have something wrong... Maybe
the DC state transitions with the mutexes there are slow?

Patch 2 by itself also doesn't solve this and I still get frozen
screens, but when combined with patch 1 everything works really well...
to the point that I believe we really don't need patch 3.

But I haven't submitted the series yet, because I'd like to understand
more of what is happening here, and probably get good i-g-t coverage
for this case.

Please let me know your thoughts.

Thanks,
Rodrigo.

> --Imre
>
>> Thanks,
>> Rodrigo.
>>
>> ---------- Forwarded message ----------
>> From: Rodrigo Vivi <rodrigo.vivi@gmail.com>
>> Date: Wed, Feb 17, 2016 at 3:14 PM
>> Subject: Re: [Intel-gfx] [PATCH] drm/i915: Avoid vblank counter for gen9+
>> [...]
On Fri, 2016-02-26 at 10:02 -0800, Rodrigo Vivi wrote:
> [...]
> Well, I have this tree:
> https://cgit.freedesktop.org/~vivijim/drm-intel/log/?h=rpm-domains-psr-vblank-counter-full
> with mainly:
> 1 - vblank domain on pre-enable/post-disable vblank hooks, as Ville
>     had suggested
> 2 - PSR domain, so we only enable DC states when the screen is really
>     idle.
> 3 - restore the counter on vblank enable.
>
> From what I understood so far of this problem, patch 1 alone should be
> enough, but with only that one, although I don't get the screen frozen,
> typing is so slow that it's visible we have something wrong... Maybe
> the DC state transitions with the mutexes there are slow?

I'm not aware of any big latencies caused by toggling DC states alone.

> Patch 2 by itself also doesn't solve this and I still get frozen
> screens, but when combined with patch 1 everything works really well...
> to the point that I believe we really don't need patch 3.

I think something like 1 and 3 is a good idea (and both are needed).
About 2, it's strange that you have to disable DC states when enabling
the panel. Since the pipe is active it should prevent DC5 (and hence
DC6). We wouldn't waste any power with your changes, since you
re-enable DC states before entering PSR, but IMO we should find out why
exactly this is needed.

Some notes/ideas about the patches:
- intel_display_power_get() is called from page_flip_completed(), which
  is bad since we can be in interrupt context.
- There is a drm_crtc_vblank_get() in intel_crtc_page_flip(), but there
  is no corresponding intel_display_power_get() for it.
- The same goes for the FBC and CRC code; couldn't you just call the
  new vblank hooks from drm_vblank_get/put()?
- PIPE_FLIPCOUNT_G4X is also read-only, so it could get corrupted the
  same way as the frame counter register. page_flip_finished() depends
  on PIPE_FLIPCOUNT_G4X, so isn't this a problem?

I haven't checked this in detail, but it could be that we need to exit
PSR explicitly when waiting for a vblank or doing a flip. In PSR mode
the pipe may not be running, so I'm not sure how the vblank and flip
interrupts would be delivered.

--Imre
On Wed, Mar 02, 2016 at 07:13:07PM +0200, Imre Deak wrote:
> On Fri, 2016-02-26 at 10:02 -0800, Rodrigo Vivi wrote:
> > [...]
> > From what I understood so far of this problem, patch 1 alone should
> > be enough, but with only that one, although I don't get the screen
> > frozen, typing is so slow that it's visible we have something
> > wrong... Maybe the DC state transitions with the mutexes there are
> > slow?
>
> I'm not aware of any big latencies caused by toggling DC states alone.

According to bspec, any MMIO access while in DC6 has a considerably
higher latency. The recommendation from bspec is to disable DC states
around longer sequences of MMIO. I don't know if this is the case here,
but going from DC6 -> DC3 is a quite heavy operation (enable PG0, PG1,
CDCLK and do CSR for these). I guess we could measure it to see how
long it actually takes?

> I think something like 1 and 3 is a good idea (and both are needed).
> [...]
>
> --Imre
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 25a8937..2b18778 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -2743,7 +2743,10 @@ static int gen8_enable_vblank(struct drm_device *dev, unsigned int pipe)
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	unsigned long irqflags;
 
+	intel_display_power_get(dev_priv, POWER_DOMAIN_VBLANK);
+
 	spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
+	dev->vblank[pipe].last = g4x_get_vblank_counter(dev, pipe);
 	bdw_enable_pipe_irq(dev_priv, pipe, GEN8_PIPE_VBLANK);
 	spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);