
RE: [PATCH 2/2] drm/amdgpu: Use mod_delayed_work in JPEG/UVD/VCE/VCN ring_end_use hooks

Message ID MN2PR12MB377506AD71308A47222A3E9583F89@MN2PR12MB3775.namprd12.prod.outlook.com (mailing list archive)
State New, archived
Series RE: [PATCH 2/2] drm/amdgpu: Use mod_delayed_work in JPEG/UVD/VCE/VCN ring_end_use hooks

Commit Message

Christian König Aug. 11, 2021, 9:34 p.m. UTC
NAK to at least this patch.

Since activating power management while submitting work is problematic, cancel_delayed_work() must have been called during begin_use, or otherwise we have a serious coding problem in the first place.

So this change shouldn't make a difference, and I suggest we really stick with schedule_delayed_work().
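
For reference, here is a minimal sketch of the begin_use/end_use pairing being described, modeled loosely on the UVD path. The hook names and the power_up_block() helper are placeholders for illustration, not the actual driver code:

#include <linux/workqueue.h>

/* Illustrative sketch only, not the real amdgpu hooks. */
static void ring_begin_use_sketch(struct amdgpu_ring *ring)
{
        /* Stop a pending (or already running) idle work before the engine is
         * used, so power management cannot race with the new submission.  If
         * nothing was pending, the block may already be powered down. */
        bool was_idle = !cancel_delayed_work_sync(&ring->adev->uvd.idle_work);

        if (was_idle)
                power_up_block(ring->adev);     /* placeholder for the real power-up path */
}

static void ring_end_use_sketch(struct amdgpu_ring *ring)
{
        /* Re-arm the idle timer once the submission is finished. */
        schedule_delayed_work(&ring->adev->uvd.idle_work, UVD_IDLE_TIMEOUT);
}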

Maybe add a comment on how this works?

Need to take a closer look at the first patch when I'm back from vacation, but it could be that this applies there as well.

Regards,
Christian.

Comments

James Zhu Aug. 11, 2021, 10:12 p.m. UTC | #1
[AMD Official Use Only]

Hi Christian,

Since we have a strict check on queue status, I don't think the original design can cause an issue here.
But this change should help improve the case below:

  1.  Both the enc thread and the dec thread try to start begin_use.
  2.  The dec thread gets the chance to finish the begin_use process first.
  3.  Before the dec thread enters end_use, the enc thread gets a time slot to run through begin_use (no delayed work is scheduled at that time).
  4.  The dec thread enters end_use and schedules a delayed work.
  5.  The enc thread enters end_use and modifies this delayed work.

It will help save at least one delayed work call.
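
The difference in step 5 comes down to how the two workqueue calls treat an already pending item; a minimal illustration (idle_work and IDLE_TIMEOUT are stand-ins for the driver's fields, not the actual code):

#include <linux/workqueue.h>

static struct delayed_work idle_work;           /* stand-in for adev->...idle_work */
#define IDLE_TIMEOUT    msecs_to_jiffies(1000)

static void end_use_old(void)
{
        /* No-op while idle_work is still pending: the expiry armed by the dec
         * thread in step 4 is kept, so the enc thread's call in step 5 does
         * not push it out. */
        schedule_delayed_work(&idle_work, IDLE_TIMEOUT);
}

static void end_use_new(void)
{
        /* Always (re)arms the timer: the enc thread's call in step 5 moves
         * the expiry to one full timeout after the last end_use. */
        mod_delayed_work(system_wq, &idle_work, IDLE_TIMEOUT);
}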


Thanks & Best Regards!


James Zhu
James Zhu Aug. 11, 2021, 10:22 p.m. UTC | #2
[AMD Official Use Only]

I shouldn't say it reduces one delayed work call. For this case, Michel's proposal is closer to the idle work design's purpose.


Thanks & Best Regards!


James Zhu
Evan Quan Aug. 12, 2021, 2:42 a.m. UTC | #3
[AMD Official Use Only]

Different from the 1st patch (for amdgpu_gfx_off_ctrl) of the series, "cancel_delayed_work_sync(&adev->uvd.idle_work)" will be called in hooks like amdgpu_uvd_ring_begin_use(). In this case, does it make any difference from the previous "schedule_delayed_work" implementation?
Suppose the sequence is as below:

  *   Ring begin_use
  *   Ring end_use --> mod_delayed_work(): queues a new delayed work, right?
  *   Ring begin_use (within 1s) --> cancel_delayed_work_sync() will cancel the work queued above, right?
  *   Ring end_use --> mod_delayed_work(): queues another new delayed work, same as the previous "schedule_delayed_work"?
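
A rough trace of that sequence, assuming uvd.idle_work is only armed and cancelled through these two hooks (illustrative only, not the full driver flow):

/* Requires the amdgpu/amdgpu_uvd headers; illustrative trace, not real driver code. */
static void trace_sequence(struct amdgpu_ring *ring)
{
        amdgpu_uvd_ring_begin_use(ring);  /* cancel_delayed_work_sync(): nothing pending yet */
        amdgpu_uvd_ring_end_use(ring);    /* mod_delayed_work(): not pending, so fresh work is
                                           * queued - same effect as schedule_delayed_work() */
        amdgpu_uvd_ring_begin_use(ring);  /* within 1s: cancels the work queued above */
        amdgpu_uvd_ring_end_use(ring);    /* not pending again, so again it behaves like
                                           * schedule_delayed_work() */
}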

BR
Evan
From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> On Behalf Of Koenig, Christian
Sent: Thursday, August 12, 2021 5:34 AM
To: Michel Dänzer <michel@daenzer.net>; Deucher, Alexander <Alexander.Deucher@amd.com>
Cc: Liu, Leo <Leo.Liu@amd.com>; Zhu, James <James.Zhu@amd.com>; amd-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
Subject: RE: [PATCH 2/2] drm/amdgpu: Use mod_delayed_work in JPEG/UVD/VCE/VCN ring_end_use hooks

NAK to at least this patch.

Since activating power management while submitting work is problematic, cancel_delayed_work() must have been called during begin_use, or otherwise we have a serious coding problem in the first place.

So this change shouldn't make a difference, and I suggest we really stick with schedule_delayed_work().

Maybe add a comment on how this works?

Need to take a closer look at the first patch when I'm back from vacation, but it could be that this applies there as well.

Regards,
Christian.

Patch

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
index 8996cb4ed57a..2c0040153f6c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
@@ -110,7 +110,7 @@  void amdgpu_jpeg_ring_begin_use(struct amdgpu_ring *ring)
 void amdgpu_jpeg_ring_end_use(struct amdgpu_ring *ring)
 {
         atomic_dec(&ring->adev->jpeg.total_submission_cnt);
-       schedule_delayed_work(&ring->adev->jpeg.idle_work, JPEG_IDLE_TIMEOUT);
+       mod_delayed_work(system_wq, &ring->adev->jpeg.idle_work, JPEG_IDLE_TIMEOUT);
 }

 int amdgpu_jpeg_dec_ring_test_ring(struct amdgpu_ring *ring)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
index 0f576f294d8a..b6b1d7eeb8e5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -1283,7 +1283,7 @@  void amdgpu_uvd_ring_begin_use(struct amdgpu_ring *ring)
 void amdgpu_uvd_ring_end_use(struct amdgpu_ring *ring)
 {
         if (!amdgpu_sriov_vf(ring->adev))
-               schedule_delayed_work(&ring->adev->uvd.idle_work, UVD_IDLE_TIMEOUT);
+               mod_delayed_work(system_wq, &ring->adev->uvd.idle_work, UVD_IDLE_TIMEOUT);
 }

 /**
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
index 1ae7f824adc7..2253c18a6688 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
@@ -401,7 +401,7 @@  void amdgpu_vce_ring_begin_use(struct amdgpu_ring *ring)
 void amdgpu_vce_ring_end_use(struct amdgpu_ring *ring)
 {
         if (!amdgpu_sriov_vf(ring->adev))
-               schedule_delayed_work(&ring->adev->vce.idle_work, VCE_IDLE_TIMEOUT);
+               mod_delayed_work(system_wq, &ring->adev->vce.idle_work, VCE_IDLE_TIMEOUT);
 }

 /**
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
index 284bb42d6c86..d5937ab5ac80 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
@@ -1874,7 +1874,7 @@  void vcn_v1_0_set_pg_for_begin_use(struct amdgpu_ring *ring, bool set_clocks)

 void vcn_v1_0_ring_end_use(struct amdgpu_ring *ring)
 {
-       schedule_delayed_work(&ring->adev->vcn.idle_work, VCN_IDLE_TIMEOUT);
+       mod_delayed_work(system_wq, &ring->adev->vcn.idle_work, VCN_IDLE_TIMEOUT);
         mutex_unlock(&ring->adev->vcn.vcn1_jpeg1_workaround);
 }