Message ID | 20211021214040.33292-1-matthew.brost@intel.com |
---|---|
State | New, archived |
Series | drm/i915/selftests: Update live.evict to wait on requests / idle GPU after each loop |
On 10/21/21 23:40, Matthew Brost wrote:
> Update live.evict to wait on the last request and idle the GPU after
> each loop. This not only enhances the test to fill the GGTT on each
> engine class but also avoids timeouts from igt_flush_test when using
> GuC submission. igt_flush_test (idle GPU) can take a long time with
> GuC submission if lots of contexts are created, due to the H2G / G2H
> messaging required to destroy contexts.
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>

LGTM,

Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Update live.evict to wait on the last request and idle the GPU after
each loop. This not only enhances the test to fill the GGTT on each
engine class but also avoids timeouts from igt_flush_test when using
GuC submission. igt_flush_test (idle GPU) can take a long time with
GuC submission if lots of contexts are created, due to the H2G / G2H
messaging required to destroy contexts.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 .../gpu/drm/i915/selftests/i915_gem_evict.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_evict.c b/drivers/gpu/drm/i915/selftests/i915_gem_evict.c
index f99bb0113726..7e0658a77659 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_evict.c
@@ -442,6 +442,7 @@ static int igt_evict_contexts(void *arg)
 	/* Overfill the GGTT with context objects and so try to evict one. */
 	for_each_engine(engine, gt, id) {
 		struct i915_sw_fence fence;
+		struct i915_request *last = NULL;
 
 		count = 0;
 		onstack_fence_init(&fence);
@@ -479,6 +480,9 @@ static int igt_evict_contexts(void *arg)
 
 			i915_request_add(rq);
 			count++;
+			if (last)
+				i915_request_put(last);
+			last = i915_request_get(rq);
 			err = 0;
 		} while(1);
 		onstack_fence_fini(&fence);
@@ -486,6 +490,21 @@ static int igt_evict_contexts(void *arg)
 			 count, engine->name);
 		if (err)
 			break;
+		if (last) {
+			if (i915_request_wait(last, 0, HZ) < 0) {
+				err = -EIO;
+				i915_request_put(last);
+				pr_err("Failed waiting for last request (on %s)",
+				       engine->name);
+				break;
+			}
+			i915_request_put(last);
+		}
+		err = intel_gt_wait_for_idle(engine->gt, HZ * 3);
+		if (err) {
+			pr_err("Failed to idle GT (on %s)", engine->name);
+			break;
+		}
 	}
 
 	mutex_lock(&ggtt->vm.mutex);
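For reference, the shape of the fix is the usual "track the last
request" pattern: hold a reference to the most recently submitted
request on each engine, then wait on it and idle the GT before moving
on to the next engine, so context teardown (H2G / G2H messaging with
GuC) is flushed incrementally rather than piling up for igt_flush_test
at the end. A minimal sketch of that pattern follows; make_request()
and the done flag are hypothetical stand-ins for the selftest's
context/request setup and loop-exit condition, and error handling is
trimmed:

	struct i915_request *last = NULL, *rq;
	int err = 0;

	do {
		rq = make_request(engine);	/* hypothetical helper */
		if (IS_ERR(rq))
			break;

		i915_request_add(rq);		/* submit the request */

		if (last)
			i915_request_put(last);	/* drop ref to previous */
		last = i915_request_get(rq);	/* keep a ref to wait on */
	} while (!done);

	if (last) {
		/* Use the most recent request as the wait point. */
		if (i915_request_wait(last, 0, HZ) < 0)
			err = -EIO;
		i915_request_put(last);
	}

	/* Flush outstanding context destruction before the next engine. */
	if (!err)
		err = intel_gt_wait_for_idle(engine->gt, HZ * 3);

Holding only the last reference keeps the loop from pinning every
request it submits, while still giving a single wait point per engine.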