Message ID | 20210318170419.2107512-4-tvrtko.ursulin@linux.intel.com
---|---
State | New, archived
Series | Default request/fence expiry + watchdog
On Thu, 18 Mar 2021 at 17:04, Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
>
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>
> With the watchdog cancelling requests asynchronously to preempt-to-busy we
> need to relax one assert making it apply only to requests not in error.
>
> v2:
>  * Check against the correct request!
>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>  drivers/gpu/drm/i915/gt/intel_execlists_submission.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> index 4b870eca9693..bf557290173a 100644
> --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
> @@ -815,6 +815,13 @@ assert_pending_valid(const struct intel_engine_execlists *execlists,
>  		spin_unlock_irqrestore(&rq->lock, flags);
>  		if (!ok)
>  			return false;
> +
> +	/*
> +	 * Due async nature of preempt-to-busy and request cancellation

"Due to the"

> +	 * we need to skip further asserts for cancelled requests.
> +	 */
> +	if (READ_ONCE(rq->fence.error))
> +		break;

If the above trylock fails, I guess we end up skipping this? Maybe add an
explicit goto label to handle the skip here?

>  	}
>
>  	return ce;
> --
> 2.27.0
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 4b870eca9693..bf557290173a 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -815,6 +815,13 @@ assert_pending_valid(const struct intel_engine_execlists *execlists,
 		spin_unlock_irqrestore(&rq->lock, flags);
 		if (!ok)
 			return false;
+
+		/*
+		 * Due async nature of preempt-to-busy and request cancellation
+		 * we need to skip further asserts for cancelled requests.
+		 */
+		if (READ_ONCE(rq->fence.error))
+			break;
 	}
 
 	return ce;