[v12,05/11] drm/i915/guc: Introduce intel_uc_sanitize

Message ID 1506581329-29720-6-git-send-email-sagar.a.kamble@intel.com (mailing list archive)
State New, archived

Commit Message

sagar.a.kamble@intel.com Sept. 28, 2017, 6:48 a.m. UTC
Currently the GPU is reset at the end of suspend via i915_gem_sanitize.
On resume, GuC will not be loaded until intel_uc_init_hw happens during
the GEM resume flow, but the action to exit sleep could still be sent
to GuC based on the stale FW load status. To make sure we don't invoke
that action, set the GuC FW load status to NONE at the end of GPU
reset. load_status reflects the HW state, and it is sanitized through
the new function intel_uc_sanitize.

v2: Rebase.

v3: Removed intel_guc_sanitize. Marking load status as NONE at the
GPU reset point. (Chris/Michal)

v4: Reinstated the uC function intel_uc_sanitize. (Michal Wajdeczko)

Signed-off-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/intel_uc.c     | 12 ++++++++++++
 drivers/gpu/drm/i915/intel_uc.h     |  1 +
 drivers/gpu/drm/i915/intel_uncore.c |  3 +++
 3 files changed, 16 insertions(+)

Comments

Joonas Lahtinen Sept. 29, 2017, noon UTC | #1
On Thu, 2017-09-28 at 12:18 +0530, Sagar Arun Kamble wrote:
> Currently the GPU is reset at the end of suspend via i915_gem_sanitize.
> On resume, GuC will not be loaded until intel_uc_init_hw happens during
> the GEM resume flow, but the action to exit sleep could still be sent
> to GuC based on the stale FW load status. To make sure we don't invoke
> that action, set the GuC FW load status to NONE at the end of GPU
> reset. load_status reflects the HW state, and it is sanitized through
> the new function intel_uc_sanitize.
> 
> v2: Rebase.
> 
> v3: Removed intel_guc_sanitize. Marking load status as NONE at the
> GPU reset point. (Chris/Michal)
> 
> v4: Reinstated the uC function intel_uc_sanitize. (Michal Wajdeczko)
> 
> Signed-off-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Michał Winiarski <michal.winiarski@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> @@ -508,6 +508,18 @@ int intel_uc_resume(struct drm_i915_private *dev_priv)
>  	return intel_guc_resume(dev_priv);
>  }
>  
> +void intel_uc_sanitize(struct drm_i915_private *dev_priv)
> +{
> +	/*
> +	 * FIXME: intel_uc_resume currently depends on load_status to resume
> +	 * GuC. Since we are resetting the full GPU at the end of suspend, let us
> +	 * mark the load status as NONE. Once intel_uc_resume is updated to take
> +	 * into consideration GuC load state based on WOPCM, we can skip this
> +	 * state update.
> +	 */
> +	dev_priv->guc.fw.load_status = INTEL_UC_FIRMWARE_NONE;

With what I suggested to Michal, this would be a call to
intel_guc_sanitize() (and in the future also intel_huc_sanitize() and
intel_dmc_sanitize()).
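
Roughly this shape, sketch only (intel_guc_sanitize is just the helper
I'm suggesting, and intel_huc_sanitize/intel_dmc_sanitize don't exist
yet):

	void intel_guc_sanitize(struct intel_guc *guc)
	{
		/* A full GPU reset wipes the loaded firmware, forget it. */
		guc->fw.load_status = INTEL_UC_FIRMWARE_NONE;
	}

	void intel_uc_sanitize(struct drm_i915_private *dev_priv)
	{
		intel_guc_sanitize(&dev_priv->guc);
		/* later: intel_huc_sanitize(&dev_priv->huc); etc. */
	}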

> +++ b/drivers/gpu/drm/i915/intel_uncore.c
> @@ -1763,6 +1763,9 @@ int intel_gpu_reset(struct drm_i915_private *dev_priv, unsigned engine_mask)
>  	}
>  	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
>  
> +	if (engine_mask == ALL_ENGINES)
> +		intel_uc_sanitize(dev_priv);

We could propagate engine_mask to intel_uc_sanitize and let it decide
what it does, keeping the top-level code flow clear. This also doesn't
seem to depend on whether GuC submission is enabled or not.
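
Something like this, sketch only:

	void intel_uc_sanitize(struct drm_i915_private *dev_priv,
			       unsigned engine_mask)
	{
		/* Only a full GPU reset clobbers the GuC firmware. */
		if (engine_mask != ALL_ENGINES)
			return;

		dev_priv->guc.fw.load_status = INTEL_UC_FIRMWARE_NONE;
	}

with the call site in intel_gpu_reset() then unconditional:

	intel_uc_sanitize(dev_priv, engine_mask);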

If we want to be unconditional, wouldn't intel_guc_select_fw be more
appropriate in intel_uc_sanitize?

Regards, Joonas
sagar.a.kamble@intel.com Sept. 29, 2017, 2:22 p.m. UTC | #2
On 9/29/2017 5:30 PM, Joonas Lahtinen wrote:
> On Thu, 2017-09-28 at 12:18 +0530, Sagar Arun Kamble wrote:
>> Currently the GPU is reset at the end of suspend via i915_gem_sanitize.
>> On resume, GuC will not be loaded until intel_uc_init_hw happens during
>> the GEM resume flow, but the action to exit sleep could still be sent
>> to GuC based on the stale FW load status. To make sure we don't invoke
>> that action, set the GuC FW load status to NONE at the end of GPU
>> reset. load_status reflects the HW state, and it is sanitized through
>> the new function intel_uc_sanitize.
>>
>> v2: Rebase.
>>
>> v3: Removed intel_guc_sanitize. Marking load status as NONE at the
>> GPU reset point. (Chris/Michal)
>>
>> v4: Reinstated the uC function intel_uc_sanitize. (Michal Wajdeczko)
>>
>> Signed-off-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
>> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
>> Cc: Michał Winiarski <michal.winiarski@intel.com>
>> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> <SNIP>
>
>> @@ -508,6 +508,18 @@ int intel_uc_resume(struct drm_i915_private *dev_priv)
>>   	return intel_guc_resume(dev_priv);
>>   }
>>   
>> +void intel_uc_sanitize(struct drm_i915_private *dev_priv)
>> +{
>> +	/*
>> +	 * FIXME: intel_uc_resume currently depends on load_status to resume
>> +	 * GuC. Since we are resetting the full GPU at the end of suspend, let us
>> +	 * mark the load status as NONE. Once intel_uc_resume is updated to take
>> +	 * into consideration GuC load state based on WOPCM, we can skip this
>> +	 * state update.
>> +	 */
>> +	dev_priv->guc.fw.load_status = INTEL_UC_FIRMWARE_NONE;
> With what I suggested to Michal, this would be a call to
> intel_guc_sanitize() (and in the future also intel_huc_sanitize() and
> intel_dmc_sanitize()).
Yes.
>
>> +++ b/drivers/gpu/drm/i915/intel_uncore.c
>> @@ -1763,6 +1763,9 @@ int intel_gpu_reset(struct drm_i915_private *dev_priv, unsigned engine_mask)
>>   	}
>>   	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
>>   
>> +	if (engine_mask == ALL_ENGINES)
>> +		intel_uc_sanitize(dev_priv);
> We could propagate engine_mask to intel_uc_sanitize and let it decide
> what it does, keeping the top-level code flow clear. This also doesn't
> seem to depend on whether GuC submission is enabled or not.
Sure, will make this change.
> If we want to be unconditional, wouldn't intel_guc_select_fw be more
> appropriate in intel_uc_sanitize?
Do we want to select a different fw across resets? That would mean
changing i915.guc_firmware_path at runtime, which I guess we don't
want to do, right?
> Regards, Joonas
Joonas Lahtinen Oct. 2, 2017, 8:37 a.m. UTC | #3
On Fri, 2017-09-29 at 19:52 +0530, Sagar Arun Kamble wrote:
> 
> On 9/29/2017 5:30 PM, Joonas Lahtinen wrote:
> > On Thu, 2017-09-28 at 12:18 +0530, Sagar Arun Kamble wrote:
> > > Currently the GPU is reset at the end of suspend via i915_gem_sanitize.
> > > On resume, GuC will not be loaded until intel_uc_init_hw happens during
> > > the GEM resume flow, but the action to exit sleep could still be sent
> > > to GuC based on the stale FW load status. To make sure we don't invoke
> > > that action, set the GuC FW load status to NONE at the end of GPU
> > > reset. load_status reflects the HW state, and it is sanitized through
> > > the new function intel_uc_sanitize.
> > > 
> > > v2: Rebase.
> > > 
> > > v3: Removed intel_guc_sanitize. Marking load status as NONE at the
> > > GPU reset point. (Chris/Michal)
> > > 
> > > v4: Reinstated the uC function intel_uc_sanitize. (Michal Wajdeczko)
> > > 
> > > Signed-off-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
> > > Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> > > Cc: Michał Winiarski <michal.winiarski@intel.com>
> > > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > 
> > <SNIP>
> > 
> > > @@ -508,6 +508,18 @@ int intel_uc_resume(struct drm_i915_private *dev_priv)
> > >   	return intel_guc_resume(dev_priv);
> > >   }
> > >   
> > > +void intel_uc_sanitize(struct drm_i915_private *dev_priv)
> > > +{
> > > +	/*
> > > +	 * FIXME: intel_uc_resume currently depends on load_status to resume
> > > +	 * GuC. Since we are resetting the full GPU at the end of suspend, let us
> > > +	 * mark the load status as NONE. Once intel_uc_resume is updated to take
> > > +	 * into consideration GuC load state based on WOPCM, we can skip this
> > > +	 * state update.
> > > +	 */
> > > +	dev_priv->guc.fw.load_status = INTEL_UC_FIRMWARE_NONE;
> > 
> > With what I suggested to Michal, this would be a call to
> > intel_guc_sanitize() (and in the future also intel_huc_sanitize() and
> > intel_dmc_sanitize()).
> 
> Yes.
> > 
> > > +++ b/drivers/gpu/drm/i915/intel_uncore.c
> > > @@ -1763,6 +1763,9 @@ int intel_gpu_reset(struct drm_i915_private *dev_priv, unsigned engine_mask)
> > >   	}
> > >   	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
> > >   
> > > +	if (engine_mask == ALL_ENGINES)
> > > +		intel_uc_sanitize(dev_priv);
> > 
> > We could propagate engine_mask to intel_uc_sanitize and let it decide
> > what it does, keeping the top-level code flow clear. This also doesn't
> > seem to depend on whether GuC submission is enabled or not.
> 
> Sure, will make this change.
> > If we want to be unconditional, wouldn't intel_guc_select_fw be more
> > appropriate in intel_uc_sanitize?
> 
> Do we want to select a different fw across resets? That would mean
> changing i915.guc_firmware_path at runtime, which I guess we don't
> want to do, right?

That's a good point: intel_uc_sanitize could call intel_guc_sanitize,
and intel_guc_sanitize could be called at the beginning of
intel_guc_select_fw too. Quasi-randomly setting the GuC firmware load
status was odd.
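
As a sketch (intel_guc_sanitize being the helper suggested above, with
the existing body of intel_guc_select_fw following the new call):

	void intel_guc_sanitize(struct intel_guc *guc)
	{
		guc->fw.load_status = INTEL_UC_FIRMWARE_NONE;
	}

	void intel_guc_select_fw(struct intel_guc *guc)
	{
		intel_guc_sanitize(guc);

		/* ... existing firmware path/version selection ... */
	}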

Regards, Joonas

Patch

diff --git a/drivers/gpu/drm/i915/intel_uc.c b/drivers/gpu/drm/i915/intel_uc.c
index 80251ec..ab26232 100644
--- a/drivers/gpu/drm/i915/intel_uc.c
+++ b/drivers/gpu/drm/i915/intel_uc.c
@@ -508,6 +508,18 @@  int intel_uc_resume(struct drm_i915_private *dev_priv)
 	return intel_guc_resume(dev_priv);
 }
 
+void intel_uc_sanitize(struct drm_i915_private *dev_priv)
+{
+	/*
+	 * FIXME: intel_uc_resume currently depends on load_status to resume
+	 * GuC. Since we are resetting the full GPU at the end of suspend, let us
+	 * mark the load status as NONE. Once intel_uc_resume is updated to take
+	 * into consideration GuC load state based on WOPCM, we can skip this
+	 * state update.
+	 */
+	dev_priv->guc.fw.load_status = INTEL_UC_FIRMWARE_NONE;
+}
+
 int intel_guc_send_nop(struct intel_guc *guc, const u32 *action, u32 len)
 {
 	WARN(1, "Unexpected send: action=%#x\n", *action);
diff --git a/drivers/gpu/drm/i915/intel_uc.h b/drivers/gpu/drm/i915/intel_uc.h
index 0a79e17..ce3cea5 100644
--- a/drivers/gpu/drm/i915/intel_uc.h
+++ b/drivers/gpu/drm/i915/intel_uc.h
@@ -212,6 +212,7 @@  struct intel_huc {
 int intel_uc_runtime_resume(struct drm_i915_private *dev_priv);
 int intel_uc_suspend(struct drm_i915_private *dev_priv);
 int intel_uc_resume(struct drm_i915_private *dev_priv);
+void intel_uc_sanitize(struct drm_i915_private *dev_priv);
 int intel_guc_sample_forcewake(struct intel_guc *guc);
 int intel_guc_send_nop(struct intel_guc *guc, const u32 *action, u32 len);
 int intel_guc_send_mmio(struct intel_guc *guc, const u32 *action, u32 len);
diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
index b3c3f94..acab013 100644
--- a/drivers/gpu/drm/i915/intel_uncore.c
+++ b/drivers/gpu/drm/i915/intel_uncore.c
@@ -1763,6 +1763,9 @@  int intel_gpu_reset(struct drm_i915_private *dev_priv, unsigned engine_mask)
 	}
 	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
 
+	if (engine_mask == ALL_ENGINES)
+		intel_uc_sanitize(dev_priv);
+
 	return ret;
 }