Message ID: 1436950023-13940-2-git-send-email-sourab.gupta@intel.com (mailing list archive)
State: New, archived
On Wed, Jul 15, 2015 at 02:16:56PM +0530, sourab.gupta@intel.com wrote:
> From: Sourab Gupta <sourab.gupta@intel.com>
>
> Currently the context ids are specific to a drm file instance, as
> opposed to being globally unique. There are some use cases which may
> require globally unique context ids. For example, a system-level GPU
> profiler tool may lean upon the context ids to associate the
> performance snapshots with individual contexts. If the context ids are
> unique, it may do so without relying on any additional information such
> as pid and drm fd.
>
> This patch proposes an implementation of globally unique context ids,
> by conceptually moving the idr table holding the context ids into the
> device private structure instead of the file private structure. The
> case of the default context id for a drm file (which is given by id=0)
> is handled by storing it in the file private during context creation
> and retrieving it as and when required.
>
> This patch is proposed as an enabler for the patches following in the
> series. In particular, I'm looking for feedback on the pros and cons of
> having a globally unique context id, and any specific inputs on this
> particular implementation. This implementation can be improved upon, if
> agreed upon conceptually.

Simpler: use a cyclic idr for global context ids. The importance here
is that for the common case we have a more friendly per-file small
lookup table and reporting; also, the code should be more
understandable with a separate user_handle and guid.
-Chris
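To make the suggestion concrete, here is a minimal sketch of what splitting the two ids could look like: the per-file idr keeps handing out the small user_handle exactly as it does today, while a second, device-global idr hands out a guid through idr_alloc_cyclic(), so ids of recently destroyed contexts are not reused straight away. The helper name, the ctx->guid field and the dev_priv->ctx_guid_idr member are illustrative assumptions, not existing driver code.

```c
/*
 * Sketch only, not part of the i915 driver.  Assumes a
 * dev_priv->ctx_guid_idr (initialised with idr_init() at driver load)
 * and a ctx->guid field, neither of which exist upstream.
 */
static int assign_context_ids(struct drm_i915_private *dev_priv,
			      struct drm_i915_file_private *file_priv,
			      struct intel_context *ctx)
{
	int ret;

	/* Small per-file handle returned to userspace, unchanged behaviour. */
	ret = idr_alloc(&file_priv->context_idr, ctx,
			DEFAULT_CONTEXT_HANDLE, 0, GFP_KERNEL);
	if (ret < 0)
		return ret;
	ctx->user_handle = ret;

	/*
	 * Device-global id for reporting/profiling, allocated cyclically so
	 * recently freed ids are not handed out again immediately.
	 */
	ret = idr_alloc_cyclic(&dev_priv->ctx_guid_idr, ctx, 1, 0, GFP_KERNEL);
	if (ret < 0) {
		idr_remove(&file_priv->context_idr, ctx->user_handle);
		return ret;
	}
	ctx->guid = ret;

	return 0;
}
```

A profiler would then report ctx->guid in its samples, while the execbuf and ioctl paths keep resolving ctx->user_handle against the per-file idr.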
On Wed, Jul 15, 2015 at 10:54:28AM +0100, Chris Wilson wrote:
> On Wed, Jul 15, 2015 at 02:16:56PM +0530, sourab.gupta@intel.com wrote:
> > From: Sourab Gupta <sourab.gupta@intel.com>
> >
> > Currently the context ids are specific to a drm file instance, as
> > opposed to being globally unique. There are some use cases which may
> > require globally unique context ids. For example, a system-level GPU
> > profiler tool may lean upon the context ids to associate the
> > performance snapshots with individual contexts. If the context ids are
> > unique, it may do so without relying on any additional information such
> > as pid and drm fd.
> >
> > This patch proposes an implementation of globally unique context ids,
> > by conceptually moving the idr table holding the context ids into the
> > device private structure instead of the file private structure. The
> > case of the default context id for a drm file (which is given by id=0)
> > is handled by storing it in the file private during context creation
> > and retrieving it as and when required.
> >
> > This patch is proposed as an enabler for the patches following in the
> > series. In particular, I'm looking for feedback on the pros and cons of
> > having a globally unique context id, and any specific inputs on this
> > particular implementation. This implementation can be improved upon, if
> > agreed upon conceptually.
>
> Simpler: use a cyclic idr for global context ids. The importance here
> is that for the common case we have a more friendly per-file small
> lookup table and reporting; also, the code should be more
> understandable with a separate user_handle and guid.

I should emphasize that we really do want to keep user_handle inside a
per-file namespace.
-Chris
On Wed, Jul 15, 2015 at 11:31:32AM +0100, Chris Wilson wrote:
> On Wed, Jul 15, 2015 at 10:54:28AM +0100, Chris Wilson wrote:
> > On Wed, Jul 15, 2015 at 02:16:56PM +0530, sourab.gupta@intel.com wrote:
> > > From: Sourab Gupta <sourab.gupta@intel.com>
> > >
> > > Currently the context ids are specific to a drm file instance, as
> > > opposed to being globally unique. There are some use cases which may
> > > require globally unique context ids. For example, a system-level GPU
> > > profiler tool may lean upon the context ids to associate the
> > > performance snapshots with individual contexts. If the context ids are
> > > unique, it may do so without relying on any additional information such
> > > as pid and drm fd.
> > >
> > > This patch proposes an implementation of globally unique context ids,
> > > by conceptually moving the idr table holding the context ids into the
> > > device private structure instead of the file private structure. The
> > > case of the default context id for a drm file (which is given by id=0)
> > > is handled by storing it in the file private during context creation
> > > and retrieving it as and when required.
> > >
> > > This patch is proposed as an enabler for the patches following in the
> > > series. In particular, I'm looking for feedback on the pros and cons of
> > > having a globally unique context id, and any specific inputs on this
> > > particular implementation. This implementation can be improved upon, if
> > > agreed upon conceptually.
> >
> > Simpler: use a cyclic idr for global context ids. The importance here
> > is that for the common case we have a more friendly per-file small
> > lookup table and reporting; also, the code should be more
> > understandable with a separate user_handle and guid.
>
> I should emphasize that we really do want to keep user_handle inside a
> per-file namespace.

Yeah agreed, if we need some global internal ID because we need it in
the hw somewhere, then that should be tracked in a separate hw_id field
and allocated from a separate idr. That would also allow us to
correctly implement any restrictions the hw has (iirc we don't have the
full 32 bits for ctx id, but I'm not sure).
-Daniel
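If such a hardware-facing id were added, the allocation could simply bound the idr range to whatever width the hardware actually supports. The sketch below is illustrative only: MAX_HW_CTX_ID, dev_priv->hw_ctx_idr and ctx->hw_id are assumed names, and the 20-bit bound is just a placeholder, not a statement about the real hardware limit.

```c
/* Sketch only: assumed names and limit, not existing i915 code. */
#define MAX_HW_CTX_ID	((1 << 20) - 1)	/* placeholder hw limit, unverified */

static int assign_hw_id(struct drm_i915_private *dev_priv,
			struct intel_context *ctx)
{
	int ret;

	/*
	 * Cyclic allocation within the range the hardware can express;
	 * the 'end' argument of idr_alloc_cyclic() is exclusive.
	 */
	ret = idr_alloc_cyclic(&dev_priv->hw_ctx_idr, ctx,
			       1, MAX_HW_CTX_ID + 1, GFP_KERNEL);
	if (ret < 0)
		return ret;

	ctx->hw_id = ret;
	return 0;
}
```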
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 47636f3..38e0026 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -2254,12 +2254,10 @@ static void gen6_ppgtt_info(struct seq_file *m, struct drm_device *dev)
 	}
 
 	list_for_each_entry_reverse(file, &dev->filelist, lhead) {
-		struct drm_i915_file_private *file_priv = file->driver_priv;
-
 		seq_printf(m, "proc: %s\n",
 			   get_pid_task(file->pid, PIDTYPE_PID)->comm);
-		idr_for_each(&file_priv->context_idr, per_file_ctx, m);
 	}
+	idr_for_each(&dev_priv->context_idr, per_file_ctx, m);
 
 	seq_printf(m, "ECOCHK: 0x%08x\n", I915_READ(GAM_ECOCHK));
 }
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 50977f0..baa0234 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -320,7 +320,7 @@ struct drm_i915_file_private {
  */
 #define DRM_I915_THROTTLE_JIFFIES msecs_to_jiffies(20)
 	} mm;
-	struct idr context_idr;
+	u32 first_ctx_id;
 
 	struct intel_rps_client {
 		struct list_head link;
@@ -1754,6 +1754,8 @@ struct drm_i915_private {
 	struct intel_opregion opregion;
 	struct intel_vbt_data vbt;
 
+	struct idr context_idr;
+
 	bool preserve_bios_swizzle;
 
 	/* overlay */
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index d9ccad5..6b572c1 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -212,7 +212,8 @@ i915_gem_alloc_context_obj(struct drm_device *dev, size_t size)
 
 static struct intel_context *
 __create_hw_context(struct drm_device *dev,
-		    struct drm_i915_file_private *file_priv)
+		    struct drm_i915_file_private *file_priv,
+		    bool is_first_ctx)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_context *ctx;
@@ -237,10 +238,12 @@ __create_hw_context(struct drm_device *dev,
 
 	/* Default context will never have a file_priv */
 	if (file_priv != NULL) {
-		ret = idr_alloc(&file_priv->context_idr, ctx,
+		ret = idr_alloc(&dev_priv->context_idr, ctx,
 				DEFAULT_CONTEXT_HANDLE, 0, GFP_KERNEL);
 		if (ret < 0)
 			goto err_out;
+		if (is_first_ctx)
+			file_priv->first_ctx_id = ret;
 	} else
 		ret = DEFAULT_CONTEXT_HANDLE;
 
@@ -267,7 +270,8 @@ err_out:
  */
 static struct intel_context *
 i915_gem_create_context(struct drm_device *dev,
-			struct drm_i915_file_private *file_priv)
+			struct drm_i915_file_private *file_priv,
+			bool is_first_ctx)
 {
 	const bool is_global_default_ctx = file_priv == NULL;
 	struct intel_context *ctx;
@@ -275,7 +279,7 @@ i915_gem_create_context(struct drm_device *dev,
 
 	BUG_ON(!mutex_is_locked(&dev->struct_mutex));
 
-	ctx = __create_hw_context(dev, file_priv);
+	ctx = __create_hw_context(dev, file_priv, is_first_ctx);
 	if (IS_ERR(ctx))
 		return ctx;
 
@@ -348,6 +352,14 @@ void i915_gem_context_reset(struct drm_device *dev)
 	}
 }
 
+static int context_idr_cleanup(int id, void *p, void *data)
+{
+	struct intel_context *ctx = p;
+
+	i915_gem_context_unreference(ctx);
+	return 0;
+}
+
 int i915_gem_context_init(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
@@ -371,8 +383,9 @@ int i915_gem_context_init(struct drm_device *dev)
 			dev_priv->hw_context_size = 0;
 		}
 	}
+	idr_init(&dev_priv->context_idr);
 
-	ctx = i915_gem_create_context(dev, NULL);
+	ctx = i915_gem_create_context(dev, NULL, false);
 	if (IS_ERR(ctx)) {
 		DRM_ERROR("Failed to create default global context (error %ld)\n",
 			  PTR_ERR(ctx));
@@ -398,6 +411,9 @@ void i915_gem_context_fini(struct drm_device *dev)
 	struct intel_context *dctx = dev_priv->ring[RCS].default_context;
 	int i;
 
+	idr_for_each(&dev_priv->context_idr, context_idr_cleanup, NULL);
+	idr_destroy(&dev_priv->context_idr);
+
 	if (dctx->legacy_hw_ctx.rcs_state) {
 		/* The only known way to stop the gpu from accessing the hw context is
 		 * to reset it. Do this as the very last operation to avoid confusing
@@ -465,11 +481,14 @@ int i915_gem_context_enable(struct drm_i915_private *dev_priv)
 	return 0;
 }
 
-static int context_idr_cleanup(int id, void *p, void *data)
+static int cleanup_file_contexts(int id, void *p, void *data)
 {
 	struct intel_context *ctx = p;
+	struct drm_i915_file_private *file_priv = data;
+
+	if (ctx->file_priv == file_priv)
+		i915_gem_context_unreference(ctx);
 
-	i915_gem_context_unreference(ctx);
 	return 0;
 }
 
@@ -478,14 +497,11 @@ int i915_gem_context_open(struct drm_device *dev, struct drm_file *file)
 	struct drm_i915_file_private *file_priv = file->driver_priv;
 	struct intel_context *ctx;
 
-	idr_init(&file_priv->context_idr);
-
 	mutex_lock(&dev->struct_mutex);
-	ctx = i915_gem_create_context(dev, file_priv);
+	ctx = i915_gem_create_context(dev, file_priv, true);
 	mutex_unlock(&dev->struct_mutex);
 
 	if (IS_ERR(ctx)) {
-		idr_destroy(&file_priv->context_idr);
 		return PTR_ERR(ctx);
 	}
 
@@ -495,17 +511,21 @@ int i915_gem_context_open(struct drm_device *dev, struct drm_file *file)
 void i915_gem_context_close(struct drm_device *dev, struct drm_file *file)
 {
 	struct drm_i915_file_private *file_priv = file->driver_priv;
+	struct drm_i915_private *dev_priv = file_priv->dev_priv;
 
-	idr_for_each(&file_priv->context_idr, context_idr_cleanup, NULL);
-	idr_destroy(&file_priv->context_idr);
+	idr_for_each(&dev_priv->context_idr, cleanup_file_contexts, file_priv);
 }
 
 struct intel_context *
 i915_gem_context_get(struct drm_i915_file_private *file_priv, u32 id)
 {
+	struct drm_i915_private *dev_priv = file_priv->dev_priv;
 	struct intel_context *ctx;
 
-	ctx = (struct intel_context *)idr_find(&file_priv->context_idr, id);
+	if (id == 0)
+		id = file_priv->first_ctx_id;
+
+	ctx = (struct intel_context *)idr_find(&dev_priv->context_idr, id);
 	if (!ctx)
 		return ERR_PTR(-ENOENT);
 
@@ -862,7 +882,7 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		return ret;
 
-	ctx = i915_gem_create_context(dev, file_priv);
+	ctx = i915_gem_create_context(dev, file_priv, false);
 	mutex_unlock(&dev->struct_mutex);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
@@ -878,6 +898,7 @@ int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data,
 {
 	struct drm_i915_gem_context_destroy *args = data;
 	struct drm_i915_file_private *file_priv = file->driver_priv;
+	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_context *ctx;
 	int ret;
 
@@ -894,7 +915,7 @@ int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data,
 		return PTR_ERR(ctx);
 	}
 
-	idr_remove(&ctx->file_priv->context_idr, ctx->user_handle);
+	idr_remove(&dev_priv->context_idr, ctx->user_handle);
 	i915_gem_context_unreference(ctx);
 	mutex_unlock(&dev->struct_mutex);
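For contrast with the lookup change in the patch above (where id == 0 has to be redirected through file_priv->first_ctx_id), keeping user_handle in a per-file namespace as suggested in review leaves the lookup path essentially untouched. The sketch below is illustrative only, not a proposed replacement for i915_gem_context_get().

```c
/*
 * Sketch only: with user_handle kept per-file, the default context
 * (handle 0) is still found in this file's idr because it was inserted
 * there when the file was opened, so no first_ctx_id indirection is
 * needed.  A global guid/hw_id would only surface in reporting paths
 * such as debugfs or a profiling stream.
 */
static struct intel_context *
lookup_user_context(struct drm_i915_file_private *file_priv, u32 id)
{
	struct intel_context *ctx;

	ctx = idr_find(&file_priv->context_idr, id);
	if (!ctx)
		return ERR_PTR(-ENOENT);

	return ctx;
}
```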