
[v2] drm/i915: Sysfs interface to get GFX shmem usage stats per process

Message ID 1409738995-32334-1-git-send-email-sourab.gupta@intel.com (mailing list archive)
State New, archived

Commit Message

sourab.gupta@intel.com Sept. 3, 2014, 10:09 a.m. UTC
From: Sourab Gupta <sourab.gupta@intel.com>

Currently the Graphics driver provides an interface through which one
can get a snapshot of the overall Graphics memory consumption. There is
also an interface that reports the memory-related attributes of every
single Graphics buffer created by the various clients.

A new interface is required to achieve the following:
1) Provide detailed, per-client information about the distribution of
Graphics memory.
2) Provide information about the sharing of Graphics buffers between
clients.

The client-based interface would also aid in debugging the memory
usage/consumption of each client and in tracking down memleak related
issues.

With this new interface,
1) In memleak scenarios, we can easily zero in on the culprit client
that is unexpectedly holding on to Graphics buffers for an inordinate
amount of time.
2) We can get an estimate of the instantaneous memory footprint of
every Graphics client.
3) We can trace all the processes sharing a particular Graphics buffer.

This patch provides a sysfs interface to achieve the above
functionality.

Two files are created in sysfs:
'i915_gem_meminfo' provides a summary of the Graphics resources used by
each Graphics client.
'i915_gem_objinfo' provides a detailed view of every object created by
the individual clients.
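
As an illustration (not part of the patch itself), a minimal userspace
reader for these files could look like the sketch below; the
/sys/class/drm/card0 path is an assumption based on the attributes being
registered on the DRM card device, and the 0600 mode means the files are
root-readable only:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Assumed location of the new bin attribute; needs root. */
	FILE *f = fopen("/sys/class/drm/card0/i915_gem_meminfo", "r");
	char buf[4096];
	size_t n;

	if (!f) {
		perror("fopen");
		return EXIT_FAILURE;
	}
	while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
		fwrite(buf, 1, n, stdout);
	fclose(f);
	return 0;
}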

v2: Changes made for
    - adding support to report user virtual addresses of mapped buffers
    - replacing pid based reporting with tgid based one
    - checkpatch and other misc cleanup

Signed-off-by: Sourab Gupta <sourab.gupta@intel.com>
Signed-off-by: Akash Goel <akash.goel@intel.com>
---
 drivers/gpu/drm/i915/i915_dma.c       |   1 +
 drivers/gpu/drm/i915/i915_drv.c       |   2 +
 drivers/gpu/drm/i915/i915_drv.h       |  26 ++
 drivers/gpu/drm/i915/i915_gem.c       | 169 ++++++++++-
 drivers/gpu/drm/i915/i915_gem_debug.c | 542 ++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gpu_error.c |   2 +-
 drivers/gpu/drm/i915/i915_sysfs.c     |  83 ++++++
 7 files changed, 822 insertions(+), 3 deletions(-)

Comments

Daniel Vetter Sept. 3, 2014, 10:58 a.m. UTC | #1
On Wed, Sep 03, 2014 at 03:39:55PM +0530, sourab.gupta@intel.com wrote:
> From: Sourab Gupta <sourab.gupta@intel.com>
> 
> Currently the Graphics Driver provides an interface through which
> one can get a snapshot of the overall Graphics memory consumption.
> Also there is an interface available, which provides information
> about the several memory related attributes of every single Graphics
> buffer created by the various clients.
> 
> There is a requirement of a new interface for achieving below
> functionalities:
> 1) Need to provide Client based detailed information about the
> distribution of Graphics memory
> 2) Need to provide an interface which can provide info about the
> sharing of Graphics buffers between the clients.
> 
> The client based interface would also aid in debugging of
> memory usage/consumption by each client & debug memleak related issues.
> 
> With this new interface,
> 1) In case of memleak scenarios, we can easily zero in on the culprit
> client which is unexpectedly holding on the Graphics buffers for an
> inordinate amount of time.
> 2) We can get an estimate of the instantaneous memory footprint of
> every Graphics client.
> 3) We can now trace all the processes sharing a particular Graphics buffer.
> 
> By means of this patch we try to provide a sysfs interface to achieve
> the mentioned functionalities.
> 
> There are two files created in sysfs:
> 'i915_gem_meminfo' will provide summary of the graphics resources used by
> each graphics client.
> 'i915_gem_objinfo' will provide detailed view of each object created by
> individual clients.
> 
> v2: Changes made for
>     - adding support to report user virtual addresses of mapped buffers
>     - replacing pid based reporting with tgid based one
>     - checkpatch and other misc cleanup
> 
> Signed-off-by: Sourab Gupta <sourab.gupta@intel.com>
> Signed-off-by: Akash Goel <akash.goel@intel.com>

Sorry I didn't spot this the first time around, but I think sysfs is the
wrong place for this.

Generally sysfs is for setting/reading per-object values, and it has the
big rule that there should be only _one_ value per file. The error state
is a bit of an exception, but otoh it's also just the full dump as a binary
file (which for historical reasons is printed as ascii).

The other issue is that imo this should be a generic interface, so that we
can write a gpu_top tool for dumping memory consumers which works on all
linux platforms.

To avoid delaying for a long time can we just move ahead by putting this
into debugfs?

Also in debugfs there's already a lot of this stuff around - why is that
not sufficient and could we extend it somehow with the missing bits?

Thanks, Daniel
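
For reference, a debugfs variant along these lines could reuse the
existing drm_info_list machinery already used by i915_debugfs.c. The
sketch below is illustrative only; the entry name, the show callback and
the simplified locking are assumptions, not something proposed in this
thread:

#include <linux/seq_file.h>
#include <linux/pid.h>
#include <drm/drmP.h>
#include "i915_drv.h"

/* Hypothetical per-client summary as a debugfs seq_file entry. */
static int i915_gem_clients_info(struct seq_file *m, void *data)
{
	struct drm_info_node *node = m->private;
	struct drm_device *dev = node->minor->dev;
	struct drm_file *file;

	mutex_lock(&dev->struct_mutex);
	/* One line per open DRM file, using the fields added by this patch. */
	list_for_each_entry(file, &dev->filelist, lhead) {
		struct drm_i915_file_private *file_priv = file->driver_priv;

		seq_printf(m, "%5d %s\n", pid_nr(file_priv->tgid),
			   file_priv->process_name);
	}
	mutex_unlock(&dev->struct_mutex);
	return 0;
}

static const struct drm_info_list i915_gem_clients_debugfs_list[] = {
	{"i915_gem_clients_info", i915_gem_clients_info, 0},
};

/* Registration would happen from i915_debugfs_init(), e.g.:
 *	drm_debugfs_create_files(i915_gem_clients_debugfs_list,
 *				 ARRAY_SIZE(i915_gem_clients_debugfs_list),
 *				 minor->debugfs_root, minor);
 */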

> ---
>  drivers/gpu/drm/i915/i915_dma.c       |   1 +
>  drivers/gpu/drm/i915/i915_drv.c       |   2 +
>  drivers/gpu/drm/i915/i915_drv.h       |  26 ++
>  drivers/gpu/drm/i915/i915_gem.c       | 169 ++++++++++-
>  drivers/gpu/drm/i915/i915_gem_debug.c | 542 ++++++++++++++++++++++++++++++++++
>  drivers/gpu/drm/i915/i915_gpu_error.c |   2 +-
>  drivers/gpu/drm/i915/i915_sysfs.c     |  83 ++++++
>  7 files changed, 822 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
> index a58fed9..7ea3250 100644
> --- a/drivers/gpu/drm/i915/i915_dma.c
> +++ b/drivers/gpu/drm/i915/i915_dma.c
> @@ -1985,6 +1985,7 @@ void i915_driver_postclose(struct drm_device *dev, struct drm_file *file)
>  {
>  	struct drm_i915_file_private *file_priv = file->driver_priv;
>  
> +	kfree(file_priv->process_name);
>  	if (file_priv && file_priv->bsd_ring)
>  		file_priv->bsd_ring = NULL;
>  	kfree(file_priv);
> diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> index 1d6d9ac..9bee20e 100644
> --- a/drivers/gpu/drm/i915/i915_drv.c
> +++ b/drivers/gpu/drm/i915/i915_drv.c
> @@ -1628,6 +1628,8 @@ static struct drm_driver driver = {
>  	.debugfs_init = i915_debugfs_init,
>  	.debugfs_cleanup = i915_debugfs_cleanup,
>  #endif
> +	.gem_open_object = i915_gem_open_object,
> +	.gem_close_object = i915_gem_close_object,
>  	.gem_free_object = i915_gem_free_object,
>  	.gem_vm_ops = &i915_gem_vm_ops,
>  
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 36f3da6..43ba7c4 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -1765,6 +1765,11 @@ struct drm_i915_gem_object_ops {
>  #define INTEL_FRONTBUFFER_ALL_MASK(pipe) \
>  	(0xf << (INTEL_FRONTBUFFER_BITS_PER_PIPE * (pipe)))
>  
> +struct drm_i915_obj_virt_addr {
> +	struct list_head head;
> +	unsigned long user_virt_addr;
> +};
> +
>  struct drm_i915_gem_object {
>  	struct drm_gem_object base;
>  
> @@ -1890,6 +1895,13 @@ struct drm_i915_gem_object {
>  			struct work_struct *work;
>  		} userptr;
>  	};
> +
> +#define MAX_OPEN_HANDLE 20
> +	struct {
> +		struct list_head virt_addr_head;
> +		pid_t pid;
> +		int open_handle_count;
> +	} pid_array[MAX_OPEN_HANDLE];
>  };
>  #define to_intel_bo(x) container_of(x, struct drm_i915_gem_object, base)
>  
> @@ -1940,6 +1952,8 @@ struct drm_i915_gem_request {
>  struct drm_i915_file_private {
>  	struct drm_i915_private *dev_priv;
>  	struct drm_file *file;
> +	char *process_name;
> +	struct pid *tgid;
>  
>  	struct {
>  		spinlock_t lock;
> @@ -2370,6 +2384,10 @@ void i915_init_vm(struct drm_i915_private *dev_priv,
>  		  struct i915_address_space *vm);
>  void i915_gem_free_object(struct drm_gem_object *obj);
>  void i915_gem_vma_destroy(struct i915_vma *vma);
> +int i915_gem_open_object(struct drm_gem_object *gem_obj,
> +			struct drm_file *file_priv);
> +int i915_gem_close_object(struct drm_gem_object *gem_obj,
> +			struct drm_file *file_priv);
>  
>  #define PIN_MAPPABLE 0x1
>  #define PIN_NONBLOCK 0x2
> @@ -2420,6 +2438,8 @@ int i915_gem_dumb_create(struct drm_file *file_priv,
>  			 struct drm_mode_create_dumb *args);
>  int i915_gem_mmap_gtt(struct drm_file *file_priv, struct drm_device *dev,
>  		      uint32_t handle, uint64_t *offset);
> +int i915_gem_obj_shmem_pages_alloced(struct drm_i915_gem_object *obj);
> +
>  /**
>   * Returns true if seq1 is later than seq2.
>   */
> @@ -2686,6 +2706,10 @@ int i915_verify_lists(struct drm_device *dev);
>  #else
>  #define i915_verify_lists(dev) 0
>  #endif
> +int i915_get_drm_clients_info(struct drm_i915_error_state_buf *m,
> +				struct drm_device *dev);
> +int i915_gem_get_all_obj_info(struct drm_i915_error_state_buf *m,
> +				struct drm_device *dev);
>  
>  /* i915_debugfs.c */
>  int i915_debugfs_init(struct drm_minor *minor);
> @@ -2699,6 +2723,8 @@ static inline void intel_display_crc_init(struct drm_device *dev) {}
>  /* i915_gpu_error.c */
>  __printf(2, 3)
>  void i915_error_printf(struct drm_i915_error_state_buf *e, const char *f, ...);
> +void i915_error_puts(struct drm_i915_error_state_buf *e,
> +			    const char *str);
>  int i915_error_state_to_str(struct drm_i915_error_state_buf *estr,
>  			    const struct i915_error_state_file_priv *error);
>  int i915_error_state_buf_init(struct drm_i915_error_state_buf *eb,
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 6c68570..3c36486 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -1461,6 +1461,45 @@ unlock:
>  	return ret;
>  }
>  
> +static void
> +i915_gem_obj_insert_virt_addr(struct drm_i915_gem_object *obj,
> +				unsigned long addr,
> +				bool is_map_gtt)
> +{
> +	pid_t current_pid = task_tgid_nr(current);
> +	int i, found = 0;
> +
> +	if (is_map_gtt)
> +		addr |= 1;
> +
> +	for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> +		if (obj->pid_array[i].pid == current_pid) {
> +			struct drm_i915_obj_virt_addr *entry, *new_entry;
> +
> +			list_for_each_entry(entry,
> +					    &obj->pid_array[i].virt_addr_head,
> +					    head) {
> +				if (entry->user_virt_addr == addr) {
> +					found = 1;
> +					break;
> +				}
> +			}
> +			if (found)
> +				break;
> +			new_entry = kzalloc
> +				(sizeof(struct drm_i915_obj_virt_addr),
> +				GFP_KERNEL);
> +			new_entry->user_virt_addr = addr;
> +			list_add_tail(&new_entry->head,
> +				&obj->pid_array[i].virt_addr_head);
> +			break;
> +		}
> +	}
> +	if (i == MAX_OPEN_HANDLE)
> +		DRM_DEBUG("Couldn't find matching pid %d for obj 0x%x\n",
> +			current_pid, (u32) obj);
> +}
> +
>  /**
>   * Maps the contents of an object, returning the address it is mapped
>   * into.
> @@ -1495,6 +1534,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
>  	if (IS_ERR((void *)addr))
>  		return addr;
>  
> +	i915_gem_obj_insert_virt_addr(to_intel_bo(obj), addr, false);
>  	args->addr_ptr = (uint64_t) addr;
>  
>  	return 0;
> @@ -1585,6 +1625,8 @@ int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
>  		}
>  
>  		obj->fault_mappable = true;
> +		i915_gem_obj_insert_virt_addr(obj,
> +			(unsigned long)vma->vm_start, true);
>  	} else
>  		ret = vm_insert_pfn(vma,
>  				    (unsigned long)vmf->virtual_address,
> @@ -1830,6 +1872,24 @@ i915_gem_object_is_purgeable(struct drm_i915_gem_object *obj)
>  	return obj->madv == I915_MADV_DONTNEED;
>  }
>  
> +int i915_gem_obj_shmem_pages_alloced(struct drm_i915_gem_object *obj)
> +{
> +	int ret;
> +
> +	if (obj->base.filp) {
> +		struct inode *inode = file_inode(obj->base.filp);
> +		struct shmem_inode_info *info = SHMEM_I(inode);
> +
> +		if (!inode)
> +			return 0;
> +		spin_lock(&info->lock);
> +		ret = inode->i_mapping->nrpages;
> +		spin_unlock(&info->lock);
> +		return ret;
> +	}
> +	return 0;
> +}
> +
>  /* Immediately discard the backing storage */
>  static void
>  i915_gem_object_truncate(struct drm_i915_gem_object *obj)
> @@ -4447,6 +4507,79 @@ static bool discard_backing_storage(struct drm_i915_gem_object *obj)
>  	return atomic_long_read(&obj->base.filp->f_count) == 1;
>  }
>  
> +int
> +i915_gem_open_object(struct drm_gem_object *gem_obj,
> +			struct drm_file *file_priv)
> +{
> +	struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
> +	pid_t current_pid = task_tgid_nr(current);
> +	int i, ret, free = -1;
> +
> +	ret = i915_mutex_lock_interruptible(gem_obj->dev);
> +	if (ret)
> +		return ret;
> +
> +	for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> +		if (obj->pid_array[i].pid == current_pid) {
> +			obj->pid_array[i].open_handle_count++;
> +			break;
> +		} else if (obj->pid_array[i].pid == 0)
> +			free = i;
> +	}
> +
> +	if (i == MAX_OPEN_HANDLE) {
> +		if (free != -1) {
> +			WARN_ON(obj->pid_array[free].open_handle_count);
> +			obj->pid_array[free].open_handle_count = 1;
> +			obj->pid_array[free].pid = current_pid;
> +			INIT_LIST_HEAD(&obj->pid_array[free].virt_addr_head);
> +		} else
> +			DRM_DEBUG("Max open handle count limit: obj 0x%x\n",
> +					(u32) obj);
> +	}
> +
> +	mutex_unlock(&gem_obj->dev->struct_mutex);
> +	return 0;
> +}
> +
> +int
> +i915_gem_close_object(struct drm_gem_object *gem_obj,
> +			struct drm_file *file_priv)
> +{
> +	struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
> +	pid_t current_pid = task_tgid_nr(current);
> +	int i, ret;
> +
> +	ret = i915_mutex_lock_interruptible(gem_obj->dev);
> +	if (ret)
> +		return ret;
> +
> +	for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> +		if (obj->pid_array[i].pid == current_pid) {
> +			obj->pid_array[i].open_handle_count--;
> +			if (obj->pid_array[i].open_handle_count == 0) {
> +				struct drm_i915_obj_virt_addr *entry, *next;
> +
> +				list_for_each_entry_safe(entry, next,
> +					&obj->pid_array[i].virt_addr_head,
> +					head) {
> +					list_del(&entry->head);
> +					kfree(entry);
> +				}
> +				obj->pid_array[i].pid = 0;
> +			}
> +			break;
> +		}
> +	}
> +	if (i == MAX_OPEN_HANDLE)
> +		DRM_DEBUG("Couldn't find matching pid %d for obj 0x%x\n",
> +				current_pid, (u32) obj);
> +
> +	mutex_unlock(&gem_obj->dev->struct_mutex);
> +	return 0;
> +}
> +
> +
>  void i915_gem_free_object(struct drm_gem_object *gem_obj)
>  {
>  	struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
> @@ -5072,13 +5205,37 @@ i915_gem_file_idle_work_handler(struct work_struct *work)
>  	atomic_set(&file_priv->rps_wait_boost, false);
>  }
>  
> +static int i915_gem_get_pid_cmdline(struct task_struct *task, char *buffer)
> +{
> +	int res = 0;
> +	unsigned int len;
> +	struct mm_struct *mm = get_task_mm(task);
> +
> +	if (!mm)
> +		goto out;
> +	if (!mm->arg_end)
> +		goto out_mm;
> +
> +	len = mm->arg_end - mm->arg_start;
> +
> +	if (len > PAGE_SIZE)
> +		len = PAGE_SIZE;
> +
> +	res = access_process_vm(task, mm->arg_start, buffer, len, 0);
> +
> +	if (res > 0 && buffer[res-1] != '\0' && len < PAGE_SIZE)
> +		buffer[res-1] = '\0';
> +out_mm:
> +	mmput(mm);
> +out:
> +	return res;
> +}
> +
>  int i915_gem_open(struct drm_device *dev, struct drm_file *file)
>  {
>  	struct drm_i915_file_private *file_priv;
>  	int ret;
>  
> -	DRM_DEBUG_DRIVER("\n");
> -
>  	file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
>  	if (!file_priv)
>  		return -ENOMEM;
> @@ -5086,6 +5243,14 @@ int i915_gem_open(struct drm_device *dev, struct drm_file *file)
>  	file->driver_priv = file_priv;
>  	file_priv->dev_priv = dev->dev_private;
>  	file_priv->file = file;
> +	file_priv->tgid = find_vpid(task_tgid_nr(current));
> +	file_priv->process_name =  kzalloc(PAGE_SIZE, GFP_ATOMIC);
> +	if (!file_priv->process_name) {
> +		kfree(file_priv);
> +		return -ENOMEM;
> +	}
> +
> +	ret = i915_gem_get_pid_cmdline(current, file_priv->process_name);
>  
>  	spin_lock_init(&file_priv->mm.lock);
>  	INIT_LIST_HEAD(&file_priv->mm.request_list);
> diff --git a/drivers/gpu/drm/i915/i915_gem_debug.c b/drivers/gpu/drm/i915/i915_gem_debug.c
> index f462d1b..7a42891 100644
> --- a/drivers/gpu/drm/i915/i915_gem_debug.c
> +++ b/drivers/gpu/drm/i915/i915_gem_debug.c
> @@ -25,6 +25,7 @@
>   *
>   */
>  
> +#include <linux/pid.h>
>  #include <drm/drmP.h>
>  #include <drm/i915_drm.h>
>  #include "i915_drv.h"
> @@ -116,3 +117,544 @@ i915_verify_lists(struct drm_device *dev)
>  	return warned = err;
>  }
>  #endif /* WATCH_LIST */
> +
> +struct per_file_obj_mem_info {
> +	int num_obj;
> +	int num_obj_shared;
> +	int num_obj_private;
> +	int num_obj_gtt_bound;
> +	int num_obj_purged;
> +	int num_obj_purgeable;
> +	int num_obj_allocated;
> +	int num_obj_fault_mappable;
> +	int num_obj_stolen;
> +	size_t gtt_space_allocated_shared;
> +	size_t gtt_space_allocated_priv;
> +	size_t phys_space_allocated_shared;
> +	size_t phys_space_allocated_priv;
> +	size_t phys_space_purgeable;
> +	size_t phys_space_shared_proportion;
> +	size_t fault_mappable_size;
> +	size_t stolen_space_allocated;
> +	char *process_name;
> +};
> +
> +struct name_entry {
> +	struct list_head head;
> +	struct drm_hash_item hash_item;
> +};
> +
> +struct pid_stat_entry {
> +	struct list_head head;
> +	struct list_head namefree;
> +	struct drm_open_hash namelist;
> +	struct per_file_obj_mem_info stats;
> +	struct pid *pid;
> +	int pid_num;
> +};
> +
> +
> +#define err_printf(e, ...) i915_error_printf(e, __VA_ARGS__)
> +#define err_puts(e, s) i915_error_puts(e, s)
> +
> +static const char *get_pin_flag(struct drm_i915_gem_object *obj)
> +{
> +	if (obj->user_pin_count > 0)
> +		return "P";
> +	else if (i915_gem_obj_is_pinned(obj))
> +		return "p";
> +	return " ";
> +}
> +
> +static const char *get_tiling_flag(struct drm_i915_gem_object *obj)
> +{
> +	switch (obj->tiling_mode) {
> +	default:
> +	case I915_TILING_NONE: return " ";
> +	case I915_TILING_X: return "X";
> +	case I915_TILING_Y: return "Y";
> +	}
> +}
> +
> +static int i915_obj_virt_addr_is_valid(struct drm_gem_object *obj,
> +				struct pid *pid, unsigned long addr)
> +{
> +	struct task_struct *task;
> +	struct mm_struct *mm;
> +	struct vm_area_struct *vma;
> +	int locked, ret = 0;
> +
> +	task = get_pid_task(pid, PIDTYPE_PID);
> +	if (task == NULL) {
> +		DRM_DEBUG("null task for pid=%d\n", pid_nr(pid));
> +		return -EINVAL;
> +	}
> +
> +	mm = get_task_mm(task);
> +	if (mm == NULL) {
> +		DRM_DEBUG("null mm for pid=%d\n", pid_nr(pid));
> +		return -EINVAL;
> +	}
> +
> +	locked = down_read_trylock(&mm->mmap_sem);
> +
> +	vma = find_vma(mm, addr);
> +	if (vma) {
> +		if (addr & 1) { /* mmap_gtt case */
> +			if (vma->vm_pgoff*PAGE_SIZE == (unsigned long)
> +				drm_vma_node_offset_addr(&obj->vma_node))
> +				ret = 0;
> +			else
> +				ret = -EINVAL;
> +		} else { /* mmap case */
> +			if (vma->vm_file == obj->filp)
> +				ret = 0;
> +			else
> +				ret = -EINVAL;
> +		}
> +	} else
> +		ret = -EINVAL;
> +
> +	if (locked)
> +		up_read(&mm->mmap_sem);
> +
> +	mmput(mm);
> +	return ret;
> +}
> +
> +static void i915_obj_pidarray_validate(struct drm_gem_object *gem_obj)
> +{
> +	struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
> +	struct drm_device *dev = gem_obj->dev;
> +	struct drm_i915_obj_virt_addr *entry, *next;
> +	struct drm_file *file;
> +	struct drm_i915_file_private *file_priv;
> +	struct pid *tgid;
> +	int pid_num, i, present;
> +
> +	/* Run a sanity check on pid_array. All entries in pid_array should
> +	 * be subset of the the drm filelist pid entries.
> +	 */
> +	for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> +		if (obj->pid_array[i].pid == 0)
> +			continue;
> +
> +		present = 0;
> +		list_for_each_entry(file, &dev->filelist, lhead) {
> +			file_priv = file->driver_priv;
> +			tgid = file_priv->tgid;
> +			pid_num = pid_nr(tgid);
> +
> +			if (pid_num == obj->pid_array[i].pid) {
> +				present = 1;
> +				break;
> +			}
> +		}
> +		if (present == 0) {
> +			DRM_DEBUG("stale_pid=%d\n", obj->pid_array[i].pid);
> +			list_for_each_entry_safe(entry, next,
> +					&obj->pid_array[i].virt_addr_head,
> +					head) {
> +				list_del(&entry->head);
> +				kfree(entry);
> +			}
> +
> +			obj->pid_array[i].open_handle_count = 0;
> +			obj->pid_array[i].pid = 0;
> +		} else {
> +			/* Validate the virtual address list */
> +			struct task_struct *task =
> +				get_pid_task(tgid, PIDTYPE_PID);
> +			if (task == NULL)
> +				continue;
> +
> +			list_for_each_entry_safe(entry, next,
> +					&obj->pid_array[i].virt_addr_head,
> +					head) {
> +				if (i915_obj_virt_addr_is_valid(gem_obj, tgid,
> +				entry->user_virt_addr)) {
> +					DRM_DEBUG("stale_addr=%ld\n",
> +					entry->user_virt_addr);
> +					list_del(&entry->head);
> +					kfree(entry);
> +				}
> +			}
> +		}
> +	}
> +}
> +
> +static int
> +i915_describe_obj(struct drm_i915_error_state_buf *m,
> +		struct drm_i915_gem_object *obj)
> +{
> +	int i;
> +	struct i915_vma *vma;
> +	struct drm_i915_obj_virt_addr *entry;
> +
> +	err_printf(m,
> +		"%p: %7zdK  %s    %s     %s      %s     %s      %s       %s     ",
> +		   &obj->base,
> +		   obj->base.size / 1024,
> +		   get_pin_flag(obj),
> +		   get_tiling_flag(obj),
> +		   obj->dirty ? "Y" : "N",
> +		   obj->base.name ? "Y" : "N",
> +		   (obj->userptr.mm != 0) ? "Y" : "N",
> +		   obj->stolen ? "Y" : "N",
> +		   (obj->pin_mappable || obj->fault_mappable) ? "Y" : "N");
> +
> +	if (obj->madv == __I915_MADV_PURGED)
> +		err_printf(m, " purged    ");
> +	else if (obj->madv == I915_MADV_DONTNEED)
> +		err_printf(m, " purgeable   ");
> +	else if (i915_gem_obj_shmem_pages_alloced(obj) != 0)
> +		err_printf(m, " allocated   ");
> +
> +
> +	list_for_each_entry(vma, &obj->vma_list, vma_link) {
> +		if (!i915_is_ggtt(vma->vm))
> +			err_puts(m, " PP    ");
> +		else
> +			err_puts(m, " G     ");
> +		err_printf(m, "  %08lx ", vma->node.start);
> +	}
> +
> +	for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> +		if (obj->pid_array[i].pid != 0) {
> +			err_printf(m, " (%d: %d:",
> +			obj->pid_array[i].pid,
> +			obj->pid_array[i].open_handle_count);
> +			list_for_each_entry(entry,
> +				&obj->pid_array[i].virt_addr_head, head) {
> +				if (entry->user_virt_addr & 1)
> +					err_printf(m, " %p",
> +					(void *)(entry->user_virt_addr & ~1));
> +				else
> +					err_printf(m, " %p*",
> +					(void *)entry->user_virt_addr);
> +			}
> +			err_printf(m, ") ");
> +		}
> +	}
> +
> +	err_printf(m, "\n");
> +
> +	if (m->bytes == 0 && m->err)
> +		return m->err;
> +
> +	return 0;
> +}
> +
> +static int
> +i915_drm_gem_obj_info(int id, void *ptr, void *data)
> +{
> +	struct drm_i915_gem_object *obj = ptr;
> +	struct drm_i915_error_state_buf *m = data;
> +	int ret;
> +
> +	i915_obj_pidarray_validate(&obj->base);
> +	ret = i915_describe_obj(m, obj);
> +
> +	return ret;
> +}
> +
> +static int
> +i915_drm_gem_object_per_file_summary(int id, void *ptr, void *data)
> +{
> +	struct pid_stat_entry *pid_entry = data;
> +	struct drm_i915_gem_object *obj = ptr;
> +	struct per_file_obj_mem_info *stats = &pid_entry->stats;
> +	struct drm_hash_item *hash_item;
> +	int i, obj_shared_count = 0;
> +
> +	i915_obj_pidarray_validate(&obj->base);
> +
> +	stats->num_obj++;
> +
> +	if (obj->base.name) {
> +
> +		if (drm_ht_find_item(&pid_entry->namelist,
> +				(unsigned long)obj->base.name, &hash_item)) {
> +			struct name_entry *entry =
> +				kzalloc(sizeof(struct name_entry), GFP_KERNEL);
> +			if (entry == NULL) {
> +				DRM_ERROR("alloc failed\n");
> +				return -ENOMEM;
> +			}
> +			entry->hash_item.key = obj->base.name;
> +			drm_ht_insert_item(&pid_entry->namelist,
> +					&entry->hash_item);
> +			list_add_tail(&entry->head, &pid_entry->namefree);
> +		} else {
> +			DRM_DEBUG("Duplicate obj with name %d for process %s\n",
> +				obj->base.name, stats->process_name);
> +			return 0;
> +		}
> +		for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> +			if (obj->pid_array[i].pid != 0)
> +				obj_shared_count++;
> +		}
> +		if (WARN_ON(obj_shared_count == 0))
> +			return 1;
> +
> +		DRM_DEBUG("Obj: %p, shared count =%d\n",
> +			&obj->base, obj_shared_count);
> +
> +		if (obj_shared_count > 1)
> +			stats->num_obj_shared++;
> +		else
> +			stats->num_obj_private++;
> +	} else {
> +		obj_shared_count = 1;
> +		stats->num_obj_private++;
> +	}
> +
> +	if (i915_gem_obj_bound_any(obj)) {
> +		stats->num_obj_gtt_bound++;
> +		if (obj_shared_count > 1)
> +			stats->gtt_space_allocated_shared += obj->base.size;
> +		else
> +			stats->gtt_space_allocated_priv += obj->base.size;
> +	}
> +
> +	if (obj->stolen) {
> +		stats->num_obj_stolen++;
> +		stats->stolen_space_allocated += obj->base.size;
> +	} else if (obj->madv == __I915_MADV_PURGED) {
> +		stats->num_obj_purged++;
> +	} else if (obj->madv == I915_MADV_DONTNEED) {
> +		stats->num_obj_purgeable++;
> +		stats->num_obj_allocated++;
> +		if (i915_gem_obj_shmem_pages_alloced(obj) != 0) {
> +			stats->phys_space_purgeable += obj->base.size;
> +			if (obj_shared_count > 1) {
> +				stats->phys_space_allocated_shared +=
> +					obj->base.size;
> +				stats->phys_space_shared_proportion +=
> +					obj->base.size/obj_shared_count;
> +			} else
> +				stats->phys_space_allocated_priv +=
> +					obj->base.size;
> +		} else
> +			WARN_ON(1);
> +	} else if (i915_gem_obj_shmem_pages_alloced(obj) != 0) {
> +		stats->num_obj_allocated++;
> +			if (obj_shared_count > 1) {
> +				stats->phys_space_allocated_shared +=
> +					obj->base.size;
> +				stats->phys_space_shared_proportion +=
> +					obj->base.size/obj_shared_count;
> +			}
> +		else
> +			stats->phys_space_allocated_priv += obj->base.size;
> +	}
> +	if (obj->fault_mappable) {
> +		stats->num_obj_fault_mappable++;
> +		stats->fault_mappable_size += obj->base.size;
> +	}
> +	return 0;
> +}
> +
> +int i915_get_drm_clients_info(struct drm_i915_error_state_buf *m,
> +			struct drm_device *dev)
> +{
> +	struct drm_file *file;
> +	struct drm_i915_private *dev_priv = dev->dev_private;
> +
> +	struct name_entry *entry, *next;
> +	struct pid_stat_entry *pid_entry, *temp_entry;
> +	struct pid_stat_entry *new_pid_entry, *new_temp_entry;
> +	struct list_head per_pid_stats, sorted_pid_stats;
> +	int ret = 0, total_shared_prop_space = 0, total_priv_space = 0;
> +
> +	INIT_LIST_HEAD(&per_pid_stats);
> +	INIT_LIST_HEAD(&sorted_pid_stats);
> +
> +	err_printf(m,
> +		"\n\n  pid   Total  Shared  Priv   Purgeable  Alloced  SharedPHYsize   SharedPHYprop    PrivPHYsize   PurgeablePHYsize   process\n");
> +
> +	/* Protect the access to global drm resources such as filelist. Protect
> +	 * against their removal under our noses, while in use.
> +	 */
> +	mutex_lock(&drm_global_mutex);
> +	ret = i915_mutex_lock_interruptible(dev);
> +	if (ret) {
> +		mutex_unlock(&drm_global_mutex);
> +		return ret;
> +	}
> +
> +	list_for_each_entry(file, &dev->filelist, lhead) {
> +		struct pid *tgid;
> +		struct drm_i915_file_private *file_priv = file->driver_priv;
> +		int pid_num, found = 0;
> +
> +		tgid = file_priv->tgid;
> +		pid_num = pid_nr(tgid);
> +
> +		list_for_each_entry(pid_entry, &per_pid_stats, head) {
> +			if (pid_entry->pid_num == pid_num) {
> +				found = 1;
> +				break;
> +			}
> +		}
> +
> +		if (!found) {
> +			struct pid_stat_entry *new_entry =
> +				kzalloc(sizeof(struct pid_stat_entry),
> +					GFP_KERNEL);
> +			if (new_entry == NULL) {
> +				DRM_ERROR("alloc failed\n");
> +				ret = -ENOMEM;
> +				goto out_unlock;
> +			}
> +			new_entry->pid = tgid;
> +			new_entry->pid_num = pid_num;
> +			list_add_tail(&new_entry->head, &per_pid_stats);
> +			drm_ht_create(&new_entry->namelist,
> +				DRM_MAGIC_HASH_ORDER);
> +			INIT_LIST_HEAD(&new_entry->namefree);
> +			new_entry->stats.process_name = file_priv->process_name;
> +			pid_entry = new_entry;
> +		}
> +
> +		ret = idr_for_each(&file->object_idr,
> +			&i915_drm_gem_object_per_file_summary, pid_entry);
> +		if (ret)
> +			break;
> +	}
> +
> +	list_for_each_entry_safe(pid_entry, temp_entry, &per_pid_stats, head) {
> +		if (list_empty(&sorted_pid_stats)) {
> +			list_del(&pid_entry->head);
> +			list_add_tail(&pid_entry->head, &sorted_pid_stats);
> +			continue;
> +		}
> +
> +		list_for_each_entry_safe(new_pid_entry, new_temp_entry,
> +			&sorted_pid_stats, head) {
> +			int prev_space =
> +				pid_entry->stats.phys_space_shared_proportion +
> +				pid_entry->stats.phys_space_allocated_priv;
> +			int new_space =
> +				new_pid_entry->
> +				stats.phys_space_shared_proportion +
> +				new_pid_entry->stats.phys_space_allocated_priv;
> +			if (prev_space > new_space) {
> +				list_del(&pid_entry->head);
> +				list_add_tail(&pid_entry->head,
> +					&new_pid_entry->head);
> +				break;
> +			}
> +			if (list_is_last(&new_pid_entry->head,
> +				&sorted_pid_stats)) {
> +				list_del(&pid_entry->head);
> +				list_add_tail(&pid_entry->head,
> +						&sorted_pid_stats);
> +			}
> +		}
> +	}
> +
> +	list_for_each_entry_safe(pid_entry, temp_entry,
> +				&sorted_pid_stats, head) {
> +		struct task_struct *task = get_pid_task(pid_entry->pid,
> +							PIDTYPE_PID);
> +		err_printf(m,
> +			"%5d %6d %6d %6d %9d %8d %14zdK %14zdK %14zdK  %14zdK     %s",
> +			   pid_entry->pid_num,
> +			   pid_entry->stats.num_obj,
> +			   pid_entry->stats.num_obj_shared,
> +			   pid_entry->stats.num_obj_private,
> +			   pid_entry->stats.num_obj_purgeable,
> +			   pid_entry->stats.num_obj_allocated,
> +			   pid_entry->stats.phys_space_allocated_shared/1024,
> +			   pid_entry->stats.phys_space_shared_proportion/1024,
> +			   pid_entry->stats.phys_space_allocated_priv/1024,
> +			   pid_entry->stats.phys_space_purgeable/1024,
> +			   pid_entry->stats.process_name);
> +
> +		if (task == NULL)
> +			err_printf(m, "*\n");
> +		else
> +			err_printf(m, "\n");
> +
> +		total_shared_prop_space +=
> +			pid_entry->stats.phys_space_shared_proportion/1024;
> +		total_priv_space +=
> +			pid_entry->stats.phys_space_allocated_priv/1024;
> +		list_del(&pid_entry->head);
> +
> +		list_for_each_entry_safe(entry, next,
> +					&pid_entry->namefree, head) {
> +			list_del(&entry->head);
> +			drm_ht_remove_item(&pid_entry->namelist,
> +					&entry->hash_item);
> +			kfree(entry);
> +		}
> +		drm_ht_remove(&pid_entry->namelist);
> +		kfree(pid_entry);
> +	}
> +
> +	err_printf(m,
> +		"\t\t\t\t\t\t\t\t--------------\t-------------\t--------\n");
> +	err_printf(m,
> +		"\t\t\t\t\t\t\t\t%13zdK\t%12zdK\tTotal\n",
> +			total_shared_prop_space, total_priv_space);
> +
> +out_unlock:
> +	mutex_unlock(&dev->struct_mutex);
> +	mutex_unlock(&drm_global_mutex);
> +
> +	if (ret)
> +		return ret;
> +	if (m->bytes == 0 && m->err)
> +		return m->err;
> +
> +	return 0;
> +}
> +
> +int i915_gem_get_all_obj_info(struct drm_i915_error_state_buf *m,
> +			struct drm_device *dev)
> +{
> +	struct drm_file *file;
> +	int pid_num, ret = 0;
> +
> +	/* Protect the access to global drm resources such as filelist. Protect
> +	 * against their removal under our noses, while in use.
> +	 */
> +	mutex_lock(&drm_global_mutex);
> +	ret = i915_mutex_lock_interruptible(dev);
> +	if (ret) {
> +		mutex_unlock(&drm_global_mutex);
> +		return ret;
> +	}
> +
> +	list_for_each_entry(file, &dev->filelist, lhead) {
> +		struct pid *tgid;
> +		struct drm_i915_file_private *file_priv = file->driver_priv;
> +
> +		tgid = file_priv->tgid;
> +		pid_num = pid_nr(tgid);
> +
> +		err_printf(m, "\n\n  PID  process\n");
> +
> +		err_printf(m, "%5d  %s\n",
> +			   pid_num, file_priv->process_name);
> +
> +		err_printf(m,
> +			"\n Obj Identifier       Size Pin Tiling Dirty Shared Vmap Stolen Mappable  AllocState Global/PP  GttOffset (PID: handle count: user virt addrs)\n");
> +		ret = idr_for_each(&file->object_idr,
> +				&i915_drm_gem_obj_info, m);
> +		if (ret)
> +			break;
> +	}
> +	mutex_unlock(&dev->struct_mutex);
> +	mutex_unlock(&drm_global_mutex);
> +
> +	if (ret)
> +		return ret;
> +	if (m->bytes == 0 && m->err)
> +		return m->err;
> +
> +	return 0;
> +}
> +
> diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
> index 2c87a79..089c7df 100644
> --- a/drivers/gpu/drm/i915/i915_gpu_error.c
> +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
> @@ -161,7 +161,7 @@ static void i915_error_vprintf(struct drm_i915_error_state_buf *e,
>  	__i915_error_advance(e, len);
>  }
>  
> -static void i915_error_puts(struct drm_i915_error_state_buf *e,
> +void i915_error_puts(struct drm_i915_error_state_buf *e,
>  			    const char *str)
>  {
>  	unsigned len;
> diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
> index 503847f..b204c92 100644
> --- a/drivers/gpu/drm/i915/i915_sysfs.c
> +++ b/drivers/gpu/drm/i915/i915_sysfs.c
> @@ -582,6 +582,64 @@ static ssize_t error_state_write(struct file *file, struct kobject *kobj,
>  	return count;
>  }
>  
> +static ssize_t i915_gem_clients_state_read(struct file *filp,
> +				struct kobject *kobj,
> +				struct bin_attribute *attr,
> +				char *buf, loff_t off, size_t count)
> +{
> +	struct device *kdev = container_of(kobj, struct device, kobj);
> +	struct drm_minor *minor = dev_to_drm_minor(kdev);
> +	struct drm_device *dev = minor->dev;
> +	struct drm_i915_error_state_buf error_str;
> +	ssize_t ret_count = 0;
> +	int ret;
> +
> +	ret = i915_error_state_buf_init(&error_str, to_i915(dev), count, off);
> +	if (ret)
> +		return ret;
> +
> +	ret = i915_get_drm_clients_info(&error_str, dev);
> +	if (ret)
> +		goto out;
> +
> +	ret_count = count < error_str.bytes ? count : error_str.bytes;
> +
> +	memcpy(buf, error_str.buf, ret_count);
> +out:
> +	i915_error_state_buf_release(&error_str);
> +
> +	return ret ?: ret_count;
> +}
> +
> +static ssize_t i915_gem_objects_state_read(struct file *filp,
> +				struct kobject *kobj,
> +				struct bin_attribute *attr,
> +				char *buf, loff_t off, size_t count)
> +{
> +	struct device *kdev = container_of(kobj, struct device, kobj);
> +	struct drm_minor *minor = dev_to_drm_minor(kdev);
> +	struct drm_device *dev = minor->dev;
> +	struct drm_i915_error_state_buf error_str;
> +	ssize_t ret_count = 0;
> +	int ret;
> +
> +	ret = i915_error_state_buf_init(&error_str, to_i915(dev), count, off);
> +	if (ret)
> +		return ret;
> +
> +	ret = i915_gem_get_all_obj_info(&error_str, dev);
> +	if (ret)
> +		goto out;
> +
> +	ret_count = count < error_str.bytes ? count : error_str.bytes;
> +
> +	memcpy(buf, error_str.buf, ret_count);
> +out:
> +	i915_error_state_buf_release(&error_str);
> +
> +	return ret ?: ret_count;
> +}
> +
>  static struct bin_attribute error_state_attr = {
>  	.attr.name = "error",
>  	.attr.mode = S_IRUSR | S_IWUSR,
> @@ -590,6 +648,20 @@ static struct bin_attribute error_state_attr = {
>  	.write = error_state_write,
>  };
>  
> +static struct bin_attribute i915_gem_client_state_attr = {
> +	.attr.name = "i915_gem_meminfo",
> +	.attr.mode = S_IRUSR | S_IWUSR,
> +	.size = 0,
> +	.read = i915_gem_clients_state_read,
> +};
> +
> +static struct bin_attribute i915_gem_objects_state_attr = {
> +	.attr.name = "i915_gem_objinfo",
> +	.attr.mode = S_IRUSR | S_IWUSR,
> +	.size = 0,
> +	.read = i915_gem_objects_state_read,
> +};
> +
>  void i915_setup_sysfs(struct drm_device *dev)
>  {
>  	int ret;
> @@ -627,6 +699,17 @@ void i915_setup_sysfs(struct drm_device *dev)
>  				    &error_state_attr);
>  	if (ret)
>  		DRM_ERROR("error_state sysfs setup failed\n");
> +
> +	ret = sysfs_create_bin_file(&dev->primary->kdev->kobj,
> +				    &i915_gem_client_state_attr);
> +	if (ret)
> +		DRM_ERROR("i915_gem_client_state sysfs setup failed\n");
> +
> +	ret = sysfs_create_bin_file(&dev->primary->kdev->kobj,
> +				    &i915_gem_objects_state_attr);
> +	if (ret)
> +		DRM_ERROR("i915_gem_objects_state sysfs setup failed\n");
> +
>  }
>  
>  void i915_teardown_sysfs(struct drm_device *dev)
> -- 
> 1.8.5.1
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx
sourab.gupta@intel.com Sept. 3, 2014, 11:49 a.m. UTC | #2
On Wed, 2014-09-03 at 10:58 +0000, Daniel Vetter wrote:
> On Wed, Sep 03, 2014 at 03:39:55PM +0530, sourab.gupta@intel.com wrote:
> > From: Sourab Gupta <sourab.gupta@intel.com>
> > 
> > Currently the Graphics Driver provides an interface through which
> > one can get a snapshot of the overall Graphics memory consumption.
> > Also there is an interface available, which provides information
> > about the several memory related attributes of every single Graphics
> > buffer created by the various clients.
> > 
> > There is a requirement of a new interface for achieving below
> > functionalities:
> > 1) Need to provide Client based detailed information about the
> > distribution of Graphics memory
> > 2) Need to provide an interface which can provide info about the
> > sharing of Graphics buffers between the clients.
> > 
> > The client based interface would also aid in debugging of
> > memory usage/consumption by each client & debug memleak related issues.
> > 
> > With this new interface,
> > 1) In case of memleak scenarios, we can easily zero in on the culprit
> > client which is unexpectedly holding on the Graphics buffers for an
> > inordinate amount of time.
> > 2) We can get an estimate of the instantaneous memory footprint of
> > every Graphics client.
> > 3) We can now trace all the processes sharing a particular Graphics buffer.
> > 
> > By means of this patch we try to provide a sysfs interface to achieve
> > the mentioned functionalities.
> > 
> > There are two files created in sysfs:
> > 'i915_gem_meminfo' will provide summary of the graphics resources used by
> > each graphics client.
> > 'i915_gem_objinfo' will provide detailed view of each object created by
> > individual clients.
> > 
> > v2: Changes made for
> >     - adding support to report user virtual addresses of mapped buffers
> >     - replacing pid based reporting with tgid based one
> >     - checkpatch and other misc cleanup
> > 
> > Signed-off-by: Sourab Gupta <sourab.gupta@intel.com>
> > Signed-off-by: Akash Goel <akash.goel@intel.com>
> 
> Sorry I didn't spot this the first time around, but I think sysfs is the
> wrong place for this.
> 
> Generally sysfs is for setting/reading per-object values, and it has the
> big rule that there should be only _one_ value per file. The error state
> is a bit an exception, but otoh it's also just the full dump as a binary
> file (which for historical reasons is printed as ascii).
> 
> The other issue is that imo this should be a generic interface, so that we
> can write a gpu_top tool for dumping memory consumers which works on all
> linux platforms.
> 
> To avoid delaying for a long time can we just move ahead by putting this
> into debugfs?
> 
> Also in debugfs there's already a lot of this stuff around - why is that
> not sufficient and could we extend it somehow with the missing bits?
> 
> Thanks, Daniel

Hi Daniel,

Thanks for your inputs.
We had originally put this in sysfs because there was a requirement for
the feature to be available in production kernels as well.
We can move it to debugfs to move ahead with this. I'll submit a
debugfs version of the patch next time.

Also,
we developed this new interface to overcome the deficiencies of the
existing interfaces. With the new interface, we can provide detailed,
per-client information about the distribution of Graphics memory. It
reports the various states of the graphics objects opened by each
process (both summarized and detailed), shows which Graphics buffers
are shared between clients, and lists the user-space virtual addresses
of all mapped graphics buffers.
It was not feasible to fit all of this into the existing interfaces, so
we decided to go ahead with a new interface for this functionality.
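
As a rough illustration (not part of the patch or this thread), the
sketch below shows how a monitoring tool might parse one i915_gem_meminfo
summary row, following the row format used by i915_get_drm_clients_info()
in this patch; the %ld conversions for the size_t fields and the handling
of the trailing process name are assumptions:

#include <stdio.h>

struct client_summary {
	int pid, total, shared, priv, purgeable, alloced;
	long shared_phys_kb, shared_prop_kb, priv_phys_kb, purgeable_phys_kb;
	char process[256];
};

/* Returns 1 if the line matched the expected summary layout. */
static int parse_summary_line(const char *line, struct client_summary *s)
{
	return sscanf(line, "%d %d %d %d %d %d %ldK %ldK %ldK %ldK %255s",
		      &s->pid, &s->total, &s->shared, &s->priv,
		      &s->purgeable, &s->alloced,
		      &s->shared_phys_kb, &s->shared_prop_kb,
		      &s->priv_phys_kb, &s->purgeable_phys_kb,
		      s->process) == 11;
}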

Thanks,
Sourab

> > +		stats->num_obj_purgeable++;
> > +		stats->num_obj_allocated++;
> > +		if (i915_gem_obj_shmem_pages_alloced(obj) != 0) {
> > +			stats->phys_space_purgeable += obj->base.size;
> > +			if (obj_shared_count > 1) {
> > +				stats->phys_space_allocated_shared +=
> > +					obj->base.size;
> > +				stats->phys_space_shared_proportion +=
> > +					obj->base.size/obj_shared_count;
> > +			} else
> > +				stats->phys_space_allocated_priv +=
> > +					obj->base.size;
> > +		} else
> > +			WARN_ON(1);
> > +	} else if (i915_gem_obj_shmem_pages_alloced(obj) != 0) {
> > +		stats->num_obj_allocated++;
> > +			if (obj_shared_count > 1) {
> > +				stats->phys_space_allocated_shared +=
> > +					obj->base.size;
> > +				stats->phys_space_shared_proportion +=
> > +					obj->base.size/obj_shared_count;
> > +			}
> > +		else
> > +			stats->phys_space_allocated_priv += obj->base.size;
> > +	}
> > +	if (obj->fault_mappable) {
> > +		stats->num_obj_fault_mappable++;
> > +		stats->fault_mappable_size += obj->base.size;
> > +	}
> > +	return 0;
> > +}
> > +
> > +int i915_get_drm_clients_info(struct drm_i915_error_state_buf *m,
> > +			struct drm_device *dev)
> > +{
> > +	struct drm_file *file;
> > +	struct drm_i915_private *dev_priv = dev->dev_private;
> > +
> > +	struct name_entry *entry, *next;
> > +	struct pid_stat_entry *pid_entry, *temp_entry;
> > +	struct pid_stat_entry *new_pid_entry, *new_temp_entry;
> > +	struct list_head per_pid_stats, sorted_pid_stats;
> > +	int ret = 0, total_shared_prop_space = 0, total_priv_space = 0;
> > +
> > +	INIT_LIST_HEAD(&per_pid_stats);
> > +	INIT_LIST_HEAD(&sorted_pid_stats);
> > +
> > +	err_printf(m,
> > +		"\n\n  pid   Total  Shared  Priv   Purgeable  Alloced  SharedPHYsize   SharedPHYprop    PrivPHYsize   PurgeablePHYsize   process\n");
> > +
> > +	/* Protect the access to global drm resources such as filelist. Protect
> > +	 * against their removal under our noses, while in use.
> > +	 */
> > +	mutex_lock(&drm_global_mutex);
> > +	ret = i915_mutex_lock_interruptible(dev);
> > +	if (ret) {
> > +		mutex_unlock(&drm_global_mutex);
> > +		return ret;
> > +	}
> > +
> > +	list_for_each_entry(file, &dev->filelist, lhead) {
> > +		struct pid *tgid;
> > +		struct drm_i915_file_private *file_priv = file->driver_priv;
> > +		int pid_num, found = 0;
> > +
> > +		tgid = file_priv->tgid;
> > +		pid_num = pid_nr(tgid);
> > +
> > +		list_for_each_entry(pid_entry, &per_pid_stats, head) {
> > +			if (pid_entry->pid_num == pid_num) {
> > +				found = 1;
> > +				break;
> > +			}
> > +		}
> > +
> > +		if (!found) {
> > +			struct pid_stat_entry *new_entry =
> > +				kzalloc(sizeof(struct pid_stat_entry),
> > +					GFP_KERNEL);
> > +			if (new_entry == NULL) {
> > +				DRM_ERROR("alloc failed\n");
> > +				ret = -ENOMEM;
> > +				goto out_unlock;
> > +			}
> > +			new_entry->pid = tgid;
> > +			new_entry->pid_num = pid_num;
> > +			list_add_tail(&new_entry->head, &per_pid_stats);
> > +			drm_ht_create(&new_entry->namelist,
> > +				DRM_MAGIC_HASH_ORDER);
> > +			INIT_LIST_HEAD(&new_entry->namefree);
> > +			new_entry->stats.process_name = file_priv->process_name;
> > +			pid_entry = new_entry;
> > +		}
> > +
> > +		ret = idr_for_each(&file->object_idr,
> > +			&i915_drm_gem_object_per_file_summary, pid_entry);
> > +		if (ret)
> > +			break;
> > +	}
> > +
> > +	list_for_each_entry_safe(pid_entry, temp_entry, &per_pid_stats, head) {
> > +		if (list_empty(&sorted_pid_stats)) {
> > +			list_del(&pid_entry->head);
> > +			list_add_tail(&pid_entry->head, &sorted_pid_stats);
> > +			continue;
> > +		}
> > +
> > +		list_for_each_entry_safe(new_pid_entry, new_temp_entry,
> > +			&sorted_pid_stats, head) {
> > +			int prev_space =
> > +				pid_entry->stats.phys_space_shared_proportion +
> > +				pid_entry->stats.phys_space_allocated_priv;
> > +			int new_space =
> > +				new_pid_entry->
> > +				stats.phys_space_shared_proportion +
> > +				new_pid_entry->stats.phys_space_allocated_priv;
> > +			if (prev_space > new_space) {
> > +				list_del(&pid_entry->head);
> > +				list_add_tail(&pid_entry->head,
> > +					&new_pid_entry->head);
> > +				break;
> > +			}
> > +			if (list_is_last(&new_pid_entry->head,
> > +				&sorted_pid_stats)) {
> > +				list_del(&pid_entry->head);
> > +				list_add_tail(&pid_entry->head,
> > +						&sorted_pid_stats);
> > +			}
> > +		}
> > +	}
> > +
> > +	list_for_each_entry_safe(pid_entry, temp_entry,
> > +				&sorted_pid_stats, head) {
> > +		struct task_struct *task = get_pid_task(pid_entry->pid,
> > +							PIDTYPE_PID);
> > +		err_printf(m,
> > +			"%5d %6d %6d %6d %9d %8d %14zdK %14zdK %14zdK  %14zdK     %s",
> > +			   pid_entry->pid_num,
> > +			   pid_entry->stats.num_obj,
> > +			   pid_entry->stats.num_obj_shared,
> > +			   pid_entry->stats.num_obj_private,
> > +			   pid_entry->stats.num_obj_purgeable,
> > +			   pid_entry->stats.num_obj_allocated,
> > +			   pid_entry->stats.phys_space_allocated_shared/1024,
> > +			   pid_entry->stats.phys_space_shared_proportion/1024,
> > +			   pid_entry->stats.phys_space_allocated_priv/1024,
> > +			   pid_entry->stats.phys_space_purgeable/1024,
> > +			   pid_entry->stats.process_name);
> > +
> > +		if (task == NULL)
> > +			err_printf(m, "*\n");
> > +		else
> > +			err_printf(m, "\n");
> > +
> > +		total_shared_prop_space +=
> > +			pid_entry->stats.phys_space_shared_proportion/1024;
> > +		total_priv_space +=
> > +			pid_entry->stats.phys_space_allocated_priv/1024;
> > +		list_del(&pid_entry->head);
> > +
> > +		list_for_each_entry_safe(entry, next,
> > +					&pid_entry->namefree, head) {
> > +			list_del(&entry->head);
> > +			drm_ht_remove_item(&pid_entry->namelist,
> > +					&entry->hash_item);
> > +			kfree(entry);
> > +		}
> > +		drm_ht_remove(&pid_entry->namelist);
> > +		kfree(pid_entry);
> > +	}
> > +
> > +	err_printf(m,
> > +		"\t\t\t\t\t\t\t\t--------------\t-------------\t--------\n");
> > +	err_printf(m,
> > +		"\t\t\t\t\t\t\t\t%13zdK\t%12zdK\tTotal\n",
> > +			total_shared_prop_space, total_priv_space);
> > +
> > +out_unlock:
> > +	mutex_unlock(&dev->struct_mutex);
> > +	mutex_unlock(&drm_global_mutex);
> > +
> > +	if (ret)
> > +		return ret;
> > +	if (m->bytes == 0 && m->err)
> > +		return m->err;
> > +
> > +	return 0;
> > +}
> > +
> > +int i915_gem_get_all_obj_info(struct drm_i915_error_state_buf *m,
> > +			struct drm_device *dev)
> > +{
> > +	struct drm_file *file;
> > +	int pid_num, ret = 0;
> > +
> > +	/* Protect the access to global drm resources such as filelist. Protect
> > +	 * against their removal under our noses, while in use.
> > +	 */
> > +	mutex_lock(&drm_global_mutex);
> > +	ret = i915_mutex_lock_interruptible(dev);
> > +	if (ret) {
> > +		mutex_unlock(&drm_global_mutex);
> > +		return ret;
> > +	}
> > +
> > +	list_for_each_entry(file, &dev->filelist, lhead) {
> > +		struct pid *tgid;
> > +		struct drm_i915_file_private *file_priv = file->driver_priv;
> > +
> > +		tgid = file_priv->tgid;
> > +		pid_num = pid_nr(tgid);
> > +
> > +		err_printf(m, "\n\n  PID  process\n");
> > +
> > +		err_printf(m, "%5d  %s\n",
> > +			   pid_num, file_priv->process_name);
> > +
> > +		err_printf(m,
> > +			"\n Obj Identifier       Size Pin Tiling Dirty Shared Vmap Stolen Mappable  AllocState Global/PP  GttOffset (PID: handle count: user virt addrs)\n");
> > +		ret = idr_for_each(&file->object_idr,
> > +				&i915_drm_gem_obj_info, m);
> > +		if (ret)
> > +			break;
> > +	}
> > +	mutex_unlock(&dev->struct_mutex);
> > +	mutex_unlock(&drm_global_mutex);
> > +
> > +	if (ret)
> > +		return ret;
> > +	if (m->bytes == 0 && m->err)
> > +		return m->err;
> > +
> > +	return 0;
> > +}
> > +
> > diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
> > index 2c87a79..089c7df 100644
> > --- a/drivers/gpu/drm/i915/i915_gpu_error.c
> > +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
> > @@ -161,7 +161,7 @@ static void i915_error_vprintf(struct drm_i915_error_state_buf *e,
> >  	__i915_error_advance(e, len);
> >  }
> >  
> > -static void i915_error_puts(struct drm_i915_error_state_buf *e,
> > +void i915_error_puts(struct drm_i915_error_state_buf *e,
> >  			    const char *str)
> >  {
> >  	unsigned len;
> > diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
> > index 503847f..b204c92 100644
> > --- a/drivers/gpu/drm/i915/i915_sysfs.c
> > +++ b/drivers/gpu/drm/i915/i915_sysfs.c
> > @@ -582,6 +582,64 @@ static ssize_t error_state_write(struct file *file, struct kobject *kobj,
> >  	return count;
> >  }
> >  
> > +static ssize_t i915_gem_clients_state_read(struct file *filp,
> > +				struct kobject *kobj,
> > +				struct bin_attribute *attr,
> > +				char *buf, loff_t off, size_t count)
> > +{
> > +	struct device *kdev = container_of(kobj, struct device, kobj);
> > +	struct drm_minor *minor = dev_to_drm_minor(kdev);
> > +	struct drm_device *dev = minor->dev;
> > +	struct drm_i915_error_state_buf error_str;
> > +	ssize_t ret_count = 0;
> > +	int ret;
> > +
> > +	ret = i915_error_state_buf_init(&error_str, to_i915(dev), count, off);
> > +	if (ret)
> > +		return ret;
> > +
> > +	ret = i915_get_drm_clients_info(&error_str, dev);
> > +	if (ret)
> > +		goto out;
> > +
> > +	ret_count = count < error_str.bytes ? count : error_str.bytes;
> > +
> > +	memcpy(buf, error_str.buf, ret_count);
> > +out:
> > +	i915_error_state_buf_release(&error_str);
> > +
> > +	return ret ?: ret_count;
> > +}
> > +
> > +static ssize_t i915_gem_objects_state_read(struct file *filp,
> > +				struct kobject *kobj,
> > +				struct bin_attribute *attr,
> > +				char *buf, loff_t off, size_t count)
> > +{
> > +	struct device *kdev = container_of(kobj, struct device, kobj);
> > +	struct drm_minor *minor = dev_to_drm_minor(kdev);
> > +	struct drm_device *dev = minor->dev;
> > +	struct drm_i915_error_state_buf error_str;
> > +	ssize_t ret_count = 0;
> > +	int ret;
> > +
> > +	ret = i915_error_state_buf_init(&error_str, to_i915(dev), count, off);
> > +	if (ret)
> > +		return ret;
> > +
> > +	ret = i915_gem_get_all_obj_info(&error_str, dev);
> > +	if (ret)
> > +		goto out;
> > +
> > +	ret_count = count < error_str.bytes ? count : error_str.bytes;
> > +
> > +	memcpy(buf, error_str.buf, ret_count);
> > +out:
> > +	i915_error_state_buf_release(&error_str);
> > +
> > +	return ret ?: ret_count;
> > +}
> > +
> >  static struct bin_attribute error_state_attr = {
> >  	.attr.name = "error",
> >  	.attr.mode = S_IRUSR | S_IWUSR,
> > @@ -590,6 +648,20 @@ static struct bin_attribute error_state_attr = {
> >  	.write = error_state_write,
> >  };
> >  
> > +static struct bin_attribute i915_gem_client_state_attr = {
> > +	.attr.name = "i915_gem_meminfo",
> > +	.attr.mode = S_IRUSR | S_IWUSR,
> > +	.size = 0,
> > +	.read = i915_gem_clients_state_read,
> > +};
> > +
> > +static struct bin_attribute i915_gem_objects_state_attr = {
> > +	.attr.name = "i915_gem_objinfo",
> > +	.attr.mode = S_IRUSR | S_IWUSR,
> > +	.size = 0,
> > +	.read = i915_gem_objects_state_read,
> > +};
> > +
> >  void i915_setup_sysfs(struct drm_device *dev)
> >  {
> >  	int ret;
> > @@ -627,6 +699,17 @@ void i915_setup_sysfs(struct drm_device *dev)
> >  				    &error_state_attr);
> >  	if (ret)
> >  		DRM_ERROR("error_state sysfs setup failed\n");
> > +
> > +	ret = sysfs_create_bin_file(&dev->primary->kdev->kobj,
> > +				    &i915_gem_client_state_attr);
> > +	if (ret)
> > +		DRM_ERROR("i915_gem_client_state sysfs setup failed\n");
> > +
> > +	ret = sysfs_create_bin_file(&dev->primary->kdev->kobj,
> > +				    &i915_gem_objects_state_attr);
> > +	if (ret)
> > +		DRM_ERROR("i915_gem_objects_state sysfs setup failed\n");
> > +
> >  }
> >  
> >  void i915_teardown_sysfs(struct drm_device *dev)
> > -- 
> > 1.8.5.1
> > 
> > _______________________________________________
> > Intel-gfx mailing list
> > Intel-gfx@lists.freedesktop.org
> > http://lists.freedesktop.org/mailman/listinfo/intel-gfx
>
Daniel Vetter Sept. 3, 2014, 1:09 p.m. UTC | #3
On Wed, Sep 03, 2014 at 11:49:52AM +0000, Gupta, Sourab wrote:
> On Wed, 2014-09-03 at 10:58 +0000, Daniel Vetter wrote:
> > On Wed, Sep 03, 2014 at 03:39:55PM +0530, sourab.gupta@intel.com wrote:
> > > From: Sourab Gupta <sourab.gupta@intel.com>
> > > 
> > > Currently the Graphics Driver provides an interface through which
> > > one can get a snapshot of the overall Graphics memory consumption.
> > > Also there is an interface available, which provides information
> > > about the several memory related attributes of every single Graphics
> > > buffer created by the various clients.
> > > 
> > > There is a requirement of a new interface for achieving below
> > > functionalities:
> > > 1) Need to provide Client based detailed information about the
> > > distribution of Graphics memory
> > > 2) Need to provide an interface which can provide info about the
> > > sharing of Graphics buffers between the clients.
> > > 
> > > The client based interface would also aid in debugging of
> > > memory usage/consumption by each client & debug memleak related issues.
> > > 
> > > With this new interface,
> > > 1) In case of memleak scenarios, we can easily zero in on the culprit
> > > client which is unexpectedly holding on the Graphics buffers for an
> > > inordinate amount of time.
> > > 2) We can get an estimate of the instantaneous memory footprint of
> > > every Graphics client.
> > > 3) We can now trace all the processes sharing a particular Graphics buffer.
> > > 
> > > By means of this patch we try to provide a sysfs interface to achieve
> > > the mentioned functionalities.
> > > 
> > > There are two files created in sysfs:
> > > 'i915_gem_meminfo' will provide summary of the graphics resources used by
> > > each graphics client.
> > > 'i915_gem_objinfo' will provide detailed view of each object created by
> > > individual clients.
> > > 
> > > v2: Changes made for
> > >     - adding support to report user virtual addresses of mapped buffers
> > >     - replacing pid based reporting with tgid based one
> > >     - checkpatch and other misc cleanup
> > > 
> > > Signed-off-by: Sourab Gupta <sourab.gupta@intel.com>
> > > Signed-off-by: Akash Goel <akash.goel@intel.com>
> > 
> > Sorry I didn't spot this the first time around, but I think sysfs is the
> > wrong place for this.
> > 
> > Generally sysfs is for setting/reading per-object values, and it has the
> > big rule that there should be only _one_ value per file. The error state
> > is a bit an exception, but otoh it's also just the full dump as a binary
> > file (which for historical reasons is printed as ascii).
> > 
> > The other issue is that imo this should be a generic interface, so that we
> > can write a gpu_top tool for dumping memory consumers which works on all
> > linux platforms.
> > 
> > To avoid delaying for a long time can we just move ahead by putting this
> > into debugfs?
> > 
> > Also in debugfs there's already a lot of this stuff around - why is that
> > not sufficient and could we extend it somehow with the missing bits?
> > 
> > Thanks, Daniel
> 
> Hi Daniel,
> 
> Thanks for your inputs.
> We had originally put the patch in sysfs, as there was a requirement for
> this feature to be available in production kernels also.
> We can move it to debugfs to move ahead with this. I'll submit the
> debugfs version of this patch next time.

Yeah sysfs is the only place where we have a stable api, but that also
implies that requirements are a _lot_ more stringent. At least we need
testcases to make sure the interfaces actually do what we want them to do,
and to make sure we don't break the interface by accident.
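
A minimal sketch of what such a read-back check could look like from userspace
is below. It only verifies that the proposed attribute can be read and carries
the expected summary header; the card0 path and the file name are taken from
the patch, and this is an illustration rather than an existing igt test.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Attribute name taken from the patch; adjust the card index as needed. */
	const char *path = "/sys/class/drm/card0/i915_gem_meminfo";
	char buf[8192];
	ssize_t len;
	int fd = open(path, O_RDONLY);

	if (fd < 0) {
		perror(path);
		return EXIT_FAILURE;
	}

	len = read(fd, buf, sizeof(buf) - 1);
	close(fd);
	if (len <= 0) {
		fprintf(stderr, "empty or unreadable: %s\n", path);
		return EXIT_FAILURE;
	}
	buf[len] = '\0';

	/* i915_get_drm_clients_info() starts the summary table with a
	 * "pid ... process" header line; check that it is present. */
	if (!strstr(buf, "pid") || !strstr(buf, "process")) {
		fprintf(stderr, "unexpected format in %s\n", path);
		return EXIT_FAILURE;
	}

	fputs(buf, stdout);
	return EXIT_SUCCESS;
}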

> Also,
> we developed this new interface to overcome the deficiencies of existing
> interface. With this new interface, we can provide client based detailed
> information about the distribution of Graphics memory. This gives
> information about the various states of the graphics objects opened per
> process (summarized as well as detailed info)
> It also gives information about Graphics buffers shared between the
> clients, and gives user mapped virtual address of all the mapped
> graphics buffers.
> It was not feasible to fit all this info in the existing interface. So
> we decided to go ahead with new interface for these functionality.

Well the problem is that adding more files like that increases the
maintenance burden. So if there's some way to compute the information you
want from the information already provided in debugfs, then I'd prefer we do
that first.
-Daniel

> 
> Thanks,
> Sourab
> 
> > 
> > > ---
> > >  drivers/gpu/drm/i915/i915_dma.c       |   1 +
> > >  drivers/gpu/drm/i915/i915_drv.c       |   2 +
> > >  drivers/gpu/drm/i915/i915_drv.h       |  26 ++
> > >  drivers/gpu/drm/i915/i915_gem.c       | 169 ++++++++++-
> > >  drivers/gpu/drm/i915/i915_gem_debug.c | 542 ++++++++++++++++++++++++++++++++++
> > >  drivers/gpu/drm/i915/i915_gpu_error.c |   2 +-
> > >  drivers/gpu/drm/i915/i915_sysfs.c     |  83 ++++++
> > >  7 files changed, 822 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
> > > index a58fed9..7ea3250 100644
> > > --- a/drivers/gpu/drm/i915/i915_dma.c
> > > +++ b/drivers/gpu/drm/i915/i915_dma.c
> > > @@ -1985,6 +1985,7 @@ void i915_driver_postclose(struct drm_device *dev, struct drm_file *file)
> > >  {
> > >  	struct drm_i915_file_private *file_priv = file->driver_priv;
> > >  
> > > +	kfree(file_priv->process_name);
> > >  	if (file_priv && file_priv->bsd_ring)
> > >  		file_priv->bsd_ring = NULL;
> > >  	kfree(file_priv);
> > > diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> > > index 1d6d9ac..9bee20e 100644
> > > --- a/drivers/gpu/drm/i915/i915_drv.c
> > > +++ b/drivers/gpu/drm/i915/i915_drv.c
> > > @@ -1628,6 +1628,8 @@ static struct drm_driver driver = {
> > >  	.debugfs_init = i915_debugfs_init,
> > >  	.debugfs_cleanup = i915_debugfs_cleanup,
> > >  #endif
> > > +	.gem_open_object = i915_gem_open_object,
> > > +	.gem_close_object = i915_gem_close_object,
> > >  	.gem_free_object = i915_gem_free_object,
> > >  	.gem_vm_ops = &i915_gem_vm_ops,
> > >  
> > > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> > > index 36f3da6..43ba7c4 100644
> > > --- a/drivers/gpu/drm/i915/i915_drv.h
> > > +++ b/drivers/gpu/drm/i915/i915_drv.h
> > > @@ -1765,6 +1765,11 @@ struct drm_i915_gem_object_ops {
> > >  #define INTEL_FRONTBUFFER_ALL_MASK(pipe) \
> > >  	(0xf << (INTEL_FRONTBUFFER_BITS_PER_PIPE * (pipe)))
> > >  
> > > +struct drm_i915_obj_virt_addr {
> > > +	struct list_head head;
> > > +	unsigned long user_virt_addr;
> > > +};
> > > +
> > >  struct drm_i915_gem_object {
> > >  	struct drm_gem_object base;
> > >  
> > > @@ -1890,6 +1895,13 @@ struct drm_i915_gem_object {
> > >  			struct work_struct *work;
> > >  		} userptr;
> > >  	};
> > > +
> > > +#define MAX_OPEN_HANDLE 20
> > > +	struct {
> > > +		struct list_head virt_addr_head;
> > > +		pid_t pid;
> > > +		int open_handle_count;
> > > +	} pid_array[MAX_OPEN_HANDLE];
> > >  };
> > >  #define to_intel_bo(x) container_of(x, struct drm_i915_gem_object, base)
> > >  
> > > @@ -1940,6 +1952,8 @@ struct drm_i915_gem_request {
> > >  struct drm_i915_file_private {
> > >  	struct drm_i915_private *dev_priv;
> > >  	struct drm_file *file;
> > > +	char *process_name;
> > > +	struct pid *tgid;
> > >  
> > >  	struct {
> > >  		spinlock_t lock;
> > > @@ -2370,6 +2384,10 @@ void i915_init_vm(struct drm_i915_private *dev_priv,
> > >  		  struct i915_address_space *vm);
> > >  void i915_gem_free_object(struct drm_gem_object *obj);
> > >  void i915_gem_vma_destroy(struct i915_vma *vma);
> > > +int i915_gem_open_object(struct drm_gem_object *gem_obj,
> > > +			struct drm_file *file_priv);
> > > +int i915_gem_close_object(struct drm_gem_object *gem_obj,
> > > +			struct drm_file *file_priv);
> > >  
> > >  #define PIN_MAPPABLE 0x1
> > >  #define PIN_NONBLOCK 0x2
> > > @@ -2420,6 +2438,8 @@ int i915_gem_dumb_create(struct drm_file *file_priv,
> > >  			 struct drm_mode_create_dumb *args);
> > >  int i915_gem_mmap_gtt(struct drm_file *file_priv, struct drm_device *dev,
> > >  		      uint32_t handle, uint64_t *offset);
> > > +int i915_gem_obj_shmem_pages_alloced(struct drm_i915_gem_object *obj);
> > > +
> > >  /**
> > >   * Returns true if seq1 is later than seq2.
> > >   */
> > > @@ -2686,6 +2706,10 @@ int i915_verify_lists(struct drm_device *dev);
> > >  #else
> > >  #define i915_verify_lists(dev) 0
> > >  #endif
> > > +int i915_get_drm_clients_info(struct drm_i915_error_state_buf *m,
> > > +				struct drm_device *dev);
> > > +int i915_gem_get_all_obj_info(struct drm_i915_error_state_buf *m,
> > > +				struct drm_device *dev);
> > >  
> > >  /* i915_debugfs.c */
> > >  int i915_debugfs_init(struct drm_minor *minor);
> > > @@ -2699,6 +2723,8 @@ static inline void intel_display_crc_init(struct drm_device *dev) {}
> > >  /* i915_gpu_error.c */
> > >  __printf(2, 3)
> > >  void i915_error_printf(struct drm_i915_error_state_buf *e, const char *f, ...);
> > > +void i915_error_puts(struct drm_i915_error_state_buf *e,
> > > +			    const char *str);
> > >  int i915_error_state_to_str(struct drm_i915_error_state_buf *estr,
> > >  			    const struct i915_error_state_file_priv *error);
> > >  int i915_error_state_buf_init(struct drm_i915_error_state_buf *eb,
> > > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > > index 6c68570..3c36486 100644
> > > --- a/drivers/gpu/drm/i915/i915_gem.c
> > > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > > @@ -1461,6 +1461,45 @@ unlock:
> > >  	return ret;
> > >  }
> > >  
> > > +static void
> > > +i915_gem_obj_insert_virt_addr(struct drm_i915_gem_object *obj,
> > > +				unsigned long addr,
> > > +				bool is_map_gtt)
> > > +{
> > > +	pid_t current_pid = task_tgid_nr(current);
> > > +	int i, found = 0;
> > > +
> > > +	if (is_map_gtt)
> > > +		addr |= 1;
> > > +
> > > +	for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> > > +		if (obj->pid_array[i].pid == current_pid) {
> > > +			struct drm_i915_obj_virt_addr *entry, *new_entry;
> > > +
> > > +			list_for_each_entry(entry,
> > > +					    &obj->pid_array[i].virt_addr_head,
> > > +					    head) {
> > > +				if (entry->user_virt_addr == addr) {
> > > +					found = 1;
> > > +					break;
> > > +				}
> > > +			}
> > > +			if (found)
> > > +				break;
> > > +			new_entry = kzalloc
> > > +				(sizeof(struct drm_i915_obj_virt_addr),
> > > +				GFP_KERNEL);
> > > +			new_entry->user_virt_addr = addr;
> > > +			list_add_tail(&new_entry->head,
> > > +				&obj->pid_array[i].virt_addr_head);
> > > +			break;
> > > +		}
> > > +	}
> > > +	if (i == MAX_OPEN_HANDLE)
> > > +		DRM_DEBUG("Couldn't find matching pid %d for obj 0x%x\n",
> > > +			current_pid, (u32) obj);
> > > +}
> > > +
> > >  /**
> > >   * Maps the contents of an object, returning the address it is mapped
> > >   * into.
> > > @@ -1495,6 +1534,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
> > >  	if (IS_ERR((void *)addr))
> > >  		return addr;
> > >  
> > > +	i915_gem_obj_insert_virt_addr(to_intel_bo(obj), addr, false);
> > >  	args->addr_ptr = (uint64_t) addr;
> > >  
> > >  	return 0;
> > > @@ -1585,6 +1625,8 @@ int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
> > >  		}
> > >  
> > >  		obj->fault_mappable = true;
> > > +		i915_gem_obj_insert_virt_addr(obj,
> > > +			(unsigned long)vma->vm_start, true);
> > >  	} else
> > >  		ret = vm_insert_pfn(vma,
> > >  				    (unsigned long)vmf->virtual_address,
> > > @@ -1830,6 +1872,24 @@ i915_gem_object_is_purgeable(struct drm_i915_gem_object *obj)
> > >  	return obj->madv == I915_MADV_DONTNEED;
> > >  }
> > >  
> > > +int i915_gem_obj_shmem_pages_alloced(struct drm_i915_gem_object *obj)
> > > +{
> > > +	int ret;
> > > +
> > > +	if (obj->base.filp) {
> > > +		struct inode *inode = file_inode(obj->base.filp);
> > > +		struct shmem_inode_info *info = SHMEM_I(inode);
> > > +
> > > +		if (!inode)
> > > +			return 0;
> > > +		spin_lock(&info->lock);
> > > +		ret = inode->i_mapping->nrpages;
> > > +		spin_unlock(&info->lock);
> > > +		return ret;
> > > +	}
> > > +	return 0;
> > > +}
> > > +
> > >  /* Immediately discard the backing storage */
> > >  static void
> > >  i915_gem_object_truncate(struct drm_i915_gem_object *obj)
> > > @@ -4447,6 +4507,79 @@ static bool discard_backing_storage(struct drm_i915_gem_object *obj)
> > >  	return atomic_long_read(&obj->base.filp->f_count) == 1;
> > >  }
> > >  
> > > +int
> > > +i915_gem_open_object(struct drm_gem_object *gem_obj,
> > > +			struct drm_file *file_priv)
> > > +{
> > > +	struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
> > > +	pid_t current_pid = task_tgid_nr(current);
> > > +	int i, ret, free = -1;
> > > +
> > > +	ret = i915_mutex_lock_interruptible(gem_obj->dev);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> > > +		if (obj->pid_array[i].pid == current_pid) {
> > > +			obj->pid_array[i].open_handle_count++;
> > > +			break;
> > > +		} else if (obj->pid_array[i].pid == 0)
> > > +			free = i;
> > > +	}
> > > +
> > > +	if (i == MAX_OPEN_HANDLE) {
> > > +		if (free != -1) {
> > > +			WARN_ON(obj->pid_array[free].open_handle_count);
> > > +			obj->pid_array[free].open_handle_count = 1;
> > > +			obj->pid_array[free].pid = current_pid;
> > > +			INIT_LIST_HEAD(&obj->pid_array[free].virt_addr_head);
> > > +		} else
> > > +			DRM_DEBUG("Max open handle count limit: obj 0x%x\n",
> > > +					(u32) obj);
> > > +	}
> > > +
> > > +	mutex_unlock(&gem_obj->dev->struct_mutex);
> > > +	return 0;
> > > +}
> > > +
> > > +int
> > > +i915_gem_close_object(struct drm_gem_object *gem_obj,
> > > +			struct drm_file *file_priv)
> > > +{
> > > +	struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
> > > +	pid_t current_pid = task_tgid_nr(current);
> > > +	int i, ret;
> > > +
> > > +	ret = i915_mutex_lock_interruptible(gem_obj->dev);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> > > +		if (obj->pid_array[i].pid == current_pid) {
> > > +			obj->pid_array[i].open_handle_count--;
> > > +			if (obj->pid_array[i].open_handle_count == 0) {
> > > +				struct drm_i915_obj_virt_addr *entry, *next;
> > > +
> > > +				list_for_each_entry_safe(entry, next,
> > > +					&obj->pid_array[i].virt_addr_head,
> > > +					head) {
> > > +					list_del(&entry->head);
> > > +					kfree(entry);
> > > +				}
> > > +				obj->pid_array[i].pid = 0;
> > > +			}
> > > +			break;
> > > +		}
> > > +	}
> > > +	if (i == MAX_OPEN_HANDLE)
> > > +		DRM_DEBUG("Couldn't find matching pid %d for obj 0x%x\n",
> > > +				current_pid, (u32) obj);
> > > +
> > > +	mutex_unlock(&gem_obj->dev->struct_mutex);
> > > +	return 0;
> > > +}
> > > +
> > > +
> > >  void i915_gem_free_object(struct drm_gem_object *gem_obj)
> > >  {
> > >  	struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
> > > @@ -5072,13 +5205,37 @@ i915_gem_file_idle_work_handler(struct work_struct *work)
> > >  	atomic_set(&file_priv->rps_wait_boost, false);
> > >  }
> > >  
> > > +static int i915_gem_get_pid_cmdline(struct task_struct *task, char *buffer)
> > > +{
> > > +	int res = 0;
> > > +	unsigned int len;
> > > +	struct mm_struct *mm = get_task_mm(task);
> > > +
> > > +	if (!mm)
> > > +		goto out;
> > > +	if (!mm->arg_end)
> > > +		goto out_mm;
> > > +
> > > +	len = mm->arg_end - mm->arg_start;
> > > +
> > > +	if (len > PAGE_SIZE)
> > > +		len = PAGE_SIZE;
> > > +
> > > +	res = access_process_vm(task, mm->arg_start, buffer, len, 0);
> > > +
> > > +	if (res > 0 && buffer[res-1] != '\0' && len < PAGE_SIZE)
> > > +		buffer[res-1] = '\0';
> > > +out_mm:
> > > +	mmput(mm);
> > > +out:
> > > +	return res;
> > > +}
> > > +
> > >  int i915_gem_open(struct drm_device *dev, struct drm_file *file)
> > >  {
> > >  	struct drm_i915_file_private *file_priv;
> > >  	int ret;
> > >  
> > > -	DRM_DEBUG_DRIVER("\n");
> > > -
> > >  	file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
> > >  	if (!file_priv)
> > >  		return -ENOMEM;
> > > @@ -5086,6 +5243,14 @@ int i915_gem_open(struct drm_device *dev, struct drm_file *file)
> > >  	file->driver_priv = file_priv;
> > >  	file_priv->dev_priv = dev->dev_private;
> > >  	file_priv->file = file;
> > > +	file_priv->tgid = find_vpid(task_tgid_nr(current));
> > > +	file_priv->process_name =  kzalloc(PAGE_SIZE, GFP_ATOMIC);
> > > +	if (!file_priv->process_name) {
> > > +		kfree(file_priv);
> > > +		return -ENOMEM;
> > > +	}
> > > +
> > > +	ret = i915_gem_get_pid_cmdline(current, file_priv->process_name);
> > >  
> > >  	spin_lock_init(&file_priv->mm.lock);
> > >  	INIT_LIST_HEAD(&file_priv->mm.request_list);
> > > diff --git a/drivers/gpu/drm/i915/i915_gem_debug.c b/drivers/gpu/drm/i915/i915_gem_debug.c
> > > index f462d1b..7a42891 100644
> > > --- a/drivers/gpu/drm/i915/i915_gem_debug.c
> > > +++ b/drivers/gpu/drm/i915/i915_gem_debug.c
> > > @@ -25,6 +25,7 @@
> > >   *
> > >   */
> > >  
> > > +#include <linux/pid.h>
> > >  #include <drm/drmP.h>
> > >  #include <drm/i915_drm.h>
> > >  #include "i915_drv.h"
> > > @@ -116,3 +117,544 @@ i915_verify_lists(struct drm_device *dev)
> > >  	return warned = err;
> > >  }
> > >  #endif /* WATCH_LIST */
> > > +
> > > +struct per_file_obj_mem_info {
> > > +	int num_obj;
> > > +	int num_obj_shared;
> > > +	int num_obj_private;
> > > +	int num_obj_gtt_bound;
> > > +	int num_obj_purged;
> > > +	int num_obj_purgeable;
> > > +	int num_obj_allocated;
> > > +	int num_obj_fault_mappable;
> > > +	int num_obj_stolen;
> > > +	size_t gtt_space_allocated_shared;
> > > +	size_t gtt_space_allocated_priv;
> > > +	size_t phys_space_allocated_shared;
> > > +	size_t phys_space_allocated_priv;
> > > +	size_t phys_space_purgeable;
> > > +	size_t phys_space_shared_proportion;
> > > +	size_t fault_mappable_size;
> > > +	size_t stolen_space_allocated;
> > > +	char *process_name;
> > > +};
> > > +
> > > +struct name_entry {
> > > +	struct list_head head;
> > > +	struct drm_hash_item hash_item;
> > > +};
> > > +
> > > +struct pid_stat_entry {
> > > +	struct list_head head;
> > > +	struct list_head namefree;
> > > +	struct drm_open_hash namelist;
> > > +	struct per_file_obj_mem_info stats;
> > > +	struct pid *pid;
> > > +	int pid_num;
> > > +};
> > > +
> > > +
> > > +#define err_printf(e, ...) i915_error_printf(e, __VA_ARGS__)
> > > +#define err_puts(e, s) i915_error_puts(e, s)
> > > +
> > > +static const char *get_pin_flag(struct drm_i915_gem_object *obj)
> > > +{
> > > +	if (obj->user_pin_count > 0)
> > > +		return "P";
> > > +	else if (i915_gem_obj_is_pinned(obj))
> > > +		return "p";
> > > +	return " ";
> > > +}
> > > +
> > > +static const char *get_tiling_flag(struct drm_i915_gem_object *obj)
> > > +{
> > > +	switch (obj->tiling_mode) {
> > > +	default:
> > > +	case I915_TILING_NONE: return " ";
> > > +	case I915_TILING_X: return "X";
> > > +	case I915_TILING_Y: return "Y";
> > > +	}
> > > +}
> > > +
> > > +static int i915_obj_virt_addr_is_valid(struct drm_gem_object *obj,
> > > +				struct pid *pid, unsigned long addr)
> > > +{
> > > +	struct task_struct *task;
> > > +	struct mm_struct *mm;
> > > +	struct vm_area_struct *vma;
> > > +	int locked, ret = 0;
> > > +
> > > +	task = get_pid_task(pid, PIDTYPE_PID);
> > > +	if (task == NULL) {
> > > +		DRM_DEBUG("null task for pid=%d\n", pid_nr(pid));
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	mm = get_task_mm(task);
> > > +	if (mm == NULL) {
> > > +		DRM_DEBUG("null mm for pid=%d\n", pid_nr(pid));
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	locked = down_read_trylock(&mm->mmap_sem);
> > > +
> > > +	vma = find_vma(mm, addr);
> > > +	if (vma) {
> > > +		if (addr & 1) { /* mmap_gtt case */
> > > +			if (vma->vm_pgoff*PAGE_SIZE == (unsigned long)
> > > +				drm_vma_node_offset_addr(&obj->vma_node))
> > > +				ret = 0;
> > > +			else
> > > +				ret = -EINVAL;
> > > +		} else { /* mmap case */
> > > +			if (vma->vm_file == obj->filp)
> > > +				ret = 0;
> > > +			else
> > > +				ret = -EINVAL;
> > > +		}
> > > +	} else
> > > +		ret = -EINVAL;
> > > +
> > > +	if (locked)
> > > +		up_read(&mm->mmap_sem);
> > > +
> > > +	mmput(mm);
> > > +	return ret;
> > > +}
> > > +
> > > +static void i915_obj_pidarray_validate(struct drm_gem_object *gem_obj)
> > > +{
> > > +	struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
> > > +	struct drm_device *dev = gem_obj->dev;
> > > +	struct drm_i915_obj_virt_addr *entry, *next;
> > > +	struct drm_file *file;
> > > +	struct drm_i915_file_private *file_priv;
> > > +	struct pid *tgid;
> > > +	int pid_num, i, present;
> > > +
> > > +	/* Run a sanity check on pid_array. All entries in pid_array should
> > > +	 * be a subset of the drm filelist pid entries.
> > > +	 */
> > > +	for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> > > +		if (obj->pid_array[i].pid == 0)
> > > +			continue;
> > > +
> > > +		present = 0;
> > > +		list_for_each_entry(file, &dev->filelist, lhead) {
> > > +			file_priv = file->driver_priv;
> > > +			tgid = file_priv->tgid;
> > > +			pid_num = pid_nr(tgid);
> > > +
> > > +			if (pid_num == obj->pid_array[i].pid) {
> > > +				present = 1;
> > > +				break;
> > > +			}
> > > +		}
> > > +		if (present == 0) {
> > > +			DRM_DEBUG("stale_pid=%d\n", obj->pid_array[i].pid);
> > > +			list_for_each_entry_safe(entry, next,
> > > +					&obj->pid_array[i].virt_addr_head,
> > > +					head) {
> > > +				list_del(&entry->head);
> > > +				kfree(entry);
> > > +			}
> > > +
> > > +			obj->pid_array[i].open_handle_count = 0;
> > > +			obj->pid_array[i].pid = 0;
> > > +		} else {
> > > +			/* Validate the virtual address list */
> > > +			struct task_struct *task =
> > > +				get_pid_task(tgid, PIDTYPE_PID);
> > > +			if (task == NULL)
> > > +				continue;
> > > +
> > > +			list_for_each_entry_safe(entry, next,
> > > +					&obj->pid_array[i].virt_addr_head,
> > > +					head) {
> > > +				if (i915_obj_virt_addr_is_valid(gem_obj, tgid,
> > > +				entry->user_virt_addr)) {
> > > +					DRM_DEBUG("stale_addr=%ld\n",
> > > +					entry->user_virt_addr);
> > > +					list_del(&entry->head);
> > > +					kfree(entry);
> > > +				}
> > > +			}
> > > +		}
> > > +	}
> > > +}
> > > +
> > > +static int
> > > +i915_describe_obj(struct drm_i915_error_state_buf *m,
> > > +		struct drm_i915_gem_object *obj)
> > > +{
> > > +	int i;
> > > +	struct i915_vma *vma;
> > > +	struct drm_i915_obj_virt_addr *entry;
> > > +
> > > +	err_printf(m,
> > > +		"%p: %7zdK  %s    %s     %s      %s     %s      %s       %s     ",
> > > +		   &obj->base,
> > > +		   obj->base.size / 1024,
> > > +		   get_pin_flag(obj),
> > > +		   get_tiling_flag(obj),
> > > +		   obj->dirty ? "Y" : "N",
> > > +		   obj->base.name ? "Y" : "N",
> > > +		   (obj->userptr.mm != 0) ? "Y" : "N",
> > > +		   obj->stolen ? "Y" : "N",
> > > +		   (obj->pin_mappable || obj->fault_mappable) ? "Y" : "N");
> > > +
> > > +	if (obj->madv == __I915_MADV_PURGED)
> > > +		err_printf(m, " purged    ");
> > > +	else if (obj->madv == I915_MADV_DONTNEED)
> > > +		err_printf(m, " purgeable   ");
> > > +	else if (i915_gem_obj_shmem_pages_alloced(obj) != 0)
> > > +		err_printf(m, " allocated   ");
> > > +
> > > +
> > > +	list_for_each_entry(vma, &obj->vma_list, vma_link) {
> > > +		if (!i915_is_ggtt(vma->vm))
> > > +			err_puts(m, " PP    ");
> > > +		else
> > > +			err_puts(m, " G     ");
> > > +		err_printf(m, "  %08lx ", vma->node.start);
> > > +	}
> > > +
> > > +	for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> > > +		if (obj->pid_array[i].pid != 0) {
> > > +			err_printf(m, " (%d: %d:",
> > > +			obj->pid_array[i].pid,
> > > +			obj->pid_array[i].open_handle_count);
> > > +			list_for_each_entry(entry,
> > > +				&obj->pid_array[i].virt_addr_head, head) {
> > > +				if (entry->user_virt_addr & 1)
> > > +					err_printf(m, " %p",
> > > +					(void *)(entry->user_virt_addr & ~1));
> > > +				else
> > > +					err_printf(m, " %p*",
> > > +					(void *)entry->user_virt_addr);
> > > +			}
> > > +			err_printf(m, ") ");
> > > +		}
> > > +	}
> > > +
> > > +	err_printf(m, "\n");
> > > +
> > > +	if (m->bytes == 0 && m->err)
> > > +		return m->err;
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static int
> > > +i915_drm_gem_obj_info(int id, void *ptr, void *data)
> > > +{
> > > +	struct drm_i915_gem_object *obj = ptr;
> > > +	struct drm_i915_error_state_buf *m = data;
> > > +	int ret;
> > > +
> > > +	i915_obj_pidarray_validate(&obj->base);
> > > +	ret = i915_describe_obj(m, obj);
> > > +
> > > +	return ret;
> > > +}
> > > +
> > > +static int
> > > +i915_drm_gem_object_per_file_summary(int id, void *ptr, void *data)
> > > +{
> > > +	struct pid_stat_entry *pid_entry = data;
> > > +	struct drm_i915_gem_object *obj = ptr;
> > > +	struct per_file_obj_mem_info *stats = &pid_entry->stats;
> > > +	struct drm_hash_item *hash_item;
> > > +	int i, obj_shared_count = 0;
> > > +
> > > +	i915_obj_pidarray_validate(&obj->base);
> > > +
> > > +	stats->num_obj++;
> > > +
> > > +	if (obj->base.name) {
> > > +
> > > +		if (drm_ht_find_item(&pid_entry->namelist,
> > > +				(unsigned long)obj->base.name, &hash_item)) {
> > > +			struct name_entry *entry =
> > > +				kzalloc(sizeof(struct name_entry), GFP_KERNEL);
> > > +			if (entry == NULL) {
> > > +				DRM_ERROR("alloc failed\n");
> > > +				return -ENOMEM;
> > > +			}
> > > +			entry->hash_item.key = obj->base.name;
> > > +			drm_ht_insert_item(&pid_entry->namelist,
> > > +					&entry->hash_item);
> > > +			list_add_tail(&entry->head, &pid_entry->namefree);
> > > +		} else {
> > > +			DRM_DEBUG("Duplicate obj with name %d for process %s\n",
> > > +				obj->base.name, stats->process_name);
> > > +			return 0;
> > > +		}
> > > +		for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> > > +			if (obj->pid_array[i].pid != 0)
> > > +				obj_shared_count++;
> > > +		}
> > > +		if (WARN_ON(obj_shared_count == 0))
> > > +			return 1;
> > > +
> > > +		DRM_DEBUG("Obj: %p, shared count =%d\n",
> > > +			&obj->base, obj_shared_count);
> > > +
> > > +		if (obj_shared_count > 1)
> > > +			stats->num_obj_shared++;
> > > +		else
> > > +			stats->num_obj_private++;
> > > +	} else {
> > > +		obj_shared_count = 1;
> > > +		stats->num_obj_private++;
> > > +	}
> > > +
> > > +	if (i915_gem_obj_bound_any(obj)) {
> > > +		stats->num_obj_gtt_bound++;
> > > +		if (obj_shared_count > 1)
> > > +			stats->gtt_space_allocated_shared += obj->base.size;
> > > +		else
> > > +			stats->gtt_space_allocated_priv += obj->base.size;
> > > +	}
> > > +
> > > +	if (obj->stolen) {
> > > +		stats->num_obj_stolen++;
> > > +		stats->stolen_space_allocated += obj->base.size;
> > > +	} else if (obj->madv == __I915_MADV_PURGED) {
> > > +		stats->num_obj_purged++;
> > > +	} else if (obj->madv == I915_MADV_DONTNEED) {
> > > +		stats->num_obj_purgeable++;
> > > +		stats->num_obj_allocated++;
> > > +		if (i915_gem_obj_shmem_pages_alloced(obj) != 0) {
> > > +			stats->phys_space_purgeable += obj->base.size;
> > > +			if (obj_shared_count > 1) {
> > > +				stats->phys_space_allocated_shared +=
> > > +					obj->base.size;
> > > +				stats->phys_space_shared_proportion +=
> > > +					obj->base.size/obj_shared_count;
> > > +			} else
> > > +				stats->phys_space_allocated_priv +=
> > > +					obj->base.size;
> > > +		} else
> > > +			WARN_ON(1);
> > > +	} else if (i915_gem_obj_shmem_pages_alloced(obj) != 0) {
> > > +		stats->num_obj_allocated++;
> > > +			if (obj_shared_count > 1) {
> > > +				stats->phys_space_allocated_shared +=
> > > +					obj->base.size;
> > > +				stats->phys_space_shared_proportion +=
> > > +					obj->base.size/obj_shared_count;
> > > +			}
> > > +		else
> > > +			stats->phys_space_allocated_priv += obj->base.size;
> > > +	}
> > > +	if (obj->fault_mappable) {
> > > +		stats->num_obj_fault_mappable++;
> > > +		stats->fault_mappable_size += obj->base.size;
> > > +	}
> > > +	return 0;
> > > +}
> > > +
> > > +int i915_get_drm_clients_info(struct drm_i915_error_state_buf *m,
> > > +			struct drm_device *dev)
> > > +{
> > > +	struct drm_file *file;
> > > +	struct drm_i915_private *dev_priv = dev->dev_private;
> > > +
> > > +	struct name_entry *entry, *next;
> > > +	struct pid_stat_entry *pid_entry, *temp_entry;
> > > +	struct pid_stat_entry *new_pid_entry, *new_temp_entry;
> > > +	struct list_head per_pid_stats, sorted_pid_stats;
> > > +	int ret = 0, total_shared_prop_space = 0, total_priv_space = 0;
> > > +
> > > +	INIT_LIST_HEAD(&per_pid_stats);
> > > +	INIT_LIST_HEAD(&sorted_pid_stats);
> > > +
> > > +	err_printf(m,
> > > +		"\n\n  pid   Total  Shared  Priv   Purgeable  Alloced  SharedPHYsize   SharedPHYprop    PrivPHYsize   PurgeablePHYsize   process\n");
> > > +
> > > +	/* Protect the access to global drm resources such as filelist. Protect
> > > +	 * against their removal under our noses, while in use.
> > > +	 */
> > > +	mutex_lock(&drm_global_mutex);
> > > +	ret = i915_mutex_lock_interruptible(dev);
> > > +	if (ret) {
> > > +		mutex_unlock(&drm_global_mutex);
> > > +		return ret;
> > > +	}
> > > +
> > > +	list_for_each_entry(file, &dev->filelist, lhead) {
> > > +		struct pid *tgid;
> > > +		struct drm_i915_file_private *file_priv = file->driver_priv;
> > > +		int pid_num, found = 0;
> > > +
> > > +		tgid = file_priv->tgid;
> > > +		pid_num = pid_nr(tgid);
> > > +
> > > +		list_for_each_entry(pid_entry, &per_pid_stats, head) {
> > > +			if (pid_entry->pid_num == pid_num) {
> > > +				found = 1;
> > > +				break;
> > > +			}
> > > +		}
> > > +
> > > +		if (!found) {
> > > +			struct pid_stat_entry *new_entry =
> > > +				kzalloc(sizeof(struct pid_stat_entry),
> > > +					GFP_KERNEL);
> > > +			if (new_entry == NULL) {
> > > +				DRM_ERROR("alloc failed\n");
> > > +				ret = -ENOMEM;
> > > +				goto out_unlock;
> > > +			}
> > > +			new_entry->pid = tgid;
> > > +			new_entry->pid_num = pid_num;
> > > +			list_add_tail(&new_entry->head, &per_pid_stats);
> > > +			drm_ht_create(&new_entry->namelist,
> > > +				DRM_MAGIC_HASH_ORDER);
> > > +			INIT_LIST_HEAD(&new_entry->namefree);
> > > +			new_entry->stats.process_name = file_priv->process_name;
> > > +			pid_entry = new_entry;
> > > +		}
> > > +
> > > +		ret = idr_for_each(&file->object_idr,
> > > +			&i915_drm_gem_object_per_file_summary, pid_entry);
> > > +		if (ret)
> > > +			break;
> > > +	}
> > > +
> > > +	list_for_each_entry_safe(pid_entry, temp_entry, &per_pid_stats, head) {
> > > +		if (list_empty(&sorted_pid_stats)) {
> > > +			list_del(&pid_entry->head);
> > > +			list_add_tail(&pid_entry->head, &sorted_pid_stats);
> > > +			continue;
> > > +		}
> > > +
> > > +		list_for_each_entry_safe(new_pid_entry, new_temp_entry,
> > > +			&sorted_pid_stats, head) {
> > > +			int prev_space =
> > > +				pid_entry->stats.phys_space_shared_proportion +
> > > +				pid_entry->stats.phys_space_allocated_priv;
> > > +			int new_space =
> > > +				new_pid_entry->
> > > +				stats.phys_space_shared_proportion +
> > > +				new_pid_entry->stats.phys_space_allocated_priv;
> > > +			if (prev_space > new_space) {
> > > +				list_del(&pid_entry->head);
> > > +				list_add_tail(&pid_entry->head,
> > > +					&new_pid_entry->head);
> > > +				break;
> > > +			}
> > > +			if (list_is_last(&new_pid_entry->head,
> > > +				&sorted_pid_stats)) {
> > > +				list_del(&pid_entry->head);
> > > +				list_add_tail(&pid_entry->head,
> > > +						&sorted_pid_stats);
> > > +			}
> > > +		}
> > > +	}
> > > +
> > > +	list_for_each_entry_safe(pid_entry, temp_entry,
> > > +				&sorted_pid_stats, head) {
> > > +		struct task_struct *task = get_pid_task(pid_entry->pid,
> > > +							PIDTYPE_PID);
> > > +		err_printf(m,
> > > +			"%5d %6d %6d %6d %9d %8d %14zdK %14zdK %14zdK  %14zdK     %s",
> > > +			   pid_entry->pid_num,
> > > +			   pid_entry->stats.num_obj,
> > > +			   pid_entry->stats.num_obj_shared,
> > > +			   pid_entry->stats.num_obj_private,
> > > +			   pid_entry->stats.num_obj_purgeable,
> > > +			   pid_entry->stats.num_obj_allocated,
> > > +			   pid_entry->stats.phys_space_allocated_shared/1024,
> > > +			   pid_entry->stats.phys_space_shared_proportion/1024,
> > > +			   pid_entry->stats.phys_space_allocated_priv/1024,
> > > +			   pid_entry->stats.phys_space_purgeable/1024,
> > > +			   pid_entry->stats.process_name);
> > > +
> > > +		if (task == NULL)
> > > +			err_printf(m, "*\n");
> > > +		else
> > > +			err_printf(m, "\n");
> > > +
> > > +		total_shared_prop_space +=
> > > +			pid_entry->stats.phys_space_shared_proportion/1024;
> > > +		total_priv_space +=
> > > +			pid_entry->stats.phys_space_allocated_priv/1024;
> > > +		list_del(&pid_entry->head);
> > > +
> > > +		list_for_each_entry_safe(entry, next,
> > > +					&pid_entry->namefree, head) {
> > > +			list_del(&entry->head);
> > > +			drm_ht_remove_item(&pid_entry->namelist,
> > > +					&entry->hash_item);
> > > +			kfree(entry);
> > > +		}
> > > +		drm_ht_remove(&pid_entry->namelist);
> > > +		kfree(pid_entry);
> > > +	}
> > > +
> > > +	err_printf(m,
> > > +		"\t\t\t\t\t\t\t\t--------------\t-------------\t--------\n");
> > > +	err_printf(m,
> > > +		"\t\t\t\t\t\t\t\t%13zdK\t%12zdK\tTotal\n",
> > > +			total_shared_prop_space, total_priv_space);
> > > +
> > > +out_unlock:
> > > +	mutex_unlock(&dev->struct_mutex);
> > > +	mutex_unlock(&drm_global_mutex);
> > > +
> > > +	if (ret)
> > > +		return ret;
> > > +	if (m->bytes == 0 && m->err)
> > > +		return m->err;
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +int i915_gem_get_all_obj_info(struct drm_i915_error_state_buf *m,
> > > +			struct drm_device *dev)
> > > +{
> > > +	struct drm_file *file;
> > > +	int pid_num, ret = 0;
> > > +
> > > +	/* Protect the access to global drm resources such as filelist. Protect
> > > +	 * against their removal under our noses, while in use.
> > > +	 */
> > > +	mutex_lock(&drm_global_mutex);
> > > +	ret = i915_mutex_lock_interruptible(dev);
> > > +	if (ret) {
> > > +		mutex_unlock(&drm_global_mutex);
> > > +		return ret;
> > > +	}
> > > +
> > > +	list_for_each_entry(file, &dev->filelist, lhead) {
> > > +		struct pid *tgid;
> > > +		struct drm_i915_file_private *file_priv = file->driver_priv;
> > > +
> > > +		tgid = file_priv->tgid;
> > > +		pid_num = pid_nr(tgid);
> > > +
> > > +		err_printf(m, "\n\n  PID  process\n");
> > > +
> > > +		err_printf(m, "%5d  %s\n",
> > > +			   pid_num, file_priv->process_name);
> > > +
> > > +		err_printf(m,
> > > +			"\n Obj Identifier       Size Pin Tiling Dirty Shared Vmap Stolen Mappable  AllocState Global/PP  GttOffset (PID: handle count: user virt addrs)\n");
> > > +		ret = idr_for_each(&file->object_idr,
> > > +				&i915_drm_gem_obj_info, m);
> > > +		if (ret)
> > > +			break;
> > > +	}
> > > +	mutex_unlock(&dev->struct_mutex);
> > > +	mutex_unlock(&drm_global_mutex);
> > > +
> > > +	if (ret)
> > > +		return ret;
> > > +	if (m->bytes == 0 && m->err)
> > > +		return m->err;
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
> > > index 2c87a79..089c7df 100644
> > > --- a/drivers/gpu/drm/i915/i915_gpu_error.c
> > > +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
> > > @@ -161,7 +161,7 @@ static void i915_error_vprintf(struct drm_i915_error_state_buf *e,
> > >  	__i915_error_advance(e, len);
> > >  }
> > >  
> > > -static void i915_error_puts(struct drm_i915_error_state_buf *e,
> > > +void i915_error_puts(struct drm_i915_error_state_buf *e,
> > >  			    const char *str)
> > >  {
> > >  	unsigned len;
> > > diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
> > > index 503847f..b204c92 100644
> > > --- a/drivers/gpu/drm/i915/i915_sysfs.c
> > > +++ b/drivers/gpu/drm/i915/i915_sysfs.c
> > > @@ -582,6 +582,64 @@ static ssize_t error_state_write(struct file *file, struct kobject *kobj,
> > >  	return count;
> > >  }
> > >  
> > > +static ssize_t i915_gem_clients_state_read(struct file *filp,
> > > +				struct kobject *kobj,
> > > +				struct bin_attribute *attr,
> > > +				char *buf, loff_t off, size_t count)
> > > +{
> > > +	struct device *kdev = container_of(kobj, struct device, kobj);
> > > +	struct drm_minor *minor = dev_to_drm_minor(kdev);
> > > +	struct drm_device *dev = minor->dev;
> > > +	struct drm_i915_error_state_buf error_str;
> > > +	ssize_t ret_count = 0;
> > > +	int ret;
> > > +
> > > +	ret = i915_error_state_buf_init(&error_str, to_i915(dev), count, off);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	ret = i915_get_drm_clients_info(&error_str, dev);
> > > +	if (ret)
> > > +		goto out;
> > > +
> > > +	ret_count = count < error_str.bytes ? count : error_str.bytes;
> > > +
> > > +	memcpy(buf, error_str.buf, ret_count);
> > > +out:
> > > +	i915_error_state_buf_release(&error_str);
> > > +
> > > +	return ret ?: ret_count;
> > > +}
> > > +
> > > +static ssize_t i915_gem_objects_state_read(struct file *filp,
> > > +				struct kobject *kobj,
> > > +				struct bin_attribute *attr,
> > > +				char *buf, loff_t off, size_t count)
> > > +{
> > > +	struct device *kdev = container_of(kobj, struct device, kobj);
> > > +	struct drm_minor *minor = dev_to_drm_minor(kdev);
> > > +	struct drm_device *dev = minor->dev;
> > > +	struct drm_i915_error_state_buf error_str;
> > > +	ssize_t ret_count = 0;
> > > +	int ret;
> > > +
> > > +	ret = i915_error_state_buf_init(&error_str, to_i915(dev), count, off);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	ret = i915_gem_get_all_obj_info(&error_str, dev);
> > > +	if (ret)
> > > +		goto out;
> > > +
> > > +	ret_count = count < error_str.bytes ? count : error_str.bytes;
> > > +
> > > +	memcpy(buf, error_str.buf, ret_count);
> > > +out:
> > > +	i915_error_state_buf_release(&error_str);
> > > +
> > > +	return ret ?: ret_count;
> > > +}
> > > +
> > >  static struct bin_attribute error_state_attr = {
> > >  	.attr.name = "error",
> > >  	.attr.mode = S_IRUSR | S_IWUSR,
> > > @@ -590,6 +648,20 @@ static struct bin_attribute error_state_attr = {
> > >  	.write = error_state_write,
> > >  };
> > >  
> > > +static struct bin_attribute i915_gem_client_state_attr = {
> > > +	.attr.name = "i915_gem_meminfo",
> > > +	.attr.mode = S_IRUSR | S_IWUSR,
> > > +	.size = 0,
> > > +	.read = i915_gem_clients_state_read,
> > > +};
> > > +
> > > +static struct bin_attribute i915_gem_objects_state_attr = {
> > > +	.attr.name = "i915_gem_objinfo",
> > > +	.attr.mode = S_IRUSR | S_IWUSR,
> > > +	.size = 0,
> > > +	.read = i915_gem_objects_state_read,
> > > +};
> > > +
> > >  void i915_setup_sysfs(struct drm_device *dev)
> > >  {
> > >  	int ret;
> > > @@ -627,6 +699,17 @@ void i915_setup_sysfs(struct drm_device *dev)
> > >  				    &error_state_attr);
> > >  	if (ret)
> > >  		DRM_ERROR("error_state sysfs setup failed\n");
> > > +
> > > +	ret = sysfs_create_bin_file(&dev->primary->kdev->kobj,
> > > +				    &i915_gem_client_state_attr);
> > > +	if (ret)
> > > +		DRM_ERROR("i915_gem_client_state sysfs setup failed\n");
> > > +
> > > +	ret = sysfs_create_bin_file(&dev->primary->kdev->kobj,
> > > +				    &i915_gem_objects_state_attr);
> > > +	if (ret)
> > > +		DRM_ERROR("i915_gem_objects_state sysfs setup failed\n");
> > > +
> > >  }
> > >  
> > >  void i915_teardown_sysfs(struct drm_device *dev)
> > > -- 
> > > 1.8.5.1
> > > 
> > > _______________________________________________
> > > Intel-gfx mailing list
> > > Intel-gfx@lists.freedesktop.org
> > > http://lists.freedesktop.org/mailman/listinfo/intel-gfx
> > 
>
Daniel Vetter Sept. 4, 2014, 10:01 a.m. UTC | #4
On Thu, Sep 4, 2014 at 9:03 AM, Gupta, Sourab <sourab.gupta@intel.com> wrote:
> On Wed, 2014-09-03 at 13:09 +0000, Daniel Vetter wrote:
>> On Wed, Sep 03, 2014 at 11:49:52AM +0000, Gupta, Sourab wrote:
>> > On Wed, 2014-09-03 at 10:58 +0000, Daniel Vetter wrote:
>> > > On Wed, Sep 03, 2014 at 03:39:55PM +0530, sourab.gupta@intel.com wrote:
>> > > > From: Sourab Gupta <sourab.gupta@intel.com>
>> > > >
>> > > > Currently the Graphics Driver provides an interface through which
>> > > > one can get a snapshot of the overall Graphics memory consumption.
>> > > > Also there is an interface available, which provides information
>> > > > about the several memory related attributes of every single Graphics
>> > > > buffer created by the various clients.
>> > > >
>> > > > There is a requirement of a new interface for achieving below
>> > > > functionalities:
>> > > > 1) Need to provide Client based detailed information about the
>> > > > distribution of Graphics memory
>> > > > 2) Need to provide an interface which can provide info about the
>> > > > sharing of Graphics buffers between the clients.
>> > > >
>> > > > The client based interface would also aid in debugging of
>> > > > memory usage/consumption by each client & debug memleak related issues.
>> > > >
>> > > > With this new interface,
>> > > > 1) In case of memleak scenarios, we can easily zero in on the culprit
>> > > > client which is unexpectedly holding on the Graphics buffers for an
>> > > > inordinate amount of time.
>> > > > 2) We can get an estimate of the instantaneous memory footprint of
>> > > > every Graphics client.
>> > > > 3) We can now trace all the processes sharing a particular Graphics buffer.
>> > > >
>> > > > By means of this patch we try to provide a sysfs interface to achieve
>> > > > the mentioned functionalities.
>> > > >
>> > > > There are two files created in sysfs:
>> > > > 'i915_gem_meminfo' will provide summary of the graphics resources used by
>> > > > each graphics client.
>> > > > 'i915_gem_objinfo' will provide detailed view of each object created by
>> > > > individual clients.
>> > > >
>> > > > v2: Changes made for
>> > > >     - adding support to report user virtual addresses of mapped buffers
>> > > >     - replacing pid based reporting with tgid based one
>> > > >     - checkpatch and other misc cleanup
>> > > >
>> > > > Signed-off-by: Sourab Gupta <sourab.gupta@intel.com>
>> > > > Signed-off-by: Akash Goel <akash.goel@intel.com>
>> > >
>> > > Sorry I didn't spot this the first time around, but I think sysfs is the
>> > > wrong place for this.
>> > >
>> > > Generally sysfs is for setting/reading per-object values, and it has the
>> > > big rule that there should be only _one_ value per file. The error state
>> > > is a bit an exception, but otoh it's also just the full dump as a binary
>> > > file (which for historical reasons is printed as ascii).
>> > >
>> > > The other issue is that imo this should be a generic interface, so that we
>> > > can write a gpu_top tool for dumping memory consumers which works on all
>> > > linux platforms.
>> > >
>> > > To avoid delaying for a long time can we just move ahead by putting this
>> > > into debugfs?
>> > >
>> > > Also in debugfs there's already a lot of this stuff around - why is that
>> > > not sufficient and could we extend it somehow with the missing bits?
>> > >
>> > > Thanks, Daniel
>> >
>> > Hi Daniel,
>> >
>> > Thanks for your inputs.
>> > We had originally put the patch in sysfs, as there was a requirement for
>> > this feature to be available in production kernels also.
>> > We can move it to debugfs to move ahead with this. I'll submit the
>> > debugfs version of this patch next time.
>>
>> Yeah sysfs is the only place where we have a stable api, but that also
>> implies that requirements are a _lot_ more stringent. At least we need
>> testcases to make sure the interfaces actually do what we want them to do,
>> and to make sure we don't break the interface by accident.
>>
>> > Also,
>> > we developed this new interface to overcome the deficiencies of the
>> > existing interface. With this new interface, we can provide
>> > client-based detailed information about the distribution of Graphics
>> > memory. This gives information about the various states of the
>> > graphics objects opened per process (summarized as well as detailed
>> > info).
>> > It also gives information about Graphics buffers shared between the
>> > clients, and gives the user-mapped virtual address of all the mapped
>> > graphics buffers.
>> > It was not feasible to fit all this info in the existing interface. So
>> > we decided to go ahead with a new interface for this functionality.
>>
>> Well the problem is that adding more files like that increases the
>> maintenance burden. So if there's some way to compute the information you
>> want from information already provided in debugfs, then I'd prefer we do
>> that first.
>> -Daniel
>
> Hi Daniel,
>
> We went through the existing debugfs interfaces, but we couldn't derive
> the information we need from these interfaces.
> For our requirement, we require a process wise breakup of the objects
> and memory consumed, along with the detailed object statistics. Also,
> there is a requirement for getting info of the shared objects and user
> mapped virtual address of all the objects "per process".
> From the existing interfaces, the primary problem is that we don't get
> the "process wise breakup" of all these statistics. There, we only get
> the cumulative stats for various buffers like active, inactive, pinned,
> etc.
>
> If it is okay, we can join our two files into a single one, and have a
> single debugfs file, with a process wise listing of detailed object
> stats, and summarized stats at the end.
> I have shared the output of our interface and attached it in text files
> with this mail, for your reference.
>
> Please excuse me for excluding the public mailing list, as I was not
> sure whether I could share this output there. You can add public mailing
> lists again if we can share it there.

Interface design discussions should happen in public (so that
non-intel people can jump in, which happens rather often for other
drivers actually). But at least include internal mailing lists next
time around. Also adding dri-devel.

The problem I see with your approach is that "process-wise" is not a
solid concept with drm. We can dump information per open drm file, but
that file descriptor can be shared between processes. And the latest
generation of linux compositor protocols (like dri3) actually takes
advantage of this.
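
To illustrate: a drm fd can cross a process boundary with nothing more
exotic than SCM_RIGHTS fd passing, which is roughly how dri3 hands the
device fd from the X server to a client. A minimal userspace sketch of
the sending side (hypothetical helper name, not driver code):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Sketch only: pass an already-open drm fd to another process over a
 * unix socket.  Once the receiver has it, both processes reference the
 * same struct file, so per-fd accounting no longer maps cleanly onto a
 * single pid/tgid.
 */
static int send_drm_fd(int sock, int drm_fd)
{
	char cmsg_buf[CMSG_SPACE(sizeof(int))];
	char dummy = 'x';
	struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
	struct msghdr msg = { 0 };
	struct cmsghdr *cmsg;

	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = cmsg_buf;
	msg.msg_controllen = sizeof(cmsg_buf);

	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &drm_fd, sizeof(int));

	return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}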

Now procfs has links to go from processes to files, but unfortunately
not file descriptors. So we actually have a gap here with core drm, if
not even core vfs.

Resolving the "which project has which buffer object mapped" question
is easier: procfs already has a mappings file, and the mmap offsets
are global. So you can already figure out which object is mapped
where, as long you expose the fake gtt mmap offset somewhere.

This doesn't work for shmem cpu mmappings though ...
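
For the gtt side the matching is mostly mechanical: the file offset
column in /proc/<pid>/maps for a mapping of the drm node is the fake
mmap offset, so a tool can compare it against what
drm_vma_node_offset_addr() reports for each object. A rough userspace
sketch (hypothetical helper; assumes the node is /dev/dri/card0 and
that the fake offset has already been exported somewhere):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Sketch only: return the user virtual address at which <pid> has the
 * object's fake gtt mmap offset mapped, or 0 if it isn't mapped.
 */
static uint64_t find_gtt_mapping(pid_t pid, uint64_t gtt_mmap_offset)
{
	char path[64], line[512], perms[8], dev[16], file[256];
	uint64_t start, end, off, found = 0;
	unsigned long inode;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/maps", (int)pid);
	f = fopen(path, "r");
	if (!f)
		return 0;

	while (fgets(line, sizeof(line), f)) {
		file[0] = '\0';
		if (sscanf(line, "%" SCNx64 "-%" SCNx64 " %7s %" SCNx64
			   " %15s %lu %255s",
			   &start, &end, perms, &off, dev, &inode, file) < 6)
			continue;
		/* match on the device node and on the file offset column */
		if (!strcmp(file, "/dev/dri/card0") && off == gtt_mmap_offset) {
			found = start;
			break;
		}
	}
	fclose(f);
	return found;
}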

Overall getting this right looks like a fairly daunting task (for
upstream, due to much more diverse requirements).
-Daniel


>
>
> Thanks,
> Sourab
>
>>
>> >
>> > Thanks,
>> > Sourab
>> >
>> > >
>> > > > ---
>> > > >  drivers/gpu/drm/i915/i915_dma.c       |   1 +
>> > > >  drivers/gpu/drm/i915/i915_drv.c       |   2 +
>> > > >  drivers/gpu/drm/i915/i915_drv.h       |  26 ++
>> > > >  drivers/gpu/drm/i915/i915_gem.c       | 169 ++++++++++-
>> > > >  drivers/gpu/drm/i915/i915_gem_debug.c | 542 ++++++++++++++++++++++++++++++++++
>> > > >  drivers/gpu/drm/i915/i915_gpu_error.c |   2 +-
>> > > >  drivers/gpu/drm/i915/i915_sysfs.c     |  83 ++++++
>> > > >  7 files changed, 822 insertions(+), 3 deletions(-)
>> > > >
>> > > > diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
>> > > > index a58fed9..7ea3250 100644
>> > > > --- a/drivers/gpu/drm/i915/i915_dma.c
>> > > > +++ b/drivers/gpu/drm/i915/i915_dma.c
>> > > > @@ -1985,6 +1985,7 @@ void i915_driver_postclose(struct drm_device *dev, struct drm_file *file)
>> > > >  {
>> > > >         struct drm_i915_file_private *file_priv = file->driver_priv;
>> > > >
>> > > > +       kfree(file_priv->process_name);
>> > > >         if (file_priv && file_priv->bsd_ring)
>> > > >                 file_priv->bsd_ring = NULL;
>> > > >         kfree(file_priv);
>> > > > diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
>> > > > index 1d6d9ac..9bee20e 100644
>> > > > --- a/drivers/gpu/drm/i915/i915_drv.c
>> > > > +++ b/drivers/gpu/drm/i915/i915_drv.c
>> > > > @@ -1628,6 +1628,8 @@ static struct drm_driver driver = {
>> > > >         .debugfs_init = i915_debugfs_init,
>> > > >         .debugfs_cleanup = i915_debugfs_cleanup,
>> > > >  #endif
>> > > > +       .gem_open_object = i915_gem_open_object,
>> > > > +       .gem_close_object = i915_gem_close_object,
>> > > >         .gem_free_object = i915_gem_free_object,
>> > > >         .gem_vm_ops = &i915_gem_vm_ops,
>> > > >
>> > > > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
>> > > > index 36f3da6..43ba7c4 100644
>> > > > --- a/drivers/gpu/drm/i915/i915_drv.h
>> > > > +++ b/drivers/gpu/drm/i915/i915_drv.h
>> > > > @@ -1765,6 +1765,11 @@ struct drm_i915_gem_object_ops {
>> > > >  #define INTEL_FRONTBUFFER_ALL_MASK(pipe) \
>> > > >         (0xf << (INTEL_FRONTBUFFER_BITS_PER_PIPE * (pipe)))
>> > > >
>> > > > +struct drm_i915_obj_virt_addr {
>> > > > +       struct list_head head;
>> > > > +       unsigned long user_virt_addr;
>> > > > +};
>> > > > +
>> > > >  struct drm_i915_gem_object {
>> > > >         struct drm_gem_object base;
>> > > >
>> > > > @@ -1890,6 +1895,13 @@ struct drm_i915_gem_object {
>> > > >                         struct work_struct *work;
>> > > >                 } userptr;
>> > > >         };
>> > > > +
>> > > > +#define MAX_OPEN_HANDLE 20
>> > > > +       struct {
>> > > > +               struct list_head virt_addr_head;
>> > > > +               pid_t pid;
>> > > > +               int open_handle_count;
>> > > > +       } pid_array[MAX_OPEN_HANDLE];
>> > > >  };
>> > > >  #define to_intel_bo(x) container_of(x, struct drm_i915_gem_object, base)
>> > > >
>> > > > @@ -1940,6 +1952,8 @@ struct drm_i915_gem_request {
>> > > >  struct drm_i915_file_private {
>> > > >         struct drm_i915_private *dev_priv;
>> > > >         struct drm_file *file;
>> > > > +       char *process_name;
>> > > > +       struct pid *tgid;
>> > > >
>> > > >         struct {
>> > > >                 spinlock_t lock;
>> > > > @@ -2370,6 +2384,10 @@ void i915_init_vm(struct drm_i915_private *dev_priv,
>> > > >                   struct i915_address_space *vm);
>> > > >  void i915_gem_free_object(struct drm_gem_object *obj);
>> > > >  void i915_gem_vma_destroy(struct i915_vma *vma);
>> > > > +int i915_gem_open_object(struct drm_gem_object *gem_obj,
>> > > > +                       struct drm_file *file_priv);
>> > > > +int i915_gem_close_object(struct drm_gem_object *gem_obj,
>> > > > +                       struct drm_file *file_priv);
>> > > >
>> > > >  #define PIN_MAPPABLE 0x1
>> > > >  #define PIN_NONBLOCK 0x2
>> > > > @@ -2420,6 +2438,8 @@ int i915_gem_dumb_create(struct drm_file *file_priv,
>> > > >                          struct drm_mode_create_dumb *args);
>> > > >  int i915_gem_mmap_gtt(struct drm_file *file_priv, struct drm_device *dev,
>> > > >                       uint32_t handle, uint64_t *offset);
>> > > > +int i915_gem_obj_shmem_pages_alloced(struct drm_i915_gem_object *obj);
>> > > > +
>> > > >  /**
>> > > >   * Returns true if seq1 is later than seq2.
>> > > >   */
>> > > > @@ -2686,6 +2706,10 @@ int i915_verify_lists(struct drm_device *dev);
>> > > >  #else
>> > > >  #define i915_verify_lists(dev) 0
>> > > >  #endif
>> > > > +int i915_get_drm_clients_info(struct drm_i915_error_state_buf *m,
>> > > > +                               struct drm_device *dev);
>> > > > +int i915_gem_get_all_obj_info(struct drm_i915_error_state_buf *m,
>> > > > +                               struct drm_device *dev);
>> > > >
>> > > >  /* i915_debugfs.c */
>> > > >  int i915_debugfs_init(struct drm_minor *minor);
>> > > > @@ -2699,6 +2723,8 @@ static inline void intel_display_crc_init(struct drm_device *dev) {}
>> > > >  /* i915_gpu_error.c */
>> > > >  __printf(2, 3)
>> > > >  void i915_error_printf(struct drm_i915_error_state_buf *e, const char *f, ...);
>> > > > +void i915_error_puts(struct drm_i915_error_state_buf *e,
>> > > > +                           const char *str);
>> > > >  int i915_error_state_to_str(struct drm_i915_error_state_buf *estr,
>> > > >                             const struct i915_error_state_file_priv *error);
>> > > >  int i915_error_state_buf_init(struct drm_i915_error_state_buf *eb,
>> > > > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
>> > > > index 6c68570..3c36486 100644
>> > > > --- a/drivers/gpu/drm/i915/i915_gem.c
>> > > > +++ b/drivers/gpu/drm/i915/i915_gem.c
>> > > > @@ -1461,6 +1461,45 @@ unlock:
>> > > >         return ret;
>> > > >  }
>> > > >
>> > > > +static void
>> > > > +i915_gem_obj_insert_virt_addr(struct drm_i915_gem_object *obj,
>> > > > +                               unsigned long addr,
>> > > > +                               bool is_map_gtt)
>> > > > +{
>> > > > +       pid_t current_pid = task_tgid_nr(current);
>> > > > +       int i, found = 0;
>> > > > +
>> > > > +       if (is_map_gtt)
>> > > > +               addr |= 1;
>> > > > +
>> > > > +       for (i = 0; i < MAX_OPEN_HANDLE; i++) {
>> > > > +               if (obj->pid_array[i].pid == current_pid) {
>> > > > +                       struct drm_i915_obj_virt_addr *entry, *new_entry;
>> > > > +
>> > > > +                       list_for_each_entry(entry,
>> > > > +                                           &obj->pid_array[i].virt_addr_head,
>> > > > +                                           head) {
>> > > > +                               if (entry->user_virt_addr == addr) {
>> > > > +                                       found = 1;
>> > > > +                                       break;
>> > > > +                               }
>> > > > +                       }
>> > > > +                       if (found)
>> > > > +                               break;
>> > > > +                       new_entry = kzalloc
>> > > > +                               (sizeof(struct drm_i915_obj_virt_addr),
>> > > > +                               GFP_KERNEL);
>> > > > +                       new_entry->user_virt_addr = addr;
>> > > > +                       list_add_tail(&new_entry->head,
>> > > > +                               &obj->pid_array[i].virt_addr_head);
>> > > > +                       break;
>> > > > +               }
>> > > > +       }
>> > > > +       if (i == MAX_OPEN_HANDLE)
>> > > > +               DRM_DEBUG("Couldn't find matching pid %d for obj 0x%x\n",
>> > > > +                       current_pid, (u32) obj);
>> > > > +}
>> > > > +
>> > > >  /**
>> > > >   * Maps the contents of an object, returning the address it is mapped
>> > > >   * into.
>> > > > @@ -1495,6 +1534,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
>> > > >         if (IS_ERR((void *)addr))
>> > > >                 return addr;
>> > > >
>> > > > +       i915_gem_obj_insert_virt_addr(to_intel_bo(obj), addr, false);
>> > > >         args->addr_ptr = (uint64_t) addr;
>> > > >
>> > > >         return 0;
>> > > > @@ -1585,6 +1625,8 @@ int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
>> > > >                 }
>> > > >
>> > > >                 obj->fault_mappable = true;
>> > > > +               i915_gem_obj_insert_virt_addr(obj,
>> > > > +                       (unsigned long)vma->vm_start, true);
>> > > >         } else
>> > > >                 ret = vm_insert_pfn(vma,
>> > > >                                     (unsigned long)vmf->virtual_address,
>> > > > @@ -1830,6 +1872,24 @@ i915_gem_object_is_purgeable(struct drm_i915_gem_object *obj)
>> > > >         return obj->madv == I915_MADV_DONTNEED;
>> > > >  }
>> > > >
>> > > > +int i915_gem_obj_shmem_pages_alloced(struct drm_i915_gem_object *obj)
>> > > > +{
>> > > > +       int ret;
>> > > > +
>> > > > +       if (obj->base.filp) {
>> > > > +               struct inode *inode = file_inode(obj->base.filp);
>> > > > +               struct shmem_inode_info *info = SHMEM_I(inode);
>> > > > +
>> > > > +               if (!inode)
>> > > > +                       return 0;
>> > > > +               spin_lock(&info->lock);
>> > > > +               ret = inode->i_mapping->nrpages;
>> > > > +               spin_unlock(&info->lock);
>> > > > +               return ret;
>> > > > +       }
>> > > > +       return 0;
>> > > > +}
>> > > > +
>> > > >  /* Immediately discard the backing storage */
>> > > >  static void
>> > > >  i915_gem_object_truncate(struct drm_i915_gem_object *obj)
>> > > > @@ -4447,6 +4507,79 @@ static bool discard_backing_storage(struct drm_i915_gem_object *obj)
>> > > >         return atomic_long_read(&obj->base.filp->f_count) == 1;
>> > > >  }
>> > > >
>> > > > +int
>> > > > +i915_gem_open_object(struct drm_gem_object *gem_obj,
>> > > > +                       struct drm_file *file_priv)
>> > > > +{
>> > > > +       struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
>> > > > +       pid_t current_pid = task_tgid_nr(current);
>> > > > +       int i, ret, free = -1;
>> > > > +
>> > > > +       ret = i915_mutex_lock_interruptible(gem_obj->dev);
>> > > > +       if (ret)
>> > > > +               return ret;
>> > > > +
>> > > > +       for (i = 0; i < MAX_OPEN_HANDLE; i++) {
>> > > > +               if (obj->pid_array[i].pid == current_pid) {
>> > > > +                       obj->pid_array[i].open_handle_count++;
>> > > > +                       break;
>> > > > +               } else if (obj->pid_array[i].pid == 0)
>> > > > +                       free = i;
>> > > > +       }
>> > > > +
>> > > > +       if (i == MAX_OPEN_HANDLE) {
>> > > > +               if (free != -1) {
>> > > > +                       WARN_ON(obj->pid_array[free].open_handle_count);
>> > > > +                       obj->pid_array[free].open_handle_count = 1;
>> > > > +                       obj->pid_array[free].pid = current_pid;
>> > > > +                       INIT_LIST_HEAD(&obj->pid_array[free].virt_addr_head);
>> > > > +               } else
>> > > > +                       DRM_DEBUG("Max open handle count limit: obj 0x%x\n",
>> > > > +                                       (u32) obj);
>> > > > +       }
>> > > > +
>> > > > +       mutex_unlock(&gem_obj->dev->struct_mutex);
>> > > > +       return 0;
>> > > > +}
>> > > > +
>> > > > +int
>> > > > +i915_gem_close_object(struct drm_gem_object *gem_obj,
>> > > > +                       struct drm_file *file_priv)
>> > > > +{
>> > > > +       struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
>> > > > +       pid_t current_pid = task_tgid_nr(current);
>> > > > +       int i, ret;
>> > > > +
>> > > > +       ret = i915_mutex_lock_interruptible(gem_obj->dev);
>> > > > +       if (ret)
>> > > > +               return ret;
>> > > > +
>> > > > +       for (i = 0; i < MAX_OPEN_HANDLE; i++) {
>> > > > +               if (obj->pid_array[i].pid == current_pid) {
>> > > > +                       obj->pid_array[i].open_handle_count--;
>> > > > +                       if (obj->pid_array[i].open_handle_count == 0) {
>> > > > +                               struct drm_i915_obj_virt_addr *entry, *next;
>> > > > +
>> > > > +                               list_for_each_entry_safe(entry, next,
>> > > > +                                       &obj->pid_array[i].virt_addr_head,
>> > > > +                                       head) {
>> > > > +                                       list_del(&entry->head);
>> > > > +                                       kfree(entry);
>> > > > +                               }
>> > > > +                               obj->pid_array[i].pid = 0;
>> > > > +                       }
>> > > > +                       break;
>> > > > +               }
>> > > > +       }
>> > > > +       if (i == MAX_OPEN_HANDLE)
>> > > > +               DRM_DEBUG("Couldn't find matching pid %d for obj 0x%x\n",
>> > > > +                               current_pid, (u32) obj);
>> > > > +
>> > > > +       mutex_unlock(&gem_obj->dev->struct_mutex);
>> > > > +       return 0;
>> > > > +}
>> > > > +
>> > > > +
>> > > >  void i915_gem_free_object(struct drm_gem_object *gem_obj)
>> > > >  {
>> > > >         struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
>> > > > @@ -5072,13 +5205,37 @@ i915_gem_file_idle_work_handler(struct work_struct *work)
>> > > >         atomic_set(&file_priv->rps_wait_boost, false);
>> > > >  }
>> > > >
>> > > > +static int i915_gem_get_pid_cmdline(struct task_struct *task, char *buffer)
>> > > > +{
>> > > > +       int res = 0;
>> > > > +       unsigned int len;
>> > > > +       struct mm_struct *mm = get_task_mm(task);
>> > > > +
>> > > > +       if (!mm)
>> > > > +               goto out;
>> > > > +       if (!mm->arg_end)
>> > > > +               goto out_mm;
>> > > > +
>> > > > +       len = mm->arg_end - mm->arg_start;
>> > > > +
>> > > > +       if (len > PAGE_SIZE)
>> > > > +               len = PAGE_SIZE;
>> > > > +
>> > > > +       res = access_process_vm(task, mm->arg_start, buffer, len, 0);
>> > > > +
>> > > > +       if (res > 0 && buffer[res-1] != '\0' && len < PAGE_SIZE)
>> > > > +               buffer[res-1] = '\0';
>> > > > +out_mm:
>> > > > +       mmput(mm);
>> > > > +out:
>> > > > +       return res;
>> > > > +}
>> > > > +
>> > > >  int i915_gem_open(struct drm_device *dev, struct drm_file *file)
>> > > >  {
>> > > >         struct drm_i915_file_private *file_priv;
>> > > >         int ret;
>> > > >
>> > > > -       DRM_DEBUG_DRIVER("\n");
>> > > > -
>> > > >         file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
>> > > >         if (!file_priv)
>> > > >                 return -ENOMEM;
>> > > > @@ -5086,6 +5243,14 @@ int i915_gem_open(struct drm_device *dev, struct drm_file *file)
>> > > >         file->driver_priv = file_priv;
>> > > >         file_priv->dev_priv = dev->dev_private;
>> > > >         file_priv->file = file;
>> > > > +       file_priv->tgid = find_vpid(task_tgid_nr(current));
>> > > > +       file_priv->process_name =  kzalloc(PAGE_SIZE, GFP_ATOMIC);
>> > > > +       if (!file_priv->process_name) {
>> > > > +               kfree(file_priv);
>> > > > +               return -ENOMEM;
>> > > > +       }
>> > > > +
>> > > > +       ret = i915_gem_get_pid_cmdline(current, file_priv->process_name);
>> > > >
>> > > >         spin_lock_init(&file_priv->mm.lock);
>> > > >         INIT_LIST_HEAD(&file_priv->mm.request_list);
>> > > > diff --git a/drivers/gpu/drm/i915/i915_gem_debug.c b/drivers/gpu/drm/i915/i915_gem_debug.c
>> > > > index f462d1b..7a42891 100644
>> > > > --- a/drivers/gpu/drm/i915/i915_gem_debug.c
>> > > > +++ b/drivers/gpu/drm/i915/i915_gem_debug.c
>> > > > @@ -25,6 +25,7 @@
>> > > >   *
>> > > >   */
>> > > >
>> > > > +#include <linux/pid.h>
>> > > >  #include <drm/drmP.h>
>> > > >  #include <drm/i915_drm.h>
>> > > >  #include "i915_drv.h"
>> > > > @@ -116,3 +117,544 @@ i915_verify_lists(struct drm_device *dev)
>> > > >         return warned = err;
>> > > >  }
>> > > >  #endif /* WATCH_LIST */
>> > > > +
>> > > > +struct per_file_obj_mem_info {
>> > > > +       int num_obj;
>> > > > +       int num_obj_shared;
>> > > > +       int num_obj_private;
>> > > > +       int num_obj_gtt_bound;
>> > > > +       int num_obj_purged;
>> > > > +       int num_obj_purgeable;
>> > > > +       int num_obj_allocated;
>> > > > +       int num_obj_fault_mappable;
>> > > > +       int num_obj_stolen;
>> > > > +       size_t gtt_space_allocated_shared;
>> > > > +       size_t gtt_space_allocated_priv;
>> > > > +       size_t phys_space_allocated_shared;
>> > > > +       size_t phys_space_allocated_priv;
>> > > > +       size_t phys_space_purgeable;
>> > > > +       size_t phys_space_shared_proportion;
>> > > > +       size_t fault_mappable_size;
>> > > > +       size_t stolen_space_allocated;
>> > > > +       char *process_name;
>> > > > +};
>> > > > +
>> > > > +struct name_entry {
>> > > > +       struct list_head head;
>> > > > +       struct drm_hash_item hash_item;
>> > > > +};
>> > > > +
>> > > > +struct pid_stat_entry {
>> > > > +       struct list_head head;
>> > > > +       struct list_head namefree;
>> > > > +       struct drm_open_hash namelist;
>> > > > +       struct per_file_obj_mem_info stats;
>> > > > +       struct pid *pid;
>> > > > +       int pid_num;
>> > > > +};
>> > > > +
>> > > > +
>> > > > +#define err_printf(e, ...) i915_error_printf(e, __VA_ARGS__)
>> > > > +#define err_puts(e, s) i915_error_puts(e, s)
>> > > > +
>> > > > +static const char *get_pin_flag(struct drm_i915_gem_object *obj)
>> > > > +{
>> > > > +       if (obj->user_pin_count > 0)
>> > > > +               return "P";
>> > > > +       else if (i915_gem_obj_is_pinned(obj))
>> > > > +               return "p";
>> > > > +       return " ";
>> > > > +}
>> > > > +
>> > > > +static const char *get_tiling_flag(struct drm_i915_gem_object *obj)
>> > > > +{
>> > > > +       switch (obj->tiling_mode) {
>> > > > +       default:
>> > > > +       case I915_TILING_NONE: return " ";
>> > > > +       case I915_TILING_X: return "X";
>> > > > +       case I915_TILING_Y: return "Y";
>> > > > +       }
>> > > > +}
>> > > > +
>> > > > +static int i915_obj_virt_addr_is_valid(struct drm_gem_object *obj,
>> > > > +                               struct pid *pid, unsigned long addr)
>> > > > +{
>> > > > +       struct task_struct *task;
>> > > > +       struct mm_struct *mm;
>> > > > +       struct vm_area_struct *vma;
>> > > > +       int locked, ret = 0;
>> > > > +
>> > > > +       task = get_pid_task(pid, PIDTYPE_PID);
>> > > > +       if (task == NULL) {
>> > > > +               DRM_DEBUG("null task for pid=%d\n", pid_nr(pid));
>> > > > +               return -EINVAL;
>> > > > +       }
>> > > > +
>> > > > +       mm = get_task_mm(task);
>> > > > +       if (mm == NULL) {
>> > > > +               DRM_DEBUG("null mm for pid=%d\n", pid_nr(pid));
>> > > > +               return -EINVAL;
>> > > > +       }
>> > > > +
>> > > > +       locked = down_read_trylock(&mm->mmap_sem);
>> > > > +
>> > > > +       vma = find_vma(mm, addr);
>> > > > +       if (vma) {
>> > > > +               if (addr & 1) { /* mmap_gtt case */
>> > > > +                       if (vma->vm_pgoff*PAGE_SIZE == (unsigned long)
>> > > > +                               drm_vma_node_offset_addr(&obj->vma_node))
>> > > > +                               ret = 0;
>> > > > +                       else
>> > > > +                               ret = -EINVAL;
>> > > > +               } else { /* mmap case */
>> > > > +                       if (vma->vm_file == obj->filp)
>> > > > +                               ret = 0;
>> > > > +                       else
>> > > > +                               ret = -EINVAL;
>> > > > +               }
>> > > > +       } else
>> > > > +               ret = -EINVAL;
>> > > > +
>> > > > +       if (locked)
>> > > > +               up_read(&mm->mmap_sem);
>> > > > +
>> > > > +       mmput(mm);
>> > > > +       return ret;
>> > > > +}
>> > > > +
>> > > > +static void i915_obj_pidarray_validate(struct drm_gem_object *gem_obj)
>> > > > +{
>> > > > +       struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
>> > > > +       struct drm_device *dev = gem_obj->dev;
>> > > > +       struct drm_i915_obj_virt_addr *entry, *next;
>> > > > +       struct drm_file *file;
>> > > > +       struct drm_i915_file_private *file_priv;
>> > > > +       struct pid *tgid;
>> > > > +       int pid_num, i, present;
>> > > > +
>> > > > +       /* Run a sanity check on pid_array. All entries in pid_array should
>> > > > +        * be subset of the the drm filelist pid entries.
>> > > > +        */
>> > > > +       for (i = 0; i < MAX_OPEN_HANDLE; i++) {
>> > > > +               if (obj->pid_array[i].pid == 0)
>> > > > +                       continue;
>> > > > +
>> > > > +               present = 0;
>> > > > +               list_for_each_entry(file, &dev->filelist, lhead) {
>> > > > +                       file_priv = file->driver_priv;
>> > > > +                       tgid = file_priv->tgid;
>> > > > +                       pid_num = pid_nr(tgid);
>> > > > +
>> > > > +                       if (pid_num == obj->pid_array[i].pid) {
>> > > > +                               present = 1;
>> > > > +                               break;
>> > > > +                       }
>> > > > +               }
>> > > > +               if (present == 0) {
>> > > > +                       DRM_DEBUG("stale_pid=%d\n", obj->pid_array[i].pid);
>> > > > +                       list_for_each_entry_safe(entry, next,
>> > > > +                                       &obj->pid_array[i].virt_addr_head,
>> > > > +                                       head) {
>> > > > +                               list_del(&entry->head);
>> > > > +                               kfree(entry);
>> > > > +                       }
>> > > > +
>> > > > +                       obj->pid_array[i].open_handle_count = 0;
>> > > > +                       obj->pid_array[i].pid = 0;
>> > > > +               } else {
>> > > > +                       /* Validate the virtual address list */
>> > > > +                       struct task_struct *task =
>> > > > +                               get_pid_task(tgid, PIDTYPE_PID);
>> > > > +                       if (task == NULL)
>> > > > +                               continue;
>> > > > +
>> > > > +                       list_for_each_entry_safe(entry, next,
>> > > > +                                       &obj->pid_array[i].virt_addr_head,
>> > > > +                                       head) {
>> > > > +                               if (i915_obj_virt_addr_is_valid(gem_obj, tgid,
>> > > > +                               entry->user_virt_addr)) {
>> > > > +                                       DRM_DEBUG("stale_addr=%ld\n",
>> > > > +                                       entry->user_virt_addr);
>> > > > +                                       list_del(&entry->head);
>> > > > +                                       kfree(entry);
>> > > > +                               }
>> > > > +                       }
>> > > > +               }
>> > > > +       }
>> > > > +}
>> > > > +
>> > > > +static int
>> > > > +i915_describe_obj(struct drm_i915_error_state_buf *m,
>> > > > +               struct drm_i915_gem_object *obj)
>> > > > +{
>> > > > +       int i;
>> > > > +       struct i915_vma *vma;
>> > > > +       struct drm_i915_obj_virt_addr *entry;
>> > > > +
>> > > > +       err_printf(m,
>> > > > +               "%p: %7zdK  %s    %s     %s      %s     %s      %s       %s     ",
>> > > > +                  &obj->base,
>> > > > +                  obj->base.size / 1024,
>> > > > +                  get_pin_flag(obj),
>> > > > +                  get_tiling_flag(obj),
>> > > > +                  obj->dirty ? "Y" : "N",
>> > > > +                  obj->base.name ? "Y" : "N",
>> > > > +                  (obj->userptr.mm != 0) ? "Y" : "N",
>> > > > +                  obj->stolen ? "Y" : "N",
>> > > > +                  (obj->pin_mappable || obj->fault_mappable) ? "Y" : "N");
>> > > > +
>> > > > +       if (obj->madv == __I915_MADV_PURGED)
>> > > > +               err_printf(m, " purged    ");
>> > > > +       else if (obj->madv == I915_MADV_DONTNEED)
>> > > > +               err_printf(m, " purgeable   ");
>> > > > +       else if (i915_gem_obj_shmem_pages_alloced(obj) != 0)
>> > > > +               err_printf(m, " allocated   ");
>> > > > +
>> > > > +
>> > > > +       list_for_each_entry(vma, &obj->vma_list, vma_link) {
>> > > > +               if (!i915_is_ggtt(vma->vm))
>> > > > +                       err_puts(m, " PP    ");
>> > > > +               else
>> > > > +                       err_puts(m, " G     ");
>> > > > +               err_printf(m, "  %08lx ", vma->node.start);
>> > > > +       }
>> > > > +
>> > > > +       for (i = 0; i < MAX_OPEN_HANDLE; i++) {
>> > > > +               if (obj->pid_array[i].pid != 0) {
>> > > > +                       err_printf(m, " (%d: %d:",
>> > > > +                       obj->pid_array[i].pid,
>> > > > +                       obj->pid_array[i].open_handle_count);
>> > > > +                       list_for_each_entry(entry,
>> > > > +                               &obj->pid_array[i].virt_addr_head, head) {
>> > > > +                               if (entry->user_virt_addr & 1)
>> > > > +                                       err_printf(m, " %p",
>> > > > +                                       (void *)(entry->user_virt_addr & ~1));
>> > > > +                               else
>> > > > +                                       err_printf(m, " %p*",
>> > > > +                                       (void *)entry->user_virt_addr);
>> > > > +                       }
>> > > > +                       err_printf(m, ") ");
>> > > > +               }
>> > > > +       }
>> > > > +
>> > > > +       err_printf(m, "\n");
>> > > > +
>> > > > +       if (m->bytes == 0 && m->err)
>> > > > +               return m->err;
>> > > > +
>> > > > +       return 0;
>> > > > +}
>> > > > +
>> > > > +static int
>> > > > +i915_drm_gem_obj_info(int id, void *ptr, void *data)
>> > > > +{
>> > > > +       struct drm_i915_gem_object *obj = ptr;
>> > > > +       struct drm_i915_error_state_buf *m = data;
>> > > > +       int ret;
>> > > > +
>> > > > +       i915_obj_pidarray_validate(&obj->base);
>> > > > +       ret = i915_describe_obj(m, obj);
>> > > > +
>> > > > +       return ret;
>> > > > +}
>> > > > +
>> > > > +static int
>> > > > +i915_drm_gem_object_per_file_summary(int id, void *ptr, void *data)
>> > > > +{
>> > > > +       struct pid_stat_entry *pid_entry = data;
>> > > > +       struct drm_i915_gem_object *obj = ptr;
>> > > > +       struct per_file_obj_mem_info *stats = &pid_entry->stats;
>> > > > +       struct drm_hash_item *hash_item;
>> > > > +       int i, obj_shared_count = 0;
>> > > > +
>> > > > +       i915_obj_pidarray_validate(&obj->base);
>> > > > +
>> > > > +       stats->num_obj++;
>> > > > +
>> > > > +       if (obj->base.name) {
>> > > > +
>> > > > +               if (drm_ht_find_item(&pid_entry->namelist,
>> > > > +                               (unsigned long)obj->base.name, &hash_item)) {
>> > > > +                       struct name_entry *entry =
>> > > > +                               kzalloc(sizeof(struct name_entry), GFP_KERNEL);
>> > > > +                       if (entry == NULL) {
>> > > > +                               DRM_ERROR("alloc failed\n");
>> > > > +                               return -ENOMEM;
>> > > > +                       }
>> > > > +                       entry->hash_item.key = obj->base.name;
>> > > > +                       drm_ht_insert_item(&pid_entry->namelist,
>> > > > +                                       &entry->hash_item);
>> > > > +                       list_add_tail(&entry->head, &pid_entry->namefree);
>> > > > +               } else {
>> > > > +                       DRM_DEBUG("Duplicate obj with name %d for process %s\n",
>> > > > +                               obj->base.name, stats->process_name);
>> > > > +                       return 0;
>> > > > +               }
>> > > > +               for (i = 0; i < MAX_OPEN_HANDLE; i++) {
>> > > > +                       if (obj->pid_array[i].pid != 0)
>> > > > +                               obj_shared_count++;
>> > > > +               }
>> > > > +               if (WARN_ON(obj_shared_count == 0))
>> > > > +                       return 1;
>> > > > +
>> > > > +               DRM_DEBUG("Obj: %p, shared count =%d\n",
>> > > > +                       &obj->base, obj_shared_count);
>> > > > +
>> > > > +               if (obj_shared_count > 1)
>> > > > +                       stats->num_obj_shared++;
>> > > > +               else
>> > > > +                       stats->num_obj_private++;
>> > > > +       } else {
>> > > > +               obj_shared_count = 1;
>> > > > +               stats->num_obj_private++;
>> > > > +       }
>> > > > +
>> > > > +       if (i915_gem_obj_bound_any(obj)) {
>> > > > +               stats->num_obj_gtt_bound++;
>> > > > +               if (obj_shared_count > 1)
>> > > > +                       stats->gtt_space_allocated_shared += obj->base.size;
>> > > > +               else
>> > > > +                       stats->gtt_space_allocated_priv += obj->base.size;
>> > > > +       }
>> > > > +
>> > > > +       if (obj->stolen) {
>> > > > +               stats->num_obj_stolen++;
>> > > > +               stats->stolen_space_allocated += obj->base.size;
>> > > > +       } else if (obj->madv == __I915_MADV_PURGED) {
>> > > > +               stats->num_obj_purged++;
>> > > > +       } else if (obj->madv == I915_MADV_DONTNEED) {
>> > > > +               stats->num_obj_purgeable++;
>> > > > +               stats->num_obj_allocated++;
>> > > > +               if (i915_gem_obj_shmem_pages_alloced(obj) != 0) {
>> > > > +                       stats->phys_space_purgeable += obj->base.size;
>> > > > +                       if (obj_shared_count > 1) {
>> > > > +                               stats->phys_space_allocated_shared +=
>> > > > +                                       obj->base.size;
>> > > > +                               stats->phys_space_shared_proportion +=
>> > > > +                                       obj->base.size/obj_shared_count;
>> > > > +                       } else
>> > > > +                               stats->phys_space_allocated_priv +=
>> > > > +                                       obj->base.size;
>> > > > +               } else
>> > > > +                       WARN_ON(1);
>> > > > +       } else if (i915_gem_obj_shmem_pages_alloced(obj) != 0) {
>> > > > +               stats->num_obj_allocated++;
>> > > > +                       if (obj_shared_count > 1) {
>> > > > +                               stats->phys_space_allocated_shared +=
>> > > > +                                       obj->base.size;
>> > > > +                               stats->phys_space_shared_proportion +=
>> > > > +                                       obj->base.size/obj_shared_count;
>> > > > +                       }
>> > > > +               else
>> > > > +                       stats->phys_space_allocated_priv += obj->base.size;
>> > > > +       }
>> > > > +       if (obj->fault_mappable) {
>> > > > +               stats->num_obj_fault_mappable++;
>> > > > +               stats->fault_mappable_size += obj->base.size;
>> > > > +       }
>> > > > +       return 0;
>> > > > +}
>> > > > +
>> > > > +int i915_get_drm_clients_info(struct drm_i915_error_state_buf *m,
>> > > > +                       struct drm_device *dev)
>> > > > +{
>> > > > +       struct drm_file *file;
>> > > > +       struct drm_i915_private *dev_priv = dev->dev_private;
>> > > > +
>> > > > +       struct name_entry *entry, *next;
>> > > > +       struct pid_stat_entry *pid_entry, *temp_entry;
>> > > > +       struct pid_stat_entry *new_pid_entry, *new_temp_entry;
>> > > > +       struct list_head per_pid_stats, sorted_pid_stats;
>> > > > +       int ret = 0, total_shared_prop_space = 0, total_priv_space = 0;
>> > > > +
>> > > > +       INIT_LIST_HEAD(&per_pid_stats);
>> > > > +       INIT_LIST_HEAD(&sorted_pid_stats);
>> > > > +
>> > > > +       err_printf(m,
>> > > > +               "\n\n  pid   Total  Shared  Priv   Purgeable  Alloced  SharedPHYsize   SharedPHYprop    PrivPHYsize   PurgeablePHYsize   process\n");
>> > > > +
>> > > > +       /* Protect the access to global drm resources such as filelist. Protect
>> > > > +        * against their removal under our noses, while in use.
>> > > > +        */
>> > > > +       mutex_lock(&drm_global_mutex);
>> > > > +       ret = i915_mutex_lock_interruptible(dev);
>> > > > +       if (ret) {
>> > > > +               mutex_unlock(&drm_global_mutex);
>> > > > +               return ret;
>> > > > +       }
>> > > > +
>> > > > +       list_for_each_entry(file, &dev->filelist, lhead) {
>> > > > +               struct pid *tgid;
>> > > > +               struct drm_i915_file_private *file_priv = file->driver_priv;
>> > > > +               int pid_num, found = 0;
>> > > > +
>> > > > +               tgid = file_priv->tgid;
>> > > > +               pid_num = pid_nr(tgid);
>> > > > +
>> > > > +               list_for_each_entry(pid_entry, &per_pid_stats, head) {
>> > > > +                       if (pid_entry->pid_num == pid_num) {
>> > > > +                               found = 1;
>> > > > +                               break;
>> > > > +                       }
>> > > > +               }
>> > > > +
>> > > > +               if (!found) {
>> > > > +                       struct pid_stat_entry *new_entry =
>> > > > +                               kzalloc(sizeof(struct pid_stat_entry),
>> > > > +                                       GFP_KERNEL);
>> > > > +                       if (new_entry == NULL) {
>> > > > +                               DRM_ERROR("alloc failed\n");
>> > > > +                               ret = -ENOMEM;
>> > > > +                               goto out_unlock;
>> > > > +                       }
>> > > > +                       new_entry->pid = tgid;
>> > > > +                       new_entry->pid_num = pid_num;
>> > > > +                       list_add_tail(&new_entry->head, &per_pid_stats);
>> > > > +                       drm_ht_create(&new_entry->namelist,
>> > > > +                               DRM_MAGIC_HASH_ORDER);
>> > > > +                       INIT_LIST_HEAD(&new_entry->namefree);
>> > > > +                       new_entry->stats.process_name = file_priv->process_name;
>> > > > +                       pid_entry = new_entry;
>> > > > +               }
>> > > > +
>> > > > +               ret = idr_for_each(&file->object_idr,
>> > > > +                       &i915_drm_gem_object_per_file_summary, pid_entry);
>> > > > +               if (ret)
>> > > > +                       break;
>> > > > +       }
>> > > > +
>> > > > +       list_for_each_entry_safe(pid_entry, temp_entry, &per_pid_stats, head) {
>> > > > +               if (list_empty(&sorted_pid_stats)) {
>> > > > +                       list_del(&pid_entry->head);
>> > > > +                       list_add_tail(&pid_entry->head, &sorted_pid_stats);
>> > > > +                       continue;
>> > > > +               }
>> > > > +
>> > > > +               list_for_each_entry_safe(new_pid_entry, new_temp_entry,
>> > > > +                       &sorted_pid_stats, head) {
>> > > > +                       int prev_space =
>> > > > +                               pid_entry->stats.phys_space_shared_proportion +
>> > > > +                               pid_entry->stats.phys_space_allocated_priv;
>> > > > +                       int new_space =
>> > > > +                               new_pid_entry->
>> > > > +                               stats.phys_space_shared_proportion +
>> > > > +                               new_pid_entry->stats.phys_space_allocated_priv;
>> > > > +                       if (prev_space > new_space) {
>> > > > +                               list_del(&pid_entry->head);
>> > > > +                               list_add_tail(&pid_entry->head,
>> > > > +                                       &new_pid_entry->head);
>> > > > +                               break;
>> > > > +                       }
>> > > > +                       if (list_is_last(&new_pid_entry->head,
>> > > > +                               &sorted_pid_stats)) {
>> > > > +                               list_del(&pid_entry->head);
>> > > > +                               list_add_tail(&pid_entry->head,
>> > > > +                                               &sorted_pid_stats);
>> > > > +                       }
>> > > > +               }
>> > > > +       }
>> > > > +
>> > > > +       list_for_each_entry_safe(pid_entry, temp_entry,
>> > > > +                               &sorted_pid_stats, head) {
>> > > > +               struct task_struct *task = get_pid_task(pid_entry->pid,
>> > > > +                                                       PIDTYPE_PID);
>> > > > +               err_printf(m,
>> > > > +                       "%5d %6d %6d %6d %9d %8d %14zdK %14zdK %14zdK  %14zdK     %s",
>> > > > +                          pid_entry->pid_num,
>> > > > +                          pid_entry->stats.num_obj,
>> > > > +                          pid_entry->stats.num_obj_shared,
>> > > > +                          pid_entry->stats.num_obj_private,
>> > > > +                          pid_entry->stats.num_obj_purgeable,
>> > > > +                          pid_entry->stats.num_obj_allocated,
>> > > > +                          pid_entry->stats.phys_space_allocated_shared/1024,
>> > > > +                          pid_entry->stats.phys_space_shared_proportion/1024,
>> > > > +                          pid_entry->stats.phys_space_allocated_priv/1024,
>> > > > +                          pid_entry->stats.phys_space_purgeable/1024,
>> > > > +                          pid_entry->stats.process_name);
>> > > > +
>> > > > +               if (task == NULL)
>> > > > +                       err_printf(m, "*\n");
>> > > > +               else
>> > > > +                       err_printf(m, "\n");
>> > > > +
>> > > > +               total_shared_prop_space +=
>> > > > +                       pid_entry->stats.phys_space_shared_proportion/1024;
>> > > > +               total_priv_space +=
>> > > > +                       pid_entry->stats.phys_space_allocated_priv/1024;
>> > > > +               list_del(&pid_entry->head);
>> > > > +
>> > > > +               list_for_each_entry_safe(entry, next,
>> > > > +                                       &pid_entry->namefree, head) {
>> > > > +                       list_del(&entry->head);
>> > > > +                       drm_ht_remove_item(&pid_entry->namelist,
>> > > > +                                       &entry->hash_item);
>> > > > +                       kfree(entry);
>> > > > +               }
>> > > > +               drm_ht_remove(&pid_entry->namelist);
>> > > > +               kfree(pid_entry);
>> > > > +       }
>> > > > +
>> > > > +       err_printf(m,
>> > > > +               "\t\t\t\t\t\t\t\t--------------\t-------------\t--------\n");
>> > > > +       err_printf(m,
>> > > > +               "\t\t\t\t\t\t\t\t%13zdK\t%12zdK\tTotal\n",
>> > > > +                       total_shared_prop_space, total_priv_space);
>> > > > +
>> > > > +out_unlock:
>> > > > +       mutex_unlock(&dev->struct_mutex);
>> > > > +       mutex_unlock(&drm_global_mutex);
>> > > > +
>> > > > +       if (ret)
>> > > > +               return ret;
>> > > > +       if (m->bytes == 0 && m->err)
>> > > > +               return m->err;
>> > > > +
>> > > > +       return 0;
>> > > > +}
>> > > > +
>> > > > +int i915_gem_get_all_obj_info(struct drm_i915_error_state_buf *m,
>> > > > +                       struct drm_device *dev)
>> > > > +{
>> > > > +       struct drm_file *file;
>> > > > +       int pid_num, ret = 0;
>> > > > +
>> > > > +       /* Protect the access to global drm resources such as filelist. Protect
>> > > > +        * against their removal under our noses, while in use.
>> > > > +        */
>> > > > +       mutex_lock(&drm_global_mutex);
>> > > > +       ret = i915_mutex_lock_interruptible(dev);
>> > > > +       if (ret) {
>> > > > +               mutex_unlock(&drm_global_mutex);
>> > > > +               return ret;
>> > > > +       }
>> > > > +
>> > > > +       list_for_each_entry(file, &dev->filelist, lhead) {
>> > > > +               struct pid *tgid;
>> > > > +               struct drm_i915_file_private *file_priv = file->driver_priv;
>> > > > +
>> > > > +               tgid = file_priv->tgid;
>> > > > +               pid_num = pid_nr(tgid);
>> > > > +
>> > > > +               err_printf(m, "\n\n  PID  process\n");
>> > > > +
>> > > > +               err_printf(m, "%5d  %s\n",
>> > > > +                          pid_num, file_priv->process_name);
>> > > > +
>> > > > +               err_printf(m,
>> > > > +                       "\n Obj Identifier       Size Pin Tiling Dirty Shared Vmap Stolen Mappable  AllocState Global/PP  GttOffset (PID: handle count: user virt addrs)\n");
>> > > > +               ret = idr_for_each(&file->object_idr,
>> > > > +                               &i915_drm_gem_obj_info, m);
>> > > > +               if (ret)
>> > > > +                       break;
>> > > > +       }
>> > > > +       mutex_unlock(&dev->struct_mutex);
>> > > > +       mutex_unlock(&drm_global_mutex);
>> > > > +
>> > > > +       if (ret)
>> > > > +               return ret;
>> > > > +       if (m->bytes == 0 && m->err)
>> > > > +               return m->err;
>> > > > +
>> > > > +       return 0;
>> > > > +}
>> > > > +
>> > > > diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
>> > > > index 2c87a79..089c7df 100644
>> > > > --- a/drivers/gpu/drm/i915/i915_gpu_error.c
>> > > > +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
>> > > > @@ -161,7 +161,7 @@ static void i915_error_vprintf(struct drm_i915_error_state_buf *e,
>> > > >         __i915_error_advance(e, len);
>> > > >  }
>> > > >
>> > > > -static void i915_error_puts(struct drm_i915_error_state_buf *e,
>> > > > +void i915_error_puts(struct drm_i915_error_state_buf *e,
>> > > >                             const char *str)
>> > > >  {
>> > > >         unsigned len;
>> > > > diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
>> > > > index 503847f..b204c92 100644
>> > > > --- a/drivers/gpu/drm/i915/i915_sysfs.c
>> > > > +++ b/drivers/gpu/drm/i915/i915_sysfs.c
>> > > > @@ -582,6 +582,64 @@ static ssize_t error_state_write(struct file *file, struct kobject *kobj,
>> > > >         return count;
>> > > >  }
>> > > >
>> > > > +static ssize_t i915_gem_clients_state_read(struct file *filp,
>> > > > +                               struct kobject *kobj,
>> > > > +                               struct bin_attribute *attr,
>> > > > +                               char *buf, loff_t off, size_t count)
>> > > > +{
>> > > > +       struct device *kdev = container_of(kobj, struct device, kobj);
>> > > > +       struct drm_minor *minor = dev_to_drm_minor(kdev);
>> > > > +       struct drm_device *dev = minor->dev;
>> > > > +       struct drm_i915_error_state_buf error_str;
>> > > > +       ssize_t ret_count = 0;
>> > > > +       int ret;
>> > > > +
>> > > > +       ret = i915_error_state_buf_init(&error_str, to_i915(dev), count, off);
>> > > > +       if (ret)
>> > > > +               return ret;
>> > > > +
>> > > > +       ret = i915_get_drm_clients_info(&error_str, dev);
>> > > > +       if (ret)
>> > > > +               goto out;
>> > > > +
>> > > > +       ret_count = count < error_str.bytes ? count : error_str.bytes;
>> > > > +
>> > > > +       memcpy(buf, error_str.buf, ret_count);
>> > > > +out:
>> > > > +       i915_error_state_buf_release(&error_str);
>> > > > +
>> > > > +       return ret ?: ret_count;
>> > > > +}
>> > > > +
>> > > > +static ssize_t i915_gem_objects_state_read(struct file *filp,
>> > > > +                               struct kobject *kobj,
>> > > > +                               struct bin_attribute *attr,
>> > > > +                               char *buf, loff_t off, size_t count)
>> > > > +{
>> > > > +       struct device *kdev = container_of(kobj, struct device, kobj);
>> > > > +       struct drm_minor *minor = dev_to_drm_minor(kdev);
>> > > > +       struct drm_device *dev = minor->dev;
>> > > > +       struct drm_i915_error_state_buf error_str;
>> > > > +       ssize_t ret_count = 0;
>> > > > +       int ret;
>> > > > +
>> > > > +       ret = i915_error_state_buf_init(&error_str, to_i915(dev), count, off);
>> > > > +       if (ret)
>> > > > +               return ret;
>> > > > +
>> > > > +       ret = i915_gem_get_all_obj_info(&error_str, dev);
>> > > > +       if (ret)
>> > > > +               goto out;
>> > > > +
>> > > > +       ret_count = count < error_str.bytes ? count : error_str.bytes;
>> > > > +
>> > > > +       memcpy(buf, error_str.buf, ret_count);
>> > > > +out:
>> > > > +       i915_error_state_buf_release(&error_str);
>> > > > +
>> > > > +       return ret ?: ret_count;
>> > > > +}
>> > > > +
>> > > >  static struct bin_attribute error_state_attr = {
>> > > >         .attr.name = "error",
>> > > >         .attr.mode = S_IRUSR | S_IWUSR,
>> > > > @@ -590,6 +648,20 @@ static struct bin_attribute error_state_attr = {
>> > > >         .write = error_state_write,
>> > > >  };
>> > > >
>> > > > +static struct bin_attribute i915_gem_client_state_attr = {
>> > > > +       .attr.name = "i915_gem_meminfo",
>> > > > +       .attr.mode = S_IRUSR | S_IWUSR,
>> > > > +       .size = 0,
>> > > > +       .read = i915_gem_clients_state_read,
>> > > > +};
>> > > > +
>> > > > +static struct bin_attribute i915_gem_objects_state_attr = {
>> > > > +       .attr.name = "i915_gem_objinfo",
>> > > > +       .attr.mode = S_IRUSR | S_IWUSR,
>> > > > +       .size = 0,
>> > > > +       .read = i915_gem_objects_state_read,
>> > > > +};
>> > > > +
>> > > >  void i915_setup_sysfs(struct drm_device *dev)
>> > > >  {
>> > > >         int ret;
>> > > > @@ -627,6 +699,17 @@ void i915_setup_sysfs(struct drm_device *dev)
>> > > >                                     &error_state_attr);
>> > > >         if (ret)
>> > > >                 DRM_ERROR("error_state sysfs setup failed\n");
>> > > > +
>> > > > +       ret = sysfs_create_bin_file(&dev->primary->kdev->kobj,
>> > > > +                                   &i915_gem_client_state_attr);
>> > > > +       if (ret)
>> > > > +               DRM_ERROR("i915_gem_client_state sysfs setup failed\n");
>> > > > +
>> > > > +       ret = sysfs_create_bin_file(&dev->primary->kdev->kobj,
>> > > > +                                   &i915_gem_objects_state_attr);
>> > > > +       if (ret)
>> > > > +               DRM_ERROR("i915_gem_objects_state sysfs setup failed\n");
>> > > > +
>> > > >  }
>> > > >
>> > > >  void i915_teardown_sysfs(struct drm_device *dev)
>> > > > --
>> > > > 1.8.5.1
>> > > >
>> > > > _______________________________________________
>> > > > Intel-gfx mailing list
>> > > > Intel-gfx@lists.freedesktop.org
>> > > > http://lists.freedesktop.org/mailman/listinfo/intel-gfx
>> > >
>> >
>>
>
sourab.gupta@intel.com Sept. 4, 2014, 11:52 a.m. UTC | #5
On Thu, 2014-09-04 at 10:01 +0000, Daniel Vetter wrote:
> On Thu, Sep 4, 2014 at 9:03 AM, Gupta, Sourab <sourab.gupta@intel.com> wrote:
> > On Wed, 2014-09-03 at 13:09 +0000, Daniel Vetter wrote:
> >> On Wed, Sep 03, 2014 at 11:49:52AM +0000, Gupta, Sourab wrote:
> >> > On Wed, 2014-09-03 at 10:58 +0000, Daniel Vetter wrote:
> >> > > On Wed, Sep 03, 2014 at 03:39:55PM +0530, sourab.gupta@intel.com wrote:
> >> > > > From: Sourab Gupta <sourab.gupta@intel.com>
> >> > > >
> >> > > > Currently the Graphics Driver provides an interface through which
> >> > > > one can get a snapshot of the overall Graphics memory consumption.
> >> > > > Also there is an interface available, which provides information
> >> > > > about the several memory related attributes of every single Graphics
> >> > > > buffer created by the various clients.
> >> > > >
> >> > > > There is a requirement of a new interface for achieving below
> >> > > > functionalities:
> >> > > > 1) Need to provide Client based detailed information about the
> >> > > > distribution of Graphics memory
> >> > > > 2) Need to provide an interface which can provide info about the
> >> > > > sharing of Graphics buffers between the clients.
> >> > > >
> >> > > > The client based interface would also aid in debugging of
> >> > > > memory usage/consumption by each client & debug memleak related issues.
> >> > > >
> >> > > > With this new interface,
> >> > > > 1) In case of memleak scenarios, we can easily zero in on the culprit
> >> > > > client which is unexpectedly holding on the Graphics buffers for an
> >> > > > inordinate amount of time.
> >> > > > 2) We can get an estimate of the instantaneous memory footprint of
> >> > > > every Graphics client.
> >> > > > 3) We can now trace all the processes sharing a particular Graphics buffer.
> >> > > >
> >> > > > By means of this patch we try to provide a sysfs interface to achieve
> >> > > > the mentioned functionalities.
> >> > > >
> >> > > > There are two files created in sysfs:
> >> > > > 'i915_gem_meminfo' will provide summary of the graphics resources used by
> >> > > > each graphics client.
> >> > > > 'i915_gem_objinfo' will provide detailed view of each object created by
> >> > > > individual clients.
> >> > > >
> >> > > > v2: Changes made for
> >> > > >     - adding support to report user virtual addresses of mapped buffers
> >> > > >     - replacing pid based reporting with tgid based one
> >> > > >     - checkpatch and other misc cleanup
> >> > > >
> >> > > > Signed-off-by: Sourab Gupta <sourab.gupta@intel.com>
> >> > > > Signed-off-by: Akash Goel <akash.goel@intel.com>
> >> > >
> >> > > Sorry I didn't spot this the first time around, but I think sysfs is the
> >> > > wrong place for this.
> >> > >
> >> > > Generally sysfs is for setting/reading per-object values, and it has the
> >> > > big rule that there should be only _one_ value per file. The error state
> >> > > is a bit an exception, but otoh it's also just the full dump as a binary
> >> > > file (which for historical reasons is printed as ascii).
> >> > >
> >> > > The other issue is that imo this should be a generic interface, so that we
> >> > > can write a gpu_top tool for dumping memory consumers which works on all
> >> > > linux platforms.
> >> > >
> >> > > To avoid delaying for a long time can we just move ahead by putting this
> >> > > into debugfs?
> >> > >
> >> > > Also in debugfs there's already a lot of this stuff around - why is that
> >> > > not sufficient and could we extend it somehow with the missing bits?
> >> > >
> >> > > Thanks, Daniel
> >> >
> >> > Hi Daniel,
> >> >
> >> > Thanks for your inputs.
> >> > We had originally put the patch in sysfs, as there was a requirement for
> >> > this feature to be available in production kernels also.
> >> > We can move it to debugfs to move ahead with this. I'll submit the
> >> > debugfs version of this patch next time.
> >>
> >> Yeah sysfs is the only place where we have a stable api, but that also
> >> implies that requirements are a _lot_ more stringent. At least we need
> >> testcases to make sure the interfaces actually do what we want them to do,
> >> and to make sure we don't break the interface by accident.
> >>
> >> > Also,
> >> > we developed this new interface to overcome the deficiencies of the
> >> > existing interface. With this new interface, we can provide
> >> > client-based detailed information about the distribution of Graphics
> >> > memory. This gives information about the various states of the
> >> > graphics objects opened per process (summarized as well as detailed
> >> > info).
> >> > It also gives information about Graphics buffers shared between the
> >> > clients, and gives the user-mapped virtual address of all the mapped
> >> > graphics buffers.
> >> > It was not feasible to fit all this info in the existing interface. So
> >> > we decided to go ahead with a new interface for this functionality.
> >>
> >> Well the problem is that adding more files like that increases the
> >> maintenance burden. So if there's some way to compute the information you
> >> want from information already provided in debugfs, then I prefer we do
> >> that at first.
> >> -Daniel
> >
> > Hi Daniel,
> >
> > We went through the existing debugfs interfaces, but we couldn't derive
> > the information we need from these interfaces.
> > For our requirement, we require a process wise breakup of the objects
> > and memory consumed, along with the detailed object statistics. Also,
> > there is a requirement for getting info of the shared objects and user
> > mapped virtual address of all the objects "per process".
> > From the existing interfaces, the primary problem is that we don't get
> > the "process wise breakup" of all these statistics. There, we only get
> > the cumulative stats for various buffers like active, inactive, pinned,
> > etc.
> >
> > If it is okay, we can join our two files into a single one, and have a
> > single debugfs file, with a process wise listing of detailed object
> > stats, and summarized stats at the end.
> > I have shared the output of our interface and attached in text files
> > with this mail, for your reference.
> >
> > Please excuse me for excluding the public mailing list, as I was not
> > sure whether I could share this output there. You can add public mailing
> > lists again if we can share it there.
> 
> Interface design discussions should happen in public (so that
> non-intel people can jump in, which happens rather often for other
> drivers actually). But at least include internal mailing lists next
> time around. Also adding dri-devel.
> 
> The problem I see with your approach is that "process-wise" is not a
> solid concept with drm. We can dump information per open drm file, but
> that file descriptor can be shared between processes. And the latest
> generation of linux compositor protocols (like dri3) actually take
> advantage of this.

By "process-wise" sharing, do you mean the sharing of the drm file
across different processes (having different tgids), or sharing
across the threads of a single process (having the same tgid)?
Sorry, we are not aware of the drm file being shared across processes in
the dri3 protocols; in Android userspace we have not come across such a
scenario. Can you please shed some light on it?

In our design, we have a tgid-based accounting mechanism. As long as the
drm file is shared within the threads of the same process, its resources
(objects and memory) are accounted together. But if the drm file is
shared across different processes (different tgids), this case is still an
open question.
Will our tgid-based accounting also cover the dri3 usecases (if they
share the drm file within the same tgid)?
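
For reference, here is a minimal userspace sketch (not part of this patch,
only standard glibc/pthreads assumed) of the pid/tgid distinction the
accounting relies on: threads created with pthread_create() report the
creator's tgid, while a fork()ed child gets a new tgid and would therefore
be accounted separately.

/* tgid_demo.c - build with: gcc -pthread tgid_demo.c */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/syscall.h>
#include <sys/wait.h>

static void *thread_fn(void *arg)
{
	/* Same tgid (getpid()) as the main thread, different tid. */
	printf("thread: tgid=%d tid=%ld\n", getpid(), (long)syscall(SYS_gettid));
	return NULL;
}

int main(void)
{
	pthread_t t;

	printf("main:   tgid=%d tid=%ld\n", getpid(), (long)syscall(SYS_gettid));

	pthread_create(&t, NULL, thread_fn, NULL);
	pthread_join(t, NULL);

	if (fork() == 0) {
		/* New process: new tgid, so it would be accounted separately. */
		printf("child:  tgid=%d tid=%ld\n", getpid(), (long)syscall(SYS_gettid));
		_exit(0);
	}
	wait(NULL);
	return 0;
}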

> 
> Now, procfs has links to go from processes to files, but unfortunately
> not file descriptors. So we actually have a gap here with core drm, if
> not even core vfs.
> 
> Resolving the "which process has which buffer object mapped" question
> is easier: procfs already has a maps file, and the mmap offsets
> are global. So you can already figure out which object is mapped
> where, as long as you expose the fake gtt mmap offset somewhere.
> 
> This doesn't work for shmem cpu mmappings though ...
> 
> Overall getting this right looks like a fairly daunting task (for
> upstream due to much more diverse requirements).
> -Daniel
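
A minimal sketch of the /proc/<pid>/maps lookup suggested above (not part
of the patch; the offset argument is assumed to be the fake GTT mmap
offset reported for an object, and matching on "/dev/dri/" is an
assumption about how such a mapping shows up in the maps file):

/* gtt_map_check.c - does <pid> have the object at <offset> gtt-mapped? */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int pid_has_gtt_mapping(int pid, unsigned long long fake_offset)
{
	char path[64], line[512];
	unsigned long long start, end, off;
	char perms[8], dev[16];
	FILE *f;
	int found = 0;

	snprintf(path, sizeof(path), "/proc/%d/maps", pid);
	f = fopen(path, "r");
	if (!f)
		return -1;

	while (fgets(line, sizeof(line), f)) {
		/* each line: start-end perms offset dev inode pathname */
		if (sscanf(line, "%llx-%llx %7s %llx %15s",
			   &start, &end, perms, &off, dev) != 5)
			continue;
		if (off == fake_offset && strstr(line, "/dev/dri/")) {
			found = 1;
			break;
		}
	}
	fclose(f);
	return found;
}

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <pid> <gtt_offset_hex>\n", argv[0]);
		return 1;
	}
	printf("%s\n", pid_has_gtt_mapping(atoi(argv[1]),
			strtoull(argv[2], NULL, 16)) == 1 ?
			"mapped" : "not mapped");
	return 0;
}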
> 
> 
> >
> >
> > Thanks,
> > Sourab
> >
> >>
> >> >
> >> > Thanks,
> >> > Sourab
> >> >
> >> > >
> >> > > > ---
> >> > > >  drivers/gpu/drm/i915/i915_dma.c       |   1 +
> >> > > >  drivers/gpu/drm/i915/i915_drv.c       |   2 +
> >> > > >  drivers/gpu/drm/i915/i915_drv.h       |  26 ++
> >> > > >  drivers/gpu/drm/i915/i915_gem.c       | 169 ++++++++++-
> >> > > >  drivers/gpu/drm/i915/i915_gem_debug.c | 542 ++++++++++++++++++++++++++++++++++
> >> > > >  drivers/gpu/drm/i915/i915_gpu_error.c |   2 +-
> >> > > >  drivers/gpu/drm/i915/i915_sysfs.c     |  83 ++++++
> >> > > >  7 files changed, 822 insertions(+), 3 deletions(-)
> >> > > >
> >> > > > diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
> >> > > > index a58fed9..7ea3250 100644
> >> > > > --- a/drivers/gpu/drm/i915/i915_dma.c
> >> > > > +++ b/drivers/gpu/drm/i915/i915_dma.c
> >> > > > @@ -1985,6 +1985,7 @@ void i915_driver_postclose(struct drm_device *dev, struct drm_file *file)
> >> > > >  {
> >> > > >         struct drm_i915_file_private *file_priv = file->driver_priv;
> >> > > >
> >> > > > +       kfree(file_priv->process_name);
> >> > > >         if (file_priv && file_priv->bsd_ring)
> >> > > >                 file_priv->bsd_ring = NULL;
> >> > > >         kfree(file_priv);
> >> > > > diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> >> > > > index 1d6d9ac..9bee20e 100644
> >> > > > --- a/drivers/gpu/drm/i915/i915_drv.c
> >> > > > +++ b/drivers/gpu/drm/i915/i915_drv.c
> >> > > > @@ -1628,6 +1628,8 @@ static struct drm_driver driver = {
> >> > > >         .debugfs_init = i915_debugfs_init,
> >> > > >         .debugfs_cleanup = i915_debugfs_cleanup,
> >> > > >  #endif
> >> > > > +       .gem_open_object = i915_gem_open_object,
> >> > > > +       .gem_close_object = i915_gem_close_object,
> >> > > >         .gem_free_object = i915_gem_free_object,
> >> > > >         .gem_vm_ops = &i915_gem_vm_ops,
> >> > > >
> >> > > > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> >> > > > index 36f3da6..43ba7c4 100644
> >> > > > --- a/drivers/gpu/drm/i915/i915_drv.h
> >> > > > +++ b/drivers/gpu/drm/i915/i915_drv.h
> >> > > > @@ -1765,6 +1765,11 @@ struct drm_i915_gem_object_ops {
> >> > > >  #define INTEL_FRONTBUFFER_ALL_MASK(pipe) \
> >> > > >         (0xf << (INTEL_FRONTBUFFER_BITS_PER_PIPE * (pipe)))
> >> > > >
> >> > > > +struct drm_i915_obj_virt_addr {
> >> > > > +       struct list_head head;
> >> > > > +       unsigned long user_virt_addr;
> >> > > > +};
> >> > > > +
> >> > > >  struct drm_i915_gem_object {
> >> > > >         struct drm_gem_object base;
> >> > > >
> >> > > > @@ -1890,6 +1895,13 @@ struct drm_i915_gem_object {
> >> > > >                         struct work_struct *work;
> >> > > >                 } userptr;
> >> > > >         };
> >> > > > +
> >> > > > +#define MAX_OPEN_HANDLE 20
> >> > > > +       struct {
> >> > > > +               struct list_head virt_addr_head;
> >> > > > +               pid_t pid;
> >> > > > +               int open_handle_count;
> >> > > > +       } pid_array[MAX_OPEN_HANDLE];
> >> > > >  };
> >> > > >  #define to_intel_bo(x) container_of(x, struct drm_i915_gem_object, base)
> >> > > >
> >> > > > @@ -1940,6 +1952,8 @@ struct drm_i915_gem_request {
> >> > > >  struct drm_i915_file_private {
> >> > > >         struct drm_i915_private *dev_priv;
> >> > > >         struct drm_file *file;
> >> > > > +       char *process_name;
> >> > > > +       struct pid *tgid;
> >> > > >
> >> > > >         struct {
> >> > > >                 spinlock_t lock;
> >> > > > @@ -2370,6 +2384,10 @@ void i915_init_vm(struct drm_i915_private *dev_priv,
> >> > > >                   struct i915_address_space *vm);
> >> > > >  void i915_gem_free_object(struct drm_gem_object *obj);
> >> > > >  void i915_gem_vma_destroy(struct i915_vma *vma);
> >> > > > +int i915_gem_open_object(struct drm_gem_object *gem_obj,
> >> > > > +                       struct drm_file *file_priv);
> >> > > > +int i915_gem_close_object(struct drm_gem_object *gem_obj,
> >> > > > +                       struct drm_file *file_priv);
> >> > > >
> >> > > >  #define PIN_MAPPABLE 0x1
> >> > > >  #define PIN_NONBLOCK 0x2
> >> > > > @@ -2420,6 +2438,8 @@ int i915_gem_dumb_create(struct drm_file *file_priv,
> >> > > >                          struct drm_mode_create_dumb *args);
> >> > > >  int i915_gem_mmap_gtt(struct drm_file *file_priv, struct drm_device *dev,
> >> > > >                       uint32_t handle, uint64_t *offset);
> >> > > > +int i915_gem_obj_shmem_pages_alloced(struct drm_i915_gem_object *obj);
> >> > > > +
> >> > > >  /**
> >> > > >   * Returns true if seq1 is later than seq2.
> >> > > >   */
> >> > > > @@ -2686,6 +2706,10 @@ int i915_verify_lists(struct drm_device *dev);
> >> > > >  #else
> >> > > >  #define i915_verify_lists(dev) 0
> >> > > >  #endif
> >> > > > +int i915_get_drm_clients_info(struct drm_i915_error_state_buf *m,
> >> > > > +                               struct drm_device *dev);
> >> > > > +int i915_gem_get_all_obj_info(struct drm_i915_error_state_buf *m,
> >> > > > +                               struct drm_device *dev);
> >> > > >
> >> > > >  /* i915_debugfs.c */
> >> > > >  int i915_debugfs_init(struct drm_minor *minor);
> >> > > > @@ -2699,6 +2723,8 @@ static inline void intel_display_crc_init(struct drm_device *dev) {}
> >> > > >  /* i915_gpu_error.c */
> >> > > >  __printf(2, 3)
> >> > > >  void i915_error_printf(struct drm_i915_error_state_buf *e, const char *f, ...);
> >> > > > +void i915_error_puts(struct drm_i915_error_state_buf *e,
> >> > > > +                           const char *str);
> >> > > >  int i915_error_state_to_str(struct drm_i915_error_state_buf *estr,
> >> > > >                             const struct i915_error_state_file_priv *error);
> >> > > >  int i915_error_state_buf_init(struct drm_i915_error_state_buf *eb,
> >> > > > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> >> > > > index 6c68570..3c36486 100644
> >> > > > --- a/drivers/gpu/drm/i915/i915_gem.c
> >> > > > +++ b/drivers/gpu/drm/i915/i915_gem.c
> >> > > > @@ -1461,6 +1461,45 @@ unlock:
> >> > > >         return ret;
> >> > > >  }
> >> > > >
> >> > > > +static void
> >> > > > +i915_gem_obj_insert_virt_addr(struct drm_i915_gem_object *obj,
> >> > > > +                               unsigned long addr,
> >> > > > +                               bool is_map_gtt)
> >> > > > +{
> >> > > > +       pid_t current_pid = task_tgid_nr(current);
> >> > > > +       int i, found = 0;
> >> > > > +
> >> > > > +       if (is_map_gtt)
> >> > > > +               addr |= 1;
> >> > > > +
> >> > > > +       for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> >> > > > +               if (obj->pid_array[i].pid == current_pid) {
> >> > > > +                       struct drm_i915_obj_virt_addr *entry, *new_entry;
> >> > > > +
> >> > > > +                       list_for_each_entry(entry,
> >> > > > +                                           &obj->pid_array[i].virt_addr_head,
> >> > > > +                                           head) {
> >> > > > +                               if (entry->user_virt_addr == addr) {
> >> > > > +                                       found = 1;
> >> > > > +                                       break;
> >> > > > +                               }
> >> > > > +                       }
> >> > > > +                       if (found)
> >> > > > +                               break;
> >> > > > +                       new_entry = kzalloc
> >> > > > +                               (sizeof(struct drm_i915_obj_virt_addr),
> >> > > > +                               GFP_KERNEL);
> >> > > > +                       new_entry->user_virt_addr = addr;
> >> > > > +                       list_add_tail(&new_entry->head,
> >> > > > +                               &obj->pid_array[i].virt_addr_head);
> >> > > > +                       break;
> >> > > > +               }
> >> > > > +       }
> >> > > > +       if (i == MAX_OPEN_HANDLE)
> >> > > > +               DRM_DEBUG("Couldn't find matching pid %d for obj 0x%x\n",
> >> > > > +                       current_pid, (u32) obj);
> >> > > > +}
> >> > > > +
> >> > > >  /**
> >> > > >   * Maps the contents of an object, returning the address it is mapped
> >> > > >   * into.
> >> > > > @@ -1495,6 +1534,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
> >> > > >         if (IS_ERR((void *)addr))
> >> > > >                 return addr;
> >> > > >
> >> > > > +       i915_gem_obj_insert_virt_addr(to_intel_bo(obj), addr, false);
> >> > > >         args->addr_ptr = (uint64_t) addr;
> >> > > >
> >> > > >         return 0;
> >> > > > @@ -1585,6 +1625,8 @@ int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
> >> > > >                 }
> >> > > >
> >> > > >                 obj->fault_mappable = true;
> >> > > > +               i915_gem_obj_insert_virt_addr(obj,
> >> > > > +                       (unsigned long)vma->vm_start, true);
> >> > > >         } else
> >> > > >                 ret = vm_insert_pfn(vma,
> >> > > >                                     (unsigned long)vmf->virtual_address,
> >> > > > @@ -1830,6 +1872,24 @@ i915_gem_object_is_purgeable(struct drm_i915_gem_object *obj)
> >> > > >         return obj->madv == I915_MADV_DONTNEED;
> >> > > >  }
> >> > > >
> >> > > > +int i915_gem_obj_shmem_pages_alloced(struct drm_i915_gem_object *obj)
> >> > > > +{
> >> > > > +       int ret;
> >> > > > +
> >> > > > +       if (obj->base.filp) {
> >> > > > +               struct inode *inode = file_inode(obj->base.filp);
> >> > > > +               struct shmem_inode_info *info = SHMEM_I(inode);
> >> > > > +
> >> > > > +               if (!inode)
> >> > > > +                       return 0;
> >> > > > +               spin_lock(&info->lock);
> >> > > > +               ret = inode->i_mapping->nrpages;
> >> > > > +               spin_unlock(&info->lock);
> >> > > > +               return ret;
> >> > > > +       }
> >> > > > +       return 0;
> >> > > > +}
> >> > > > +
> >> > > >  /* Immediately discard the backing storage */
> >> > > >  static void
> >> > > >  i915_gem_object_truncate(struct drm_i915_gem_object *obj)
> >> > > > @@ -4447,6 +4507,79 @@ static bool discard_backing_storage(struct drm_i915_gem_object *obj)
> >> > > >         return atomic_long_read(&obj->base.filp->f_count) == 1;
> >> > > >  }
> >> > > >
> >> > > > +int
> >> > > > +i915_gem_open_object(struct drm_gem_object *gem_obj,
> >> > > > +                       struct drm_file *file_priv)
> >> > > > +{
> >> > > > +       struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
> >> > > > +       pid_t current_pid = task_tgid_nr(current);
> >> > > > +       int i, ret, free = -1;
> >> > > > +
> >> > > > +       ret = i915_mutex_lock_interruptible(gem_obj->dev);
> >> > > > +       if (ret)
> >> > > > +               return ret;
> >> > > > +
> >> > > > +       for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> >> > > > +               if (obj->pid_array[i].pid == current_pid) {
> >> > > > +                       obj->pid_array[i].open_handle_count++;
> >> > > > +                       break;
> >> > > > +               } else if (obj->pid_array[i].pid == 0)
> >> > > > +                       free = i;
> >> > > > +       }
> >> > > > +
> >> > > > +       if (i == MAX_OPEN_HANDLE) {
> >> > > > +               if (free != -1) {
> >> > > > +                       WARN_ON(obj->pid_array[free].open_handle_count);
> >> > > > +                       obj->pid_array[free].open_handle_count = 1;
> >> > > > +                       obj->pid_array[free].pid = current_pid;
> >> > > > +                       INIT_LIST_HEAD(&obj->pid_array[free].virt_addr_head);
> >> > > > +               } else
> >> > > > +                       DRM_DEBUG("Max open handle count limit: obj 0x%x\n",
> >> > > > +                                       (u32) obj);
> >> > > > +       }
> >> > > > +
> >> > > > +       mutex_unlock(&gem_obj->dev->struct_mutex);
> >> > > > +       return 0;
> >> > > > +}
> >> > > > +
> >> > > > +int
> >> > > > +i915_gem_close_object(struct drm_gem_object *gem_obj,
> >> > > > +                       struct drm_file *file_priv)
> >> > > > +{
> >> > > > +       struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
> >> > > > +       pid_t current_pid = task_tgid_nr(current);
> >> > > > +       int i, ret;
> >> > > > +
> >> > > > +       ret = i915_mutex_lock_interruptible(gem_obj->dev);
> >> > > > +       if (ret)
> >> > > > +               return ret;
> >> > > > +
> >> > > > +       for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> >> > > > +               if (obj->pid_array[i].pid == current_pid) {
> >> > > > +                       obj->pid_array[i].open_handle_count--;
> >> > > > +                       if (obj->pid_array[i].open_handle_count == 0) {
> >> > > > +                               struct drm_i915_obj_virt_addr *entry, *next;
> >> > > > +
> >> > > > +                               list_for_each_entry_safe(entry, next,
> >> > > > +                                       &obj->pid_array[i].virt_addr_head,
> >> > > > +                                       head) {
> >> > > > +                                       list_del(&entry->head);
> >> > > > +                                       kfree(entry);
> >> > > > +                               }
> >> > > > +                               obj->pid_array[i].pid = 0;
> >> > > > +                       }
> >> > > > +                       break;
> >> > > > +               }
> >> > > > +       }
> >> > > > +       if (i == MAX_OPEN_HANDLE)
> >> > > > +               DRM_DEBUG("Couldn't find matching pid %d for obj 0x%x\n",
> >> > > > +                               current_pid, (u32) obj);
> >> > > > +
> >> > > > +       mutex_unlock(&gem_obj->dev->struct_mutex);
> >> > > > +       return 0;
> >> > > > +}
> >> > > > +
> >> > > > +
> >> > > >  void i915_gem_free_object(struct drm_gem_object *gem_obj)
> >> > > >  {
> >> > > >         struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
> >> > > > @@ -5072,13 +5205,37 @@ i915_gem_file_idle_work_handler(struct work_struct *work)
> >> > > >         atomic_set(&file_priv->rps_wait_boost, false);
> >> > > >  }
> >> > > >
> >> > > > +static int i915_gem_get_pid_cmdline(struct task_struct *task, char *buffer)
> >> > > > +{
> >> > > > +       int res = 0;
> >> > > > +       unsigned int len;
> >> > > > +       struct mm_struct *mm = get_task_mm(task);
> >> > > > +
> >> > > > +       if (!mm)
> >> > > > +               goto out;
> >> > > > +       if (!mm->arg_end)
> >> > > > +               goto out_mm;
> >> > > > +
> >> > > > +       len = mm->arg_end - mm->arg_start;
> >> > > > +
> >> > > > +       if (len > PAGE_SIZE)
> >> > > > +               len = PAGE_SIZE;
> >> > > > +
> >> > > > +       res = access_process_vm(task, mm->arg_start, buffer, len, 0);
> >> > > > +
> >> > > > +       if (res > 0 && buffer[res-1] != '\0' && len < PAGE_SIZE)
> >> > > > +               buffer[res-1] = '\0';
> >> > > > +out_mm:
> >> > > > +       mmput(mm);
> >> > > > +out:
> >> > > > +       return res;
> >> > > > +}
> >> > > > +
> >> > > >  int i915_gem_open(struct drm_device *dev, struct drm_file *file)
> >> > > >  {
> >> > > >         struct drm_i915_file_private *file_priv;
> >> > > >         int ret;
> >> > > >
> >> > > > -       DRM_DEBUG_DRIVER("\n");
> >> > > > -
> >> > > >         file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
> >> > > >         if (!file_priv)
> >> > > >                 return -ENOMEM;
> >> > > > @@ -5086,6 +5243,14 @@ int i915_gem_open(struct drm_device *dev, struct drm_file *file)
> >> > > >         file->driver_priv = file_priv;
> >> > > >         file_priv->dev_priv = dev->dev_private;
> >> > > >         file_priv->file = file;
> >> > > > +       file_priv->tgid = find_vpid(task_tgid_nr(current));
> >> > > > +       file_priv->process_name =  kzalloc(PAGE_SIZE, GFP_ATOMIC);
> >> > > > +       if (!file_priv->process_name) {
> >> > > > +               kfree(file_priv);
> >> > > > +               return -ENOMEM;
> >> > > > +       }
> >> > > > +
> >> > > > +       ret = i915_gem_get_pid_cmdline(current, file_priv->process_name);
> >> > > >
> >> > > >         spin_lock_init(&file_priv->mm.lock);
> >> > > >         INIT_LIST_HEAD(&file_priv->mm.request_list);
> >> > > > diff --git a/drivers/gpu/drm/i915/i915_gem_debug.c b/drivers/gpu/drm/i915/i915_gem_debug.c
> >> > > > index f462d1b..7a42891 100644
> >> > > > --- a/drivers/gpu/drm/i915/i915_gem_debug.c
> >> > > > +++ b/drivers/gpu/drm/i915/i915_gem_debug.c
> >> > > > @@ -25,6 +25,7 @@
> >> > > >   *
> >> > > >   */
> >> > > >
> >> > > > +#include <linux/pid.h>
> >> > > >  #include <drm/drmP.h>
> >> > > >  #include <drm/i915_drm.h>
> >> > > >  #include "i915_drv.h"
> >> > > > @@ -116,3 +117,544 @@ i915_verify_lists(struct drm_device *dev)
> >> > > >         return warned = err;
> >> > > >  }
> >> > > >  #endif /* WATCH_LIST */
> >> > > > +
> >> > > > +struct per_file_obj_mem_info {
> >> > > > +       int num_obj;
> >> > > > +       int num_obj_shared;
> >> > > > +       int num_obj_private;
> >> > > > +       int num_obj_gtt_bound;
> >> > > > +       int num_obj_purged;
> >> > > > +       int num_obj_purgeable;
> >> > > > +       int num_obj_allocated;
> >> > > > +       int num_obj_fault_mappable;
> >> > > > +       int num_obj_stolen;
> >> > > > +       size_t gtt_space_allocated_shared;
> >> > > > +       size_t gtt_space_allocated_priv;
> >> > > > +       size_t phys_space_allocated_shared;
> >> > > > +       size_t phys_space_allocated_priv;
> >> > > > +       size_t phys_space_purgeable;
> >> > > > +       size_t phys_space_shared_proportion;
> >> > > > +       size_t fault_mappable_size;
> >> > > > +       size_t stolen_space_allocated;
> >> > > > +       char *process_name;
> >> > > > +};
> >> > > > +
> >> > > > +struct name_entry {
> >> > > > +       struct list_head head;
> >> > > > +       struct drm_hash_item hash_item;
> >> > > > +};
> >> > > > +
> >> > > > +struct pid_stat_entry {
> >> > > > +       struct list_head head;
> >> > > > +       struct list_head namefree;
> >> > > > +       struct drm_open_hash namelist;
> >> > > > +       struct per_file_obj_mem_info stats;
> >> > > > +       struct pid *pid;
> >> > > > +       int pid_num;
> >> > > > +};
> >> > > > +
> >> > > > +
> >> > > > +#define err_printf(e, ...) i915_error_printf(e, __VA_ARGS__)
> >> > > > +#define err_puts(e, s) i915_error_puts(e, s)
> >> > > > +
> >> > > > +static const char *get_pin_flag(struct drm_i915_gem_object *obj)
> >> > > > +{
> >> > > > +       if (obj->user_pin_count > 0)
> >> > > > +               return "P";
> >> > > > +       else if (i915_gem_obj_is_pinned(obj))
> >> > > > +               return "p";
> >> > > > +       return " ";
> >> > > > +}
> >> > > > +
> >> > > > +static const char *get_tiling_flag(struct drm_i915_gem_object *obj)
> >> > > > +{
> >> > > > +       switch (obj->tiling_mode) {
> >> > > > +       default:
> >> > > > +       case I915_TILING_NONE: return " ";
> >> > > > +       case I915_TILING_X: return "X";
> >> > > > +       case I915_TILING_Y: return "Y";
> >> > > > +       }
> >> > > > +}
> >> > > > +
> >> > > > +static int i915_obj_virt_addr_is_valid(struct drm_gem_object *obj,
> >> > > > +                               struct pid *pid, unsigned long addr)
> >> > > > +{
> >> > > > +       struct task_struct *task;
> >> > > > +       struct mm_struct *mm;
> >> > > > +       struct vm_area_struct *vma;
> >> > > > +       int locked, ret = 0;
> >> > > > +
> >> > > > +       task = get_pid_task(pid, PIDTYPE_PID);
> >> > > > +       if (task == NULL) {
> >> > > > +               DRM_DEBUG("null task for pid=%d\n", pid_nr(pid));
> >> > > > +               return -EINVAL;
> >> > > > +       }
> >> > > > +
> >> > > > +       mm = get_task_mm(task);
> >> > > > +       if (mm == NULL) {
> >> > > > +               DRM_DEBUG("null mm for pid=%d\n", pid_nr(pid));
> >> > > > +               return -EINVAL;
> >> > > > +       }
> >> > > > +
> >> > > > +       locked = down_read_trylock(&mm->mmap_sem);
> >> > > > +
> >> > > > +       vma = find_vma(mm, addr);
> >> > > > +       if (vma) {
> >> > > > +               if (addr & 1) { /* mmap_gtt case */
> >> > > > +                       if (vma->vm_pgoff*PAGE_SIZE == (unsigned long)
> >> > > > +                               drm_vma_node_offset_addr(&obj->vma_node))
> >> > > > +                               ret = 0;
> >> > > > +                       else
> >> > > > +                               ret = -EINVAL;
> >> > > > +               } else { /* mmap case */
> >> > > > +                       if (vma->vm_file == obj->filp)
> >> > > > +                               ret = 0;
> >> > > > +                       else
> >> > > > +                               ret = -EINVAL;
> >> > > > +               }
> >> > > > +       } else
> >> > > > +               ret = -EINVAL;
> >> > > > +
> >> > > > +       if (locked)
> >> > > > +               up_read(&mm->mmap_sem);
> >> > > > +
> >> > > > +       mmput(mm);
> >> > > > +       return ret;
> >> > > > +}
> >> > > > +
> >> > > > +static void i915_obj_pidarray_validate(struct drm_gem_object *gem_obj)
> >> > > > +{
> >> > > > +       struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
> >> > > > +       struct drm_device *dev = gem_obj->dev;
> >> > > > +       struct drm_i915_obj_virt_addr *entry, *next;
> >> > > > +       struct drm_file *file;
> >> > > > +       struct drm_i915_file_private *file_priv;
> >> > > > +       struct pid *tgid;
> >> > > > +       int pid_num, i, present;
> >> > > > +
> >> > > > +       /* Run a sanity check on pid_array. All entries in pid_array should
> >> > > > +        * be subset of the the drm filelist pid entries.
> >> > > > +        */
> >> > > > +       for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> >> > > > +               if (obj->pid_array[i].pid == 0)
> >> > > > +                       continue;
> >> > > > +
> >> > > > +               present = 0;
> >> > > > +               list_for_each_entry(file, &dev->filelist, lhead) {
> >> > > > +                       file_priv = file->driver_priv;
> >> > > > +                       tgid = file_priv->tgid;
> >> > > > +                       pid_num = pid_nr(tgid);
> >> > > > +
> >> > > > +                       if (pid_num == obj->pid_array[i].pid) {
> >> > > > +                               present = 1;
> >> > > > +                               break;
> >> > > > +                       }
> >> > > > +               }
> >> > > > +               if (present == 0) {
> >> > > > +                       DRM_DEBUG("stale_pid=%d\n", obj->pid_array[i].pid);
> >> > > > +                       list_for_each_entry_safe(entry, next,
> >> > > > +                                       &obj->pid_array[i].virt_addr_head,
> >> > > > +                                       head) {
> >> > > > +                               list_del(&entry->head);
> >> > > > +                               kfree(entry);
> >> > > > +                       }
> >> > > > +
> >> > > > +                       obj->pid_array[i].open_handle_count = 0;
> >> > > > +                       obj->pid_array[i].pid = 0;
> >> > > > +               } else {
> >> > > > +                       /* Validate the virtual address list */
> >> > > > +                       struct task_struct *task =
> >> > > > +                               get_pid_task(tgid, PIDTYPE_PID);
> >> > > > +                       if (task == NULL)
> >> > > > +                               continue;
> >> > > > +
> >> > > > +                       list_for_each_entry_safe(entry, next,
> >> > > > +                                       &obj->pid_array[i].virt_addr_head,
> >> > > > +                                       head) {
> >> > > > +                               if (i915_obj_virt_addr_is_valid(gem_obj, tgid,
> >> > > > +                               entry->user_virt_addr)) {
> >> > > > +                                       DRM_DEBUG("stale_addr=%ld\n",
> >> > > > +                                       entry->user_virt_addr);
> >> > > > +                                       list_del(&entry->head);
> >> > > > +                                       kfree(entry);
> >> > > > +                               }
> >> > > > +                       }
> >> > > > +               }
> >> > > > +       }
> >> > > > +}
> >> > > > +
> >> > > > +static int
> >> > > > +i915_describe_obj(struct drm_i915_error_state_buf *m,
> >> > > > +               struct drm_i915_gem_object *obj)
> >> > > > +{
> >> > > > +       int i;
> >> > > > +       struct i915_vma *vma;
> >> > > > +       struct drm_i915_obj_virt_addr *entry;
> >> > > > +
> >> > > > +       err_printf(m,
> >> > > > +               "%p: %7zdK  %s    %s     %s      %s     %s      %s       %s     ",
> >> > > > +                  &obj->base,
> >> > > > +                  obj->base.size / 1024,
> >> > > > +                  get_pin_flag(obj),
> >> > > > +                  get_tiling_flag(obj),
> >> > > > +                  obj->dirty ? "Y" : "N",
> >> > > > +                  obj->base.name ? "Y" : "N",
> >> > > > +                  (obj->userptr.mm != 0) ? "Y" : "N",
> >> > > > +                  obj->stolen ? "Y" : "N",
> >> > > > +                  (obj->pin_mappable || obj->fault_mappable) ? "Y" : "N");
> >> > > > +
> >> > > > +       if (obj->madv == __I915_MADV_PURGED)
> >> > > > +               err_printf(m, " purged    ");
> >> > > > +       else if (obj->madv == I915_MADV_DONTNEED)
> >> > > > +               err_printf(m, " purgeable   ");
> >> > > > +       else if (i915_gem_obj_shmem_pages_alloced(obj) != 0)
> >> > > > +               err_printf(m, " allocated   ");
> >> > > > +
> >> > > > +
> >> > > > +       list_for_each_entry(vma, &obj->vma_list, vma_link) {
> >> > > > +               if (!i915_is_ggtt(vma->vm))
> >> > > > +                       err_puts(m, " PP    ");
> >> > > > +               else
> >> > > > +                       err_puts(m, " G     ");
> >> > > > +               err_printf(m, "  %08lx ", vma->node.start);
> >> > > > +       }
> >> > > > +
> >> > > > +       for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> >> > > > +               if (obj->pid_array[i].pid != 0) {
> >> > > > +                       err_printf(m, " (%d: %d:",
> >> > > > +                       obj->pid_array[i].pid,
> >> > > > +                       obj->pid_array[i].open_handle_count);
> >> > > > +                       list_for_each_entry(entry,
> >> > > > +                               &obj->pid_array[i].virt_addr_head, head) {
> >> > > > +                               if (entry->user_virt_addr & 1)
> >> > > > +                                       err_printf(m, " %p",
> >> > > > +                                       (void *)(entry->user_virt_addr & ~1));
> >> > > > +                               else
> >> > > > +                                       err_printf(m, " %p*",
> >> > > > +                                       (void *)entry->user_virt_addr);
> >> > > > +                       }
> >> > > > +                       err_printf(m, ") ");
> >> > > > +               }
> >> > > > +       }
> >> > > > +
> >> > > > +       err_printf(m, "\n");
> >> > > > +
> >> > > > +       if (m->bytes == 0 && m->err)
> >> > > > +               return m->err;
> >> > > > +
> >> > > > +       return 0;
> >> > > > +}
> >> > > > +
> >> > > > +static int
> >> > > > +i915_drm_gem_obj_info(int id, void *ptr, void *data)
> >> > > > +{
> >> > > > +       struct drm_i915_gem_object *obj = ptr;
> >> > > > +       struct drm_i915_error_state_buf *m = data;
> >> > > > +       int ret;
> >> > > > +
> >> > > > +       i915_obj_pidarray_validate(&obj->base);
> >> > > > +       ret = i915_describe_obj(m, obj);
> >> > > > +
> >> > > > +       return ret;
> >> > > > +}
> >> > > > +
> >> > > > +static int
> >> > > > +i915_drm_gem_object_per_file_summary(int id, void *ptr, void *data)
> >> > > > +{
> >> > > > +       struct pid_stat_entry *pid_entry = data;
> >> > > > +       struct drm_i915_gem_object *obj = ptr;
> >> > > > +       struct per_file_obj_mem_info *stats = &pid_entry->stats;
> >> > > > +       struct drm_hash_item *hash_item;
> >> > > > +       int i, obj_shared_count = 0;
> >> > > > +
> >> > > > +       i915_obj_pidarray_validate(&obj->base);
> >> > > > +
> >> > > > +       stats->num_obj++;
> >> > > > +
> >> > > > +       if (obj->base.name) {
> >> > > > +
> >> > > > +               if (drm_ht_find_item(&pid_entry->namelist,
> >> > > > +                               (unsigned long)obj->base.name, &hash_item)) {
> >> > > > +                       struct name_entry *entry =
> >> > > > +                               kzalloc(sizeof(struct name_entry), GFP_KERNEL);
> >> > > > +                       if (entry == NULL) {
> >> > > > +                               DRM_ERROR("alloc failed\n");
> >> > > > +                               return -ENOMEM;
> >> > > > +                       }
> >> > > > +                       entry->hash_item.key = obj->base.name;
> >> > > > +                       drm_ht_insert_item(&pid_entry->namelist,
> >> > > > +                                       &entry->hash_item);
> >> > > > +                       list_add_tail(&entry->head, &pid_entry->namefree);
> >> > > > +               } else {
> >> > > > +                       DRM_DEBUG("Duplicate obj with name %d for process %s\n",
> >> > > > +                               obj->base.name, stats->process_name);
> >> > > > +                       return 0;
> >> > > > +               }
> >> > > > +               for (i = 0; i < MAX_OPEN_HANDLE; i++) {
> >> > > > +                       if (obj->pid_array[i].pid != 0)
> >> > > > +                               obj_shared_count++;
> >> > > > +               }
> >> > > > +               if (WARN_ON(obj_shared_count == 0))
> >> > > > +                       return 1;
> >> > > > +
> >> > > > +               DRM_DEBUG("Obj: %p, shared count =%d\n",
> >> > > > +                       &obj->base, obj_shared_count);
> >> > > > +
> >> > > > +               if (obj_shared_count > 1)
> >> > > > +                       stats->num_obj_shared++;
> >> > > > +               else
> >> > > > +                       stats->num_obj_private++;
> >> > > > +       } else {
> >> > > > +               obj_shared_count = 1;
> >> > > > +               stats->num_obj_private++;
> >> > > > +       }
> >> > > > +
> >> > > > +       if (i915_gem_obj_bound_any(obj)) {
> >> > > > +               stats->num_obj_gtt_bound++;
> >> > > > +               if (obj_shared_count > 1)
> >> > > > +                       stats->gtt_space_allocated_shared += obj->base.size;
> >> > > > +               else
> >> > > > +                       stats->gtt_space_allocated_priv += obj->base.size;
> >> > > > +       }
> >> > > > +
> >> > > > +       if (obj->stolen) {
> >> > > > +               stats->num_obj_stolen++;
> >> > > > +               stats->stolen_space_allocated += obj->base.size;
> >> > > > +       } else if (obj->madv == __I915_MADV_PURGED) {
> >> > > > +               stats->num_obj_purged++;
> >> > > > +       } else if (obj->madv == I915_MADV_DONTNEED) {
> >> > > > +               stats->num_obj_purgeable++;
> >> > > > +               stats->num_obj_allocated++;
> >> > > > +               if (i915_gem_obj_shmem_pages_alloced(obj) != 0) {
> >> > > > +                       stats->phys_space_purgeable += obj->base.size;
> >> > > > +                       if (obj_shared_count > 1) {
> >> > > > +                               stats->phys_space_allocated_shared +=
> >> > > > +                                       obj->base.size;
> >> > > > +                               stats->phys_space_shared_proportion +=
> >> > > > +                                       obj->base.size/obj_shared_count;
> >> > > > +                       } else
> >> > > > +                               stats->phys_space_allocated_priv +=
> >> > > > +                                       obj->base.size;
> >> > > > +               } else
> >> > > > +                       WARN_ON(1);
> >> > > > +       } else if (i915_gem_obj_shmem_pages_alloced(obj) != 0) {
> >> > > > +               stats->num_obj_allocated++;
> >> > > > +                       if (obj_shared_count > 1) {
> >> > > > +                               stats->phys_space_allocated_shared +=
> >> > > > +                                       obj->base.size;
> >> > > > +                               stats->phys_space_shared_proportion +=
> >> > > > +                                       obj->base.size/obj_shared_count;
> >> > > > +                       }
> >> > > > +               else
> >> > > > +                       stats->phys_space_allocated_priv += obj->base.size;
> >> > > > +       }
> >> > > > +       if (obj->fault_mappable) {
> >> > > > +               stats->num_obj_fault_mappable++;
> >> > > > +               stats->fault_mappable_size += obj->base.size;
> >> > > > +       }
> >> > > > +       return 0;
> >> > > > +}
> >> > > > +
> >> > > > +int i915_get_drm_clients_info(struct drm_i915_error_state_buf *m,
> >> > > > +                       struct drm_device *dev)
> >> > > > +{
> >> > > > +       struct drm_file *file;
> >> > > > +       struct drm_i915_private *dev_priv = dev->dev_private;
> >> > > > +
> >> > > > +       struct name_entry *entry, *next;
> >> > > > +       struct pid_stat_entry *pid_entry, *temp_entry;
> >> > > > +       struct pid_stat_entry *new_pid_entry, *new_temp_entry;
> >> > > > +       struct list_head per_pid_stats, sorted_pid_stats;
> >> > > > +       int ret = 0, total_shared_prop_space = 0, total_priv_space = 0;
> >> > > > +
> >> > > > +       INIT_LIST_HEAD(&per_pid_stats);
> >> > > > +       INIT_LIST_HEAD(&sorted_pid_stats);
> >> > > > +
> >> > > > +       err_printf(m,
> >> > > > +               "\n\n  pid   Total  Shared  Priv   Purgeable  Alloced  SharedPHYsize   SharedPHYprop    PrivPHYsize   PurgeablePHYsize   process\n");
> >> > > > +
> >> > > > +       /* Protect the access to global drm resources such as filelist. Protect
> >> > > > +        * against their removal under our noses, while in use.
> >> > > > +        */
> >> > > > +       mutex_lock(&drm_global_mutex);
> >> > > > +       ret = i915_mutex_lock_interruptible(dev);
> >> > > > +       if (ret) {
> >> > > > +               mutex_unlock(&drm_global_mutex);
> >> > > > +               return ret;
> >> > > > +       }
> >> > > > +
> >> > > > +       list_for_each_entry(file, &dev->filelist, lhead) {
> >> > > > +               struct pid *tgid;
> >> > > > +               struct drm_i915_file_private *file_priv = file->driver_priv;
> >> > > > +               int pid_num, found = 0;
> >> > > > +
> >> > > > +               tgid = file_priv->tgid;
> >> > > > +               pid_num = pid_nr(tgid);
> >> > > > +
> >> > > > +               list_for_each_entry(pid_entry, &per_pid_stats, head) {
> >> > > > +                       if (pid_entry->pid_num == pid_num) {
> >> > > > +                               found = 1;
> >> > > > +                               break;
> >> > > > +                       }
> >> > > > +               }
> >> > > > +
> >> > > > +               if (!found) {
> >> > > > +                       struct pid_stat_entry *new_entry =
> >> > > > +                               kzalloc(sizeof(struct pid_stat_entry),
> >> > > > +                                       GFP_KERNEL);
> >> > > > +                       if (new_entry == NULL) {
> >> > > > +                               DRM_ERROR("alloc failed\n");
> >> > > > +                               ret = -ENOMEM;
> >> > > > +                               goto out_unlock;
> >> > > > +                       }
> >> > > > +                       new_entry->pid = tgid;
> >> > > > +                       new_entry->pid_num = pid_num;
> >> > > > +                       list_add_tail(&new_entry->head, &per_pid_stats);
> >> > > > +                       drm_ht_create(&new_entry->namelist,
> >> > > > +                               DRM_MAGIC_HASH_ORDER);
> >> > > > +                       INIT_LIST_HEAD(&new_entry->namefree);
> >> > > > +                       new_entry->stats.process_name = file_priv->process_name;
> >> > > > +                       pid_entry = new_entry;
> >> > > > +               }
> >> > > > +
> >> > > > +               ret = idr_for_each(&file->object_idr,
> >> > > > +                       &i915_drm_gem_object_per_file_summary, pid_entry);
> >> > > > +               if (ret)
> >> > > > +                       break;
> >> > > > +       }
> >> > > > +
> >> > > > +       list_for_each_entry_safe(pid_entry, temp_entry, &per_pid_stats, head) {
> >> > > > +               if (list_empty(&sorted_pid_stats)) {
> >> > > > +                       list_del(&pid_entry->head);
> >> > > > +                       list_add_tail(&pid_entry->head, &sorted_pid_stats);
> >> > > > +                       continue;
> >> > > > +               }
> >> > > > +
> >> > > > +               list_for_each_entry_safe(new_pid_entry, new_temp_entry,
> >> > > > +                       &sorted_pid_stats, head) {
> >> > > > +                       int prev_space =
> >> > > > +                               pid_entry->stats.phys_space_shared_proportion +
> >> > > > +                               pid_entry->stats.phys_space_allocated_priv;
> >> > > > +                       int new_space =
> >> > > > +                               new_pid_entry->
> >> > > > +                               stats.phys_space_shared_proportion +
> >> > > > +                               new_pid_entry->stats.phys_space_allocated_priv;
> >> > > > +                       if (prev_space > new_space) {
> >> > > > +                               list_del(&pid_entry->head);
> >> > > > +                               list_add_tail(&pid_entry->head,
> >> > > > +                                       &new_pid_entry->head);
> >> > > > +                               break;
> >> > > > +                       }
> >> > > > +                       if (list_is_last(&new_pid_entry->head,
> >> > > > +                               &sorted_pid_stats)) {
> >> > > > +                               list_del(&pid_entry->head);
> >> > > > +                               list_add_tail(&pid_entry->head,
> >> > > > +                                               &sorted_pid_stats);
> >> > > > +                       }
> >> > > > +               }
> >> > > > +       }
> >> > > > +
> >> > > > +       list_for_each_entry_safe(pid_entry, temp_entry,
> >> > > > +                               &sorted_pid_stats, head) {
> >> > > > +               struct task_struct *task = get_pid_task(pid_entry->pid,
> >> > > > +                                                       PIDTYPE_PID);
> >> > > > +               err_printf(m,
> >> > > > +                       "%5d %6d %6d %6d %9d %8d %14zdK %14zdK %14zdK  %14zdK     %s",
> >> > > > +                          pid_entry->pid_num,
> >> > > > +                          pid_entry->stats.num_obj,
> >> > > > +                          pid_entry->stats.num_obj_shared,
> >> > > > +                          pid_entry->stats.num_obj_private,
> >> > > > +                          pid_entry->stats.num_obj_purgeable,
> >> > > > +                          pid_entry->stats.num_obj_allocated,
> >> > > > +                          pid_entry->stats.phys_space_allocated_shared/1024,
> >> > > > +                          pid_entry->stats.phys_space_shared_proportion/1024,
> >> > > > +                          pid_entry->stats.phys_space_allocated_priv/1024,
> >> > > > +                          pid_entry->stats.phys_space_purgeable/1024,
> >> > > > +                          pid_entry->stats.process_name);
> >> > > > +
> >> > > > +               if (task == NULL)
> >> > > > +                       err_printf(m, "*\n");
> >> > > > +               else
> >> > > > +                       err_printf(m, "\n");
> >> > > > +
> >> > > > +               total_shared_prop_space +=
> >> > > > +                       pid_entry->stats.phys_space_shared_proportion/1024;
> >> > > > +               total_priv_space +=
> >> > > > +                       pid_entry->stats.phys_space_allocated_priv/1024;
> >> > > > +               list_del(&pid_entry->head);
> >> > > > +
> >> > > > +               list_for_each_entry_safe(entry, next,
> >> > > > +                                       &pid_entry->namefree, head) {
> >> > > > +                       list_del(&entry->head);
> >> > > > +                       drm_ht_remove_item(&pid_entry->namelist,
> >> > > > +                                       &entry->hash_item);
> >> > > > +                       kfree(entry);
> >> > > > +               }
> >> > > > +               drm_ht_remove(&pid_entry->namelist);
> >> > > > +               kfree(pid_entry);
> >> > > > +       }
> >> > > > +
> >> > > > +       err_printf(m,
> >> > > > +               "\t\t\t\t\t\t\t\t--------------\t-------------\t--------\n");
> >> > > > +       err_printf(m,
> >> > > > +               "\t\t\t\t\t\t\t\t%13zdK\t%12zdK\tTotal\n",
> >> > > > +                       total_shared_prop_space, total_priv_space);
> >> > > > +
> >> > > > +out_unlock:
> >> > > > +       mutex_unlock(&dev->struct_mutex);
> >> > > > +       mutex_unlock(&drm_global_mutex);
> >> > > > +
> >> > > > +       if (ret)
> >> > > > +               return ret;
> >> > > > +       if (m->bytes == 0 && m->err)
> >> > > > +               return m->err;
> >> > > > +
> >> > > > +       return 0;
> >> > > > +}
> >> > > > +
> >> > > > +int i915_gem_get_all_obj_info(struct drm_i915_error_state_buf *m,
> >> > > > +                       struct drm_device *dev)
> >> > > > +{
> >> > > > +       struct drm_file *file;
> >> > > > +       int pid_num, ret = 0;
> >> > > > +
> >> > > > +       /* Protect the access to global drm resources such as filelist. Protect
> >> > > > +        * against their removal under our noses, while in use.
> >> > > > +        */
> >> > > > +       mutex_lock(&drm_global_mutex);
> >> > > > +       ret = i915_mutex_lock_interruptible(dev);
> >> > > > +       if (ret) {
> >> > > > +               mutex_unlock(&drm_global_mutex);
> >> > > > +               return ret;
> >> > > > +       }
> >> > > > +
> >> > > > +       list_for_each_entry(file, &dev->filelist, lhead) {
> >> > > > +               struct pid *tgid;
> >> > > > +               struct drm_i915_file_private *file_priv = file->driver_priv;
> >> > > > +
> >> > > > +               tgid = file_priv->tgid;
> >> > > > +               pid_num = pid_nr(tgid);
> >> > > > +
> >> > > > +               err_printf(m, "\n\n  PID  process\n");
> >> > > > +
> >> > > > +               err_printf(m, "%5d  %s\n",
> >> > > > +                          pid_num, file_priv->process_name);
> >> > > > +
> >> > > > +               err_printf(m,
> >> > > > +                       "\n Obj Identifier       Size Pin Tiling Dirty Shared Vmap Stolen Mappable  AllocState Global/PP  GttOffset (PID: handle count: user virt addrs)\n");
> >> > > > +               ret = idr_for_each(&file->object_idr,
> >> > > > +                               &i915_drm_gem_obj_info, m);
> >> > > > +               if (ret)
> >> > > > +                       break;
> >> > > > +       }
> >> > > > +       mutex_unlock(&dev->struct_mutex);
> >> > > > +       mutex_unlock(&drm_global_mutex);
> >> > > > +
> >> > > > +       if (ret)
> >> > > > +               return ret;
> >> > > > +       if (m->bytes == 0 && m->err)
> >> > > > +               return m->err;
> >> > > > +
> >> > > > +       return 0;
> >> > > > +}
> >> > > > +
> >> > > > diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
> >> > > > index 2c87a79..089c7df 100644
> >> > > > --- a/drivers/gpu/drm/i915/i915_gpu_error.c
> >> > > > +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
> >> > > > @@ -161,7 +161,7 @@ static void i915_error_vprintf(struct drm_i915_error_state_buf *e,
> >> > > >         __i915_error_advance(e, len);
> >> > > >  }
> >> > > >
> >> > > > -static void i915_error_puts(struct drm_i915_error_state_buf *e,
> >> > > > +void i915_error_puts(struct drm_i915_error_state_buf *e,
> >> > > >                             const char *str)
> >> > > >  {
> >> > > >         unsigned len;
> >> > > > diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
> >> > > > index 503847f..b204c92 100644
> >> > > > --- a/drivers/gpu/drm/i915/i915_sysfs.c
> >> > > > +++ b/drivers/gpu/drm/i915/i915_sysfs.c
> >> > > > @@ -582,6 +582,64 @@ static ssize_t error_state_write(struct file *file, struct kobject *kobj,
> >> > > >         return count;
> >> > > >  }
> >> > > >
> >> > > > +static ssize_t i915_gem_clients_state_read(struct file *filp,
> >> > > > +                               struct kobject *kobj,
> >> > > > +                               struct bin_attribute *attr,
> >> > > > +                               char *buf, loff_t off, size_t count)
> >> > > > +{
> >> > > > +       struct device *kdev = container_of(kobj, struct device, kobj);
> >> > > > +       struct drm_minor *minor = dev_to_drm_minor(kdev);
> >> > > > +       struct drm_device *dev = minor->dev;
> >> > > > +       struct drm_i915_error_state_buf error_str;
> >> > > > +       ssize_t ret_count = 0;
> >> > > > +       int ret;
> >> > > > +
> >> > > > +       ret = i915_error_state_buf_init(&error_str, to_i915(dev), count, off);
> >> > > > +       if (ret)
> >> > > > +               return ret;
> >> > > > +
> >> > > > +       ret = i915_get_drm_clients_info(&error_str, dev);
> >> > > > +       if (ret)
> >> > > > +               goto out;
> >> > > > +
> >> > > > +       ret_count = count < error_str.bytes ? count : error_str.bytes;
> >> > > > +
> >> > > > +       memcpy(buf, error_str.buf, ret_count);
> >> > > > +out:
> >> > > > +       i915_error_state_buf_release(&error_str);
> >> > > > +
> >> > > > +       return ret ?: ret_count;
> >> > > > +}
> >> > > > +
> >> > > > +static ssize_t i915_gem_objects_state_read(struct file *filp,
> >> > > > +                               struct kobject *kobj,
> >> > > > +                               struct bin_attribute *attr,
> >> > > > +                               char *buf, loff_t off, size_t count)
> >> > > > +{
> >> > > > +       struct device *kdev = container_of(kobj, struct device, kobj);
> >> > > > +       struct drm_minor *minor = dev_to_drm_minor(kdev);
> >> > > > +       struct drm_device *dev = minor->dev;
> >> > > > +       struct drm_i915_error_state_buf error_str;
> >> > > > +       ssize_t ret_count = 0;
> >> > > > +       int ret;
> >> > > > +
> >> > > > +       ret = i915_error_state_buf_init(&error_str, to_i915(dev), count, off);
> >> > > > +       if (ret)
> >> > > > +               return ret;
> >> > > > +
> >> > > > +       ret = i915_gem_get_all_obj_info(&error_str, dev);
> >> > > > +       if (ret)
> >> > > > +               goto out;
> >> > > > +
> >> > > > +       ret_count = count < error_str.bytes ? count : error_str.bytes;
> >> > > > +
> >> > > > +       memcpy(buf, error_str.buf, ret_count);
> >> > > > +out:
> >> > > > +       i915_error_state_buf_release(&error_str);
> >> > > > +
> >> > > > +       return ret ?: ret_count;
> >> > > > +}
> >> > > > +
> >> > > >  static struct bin_attribute error_state_attr = {
> >> > > >         .attr.name = "error",
> >> > > >         .attr.mode = S_IRUSR | S_IWUSR,
> >> > > > @@ -590,6 +648,20 @@ static struct bin_attribute error_state_attr = {
> >> > > >         .write = error_state_write,
> >> > > >  };
> >> > > >
> >> > > > +static struct bin_attribute i915_gem_client_state_attr = {
> >> > > > +       .attr.name = "i915_gem_meminfo",
> >> > > > +       .attr.mode = S_IRUSR | S_IWUSR,
> >> > > > +       .size = 0,
> >> > > > +       .read = i915_gem_clients_state_read,
> >> > > > +};
> >> > > > +
> >> > > > +static struct bin_attribute i915_gem_objects_state_attr = {
> >> > > > +       .attr.name = "i915_gem_objinfo",
> >> > > > +       .attr.mode = S_IRUSR | S_IWUSR,
> >> > > > +       .size = 0,
> >> > > > +       .read = i915_gem_objects_state_read,
> >> > > > +};
> >> > > > +
> >> > > >  void i915_setup_sysfs(struct drm_device *dev)
> >> > > >  {
> >> > > >         int ret;
> >> > > > @@ -627,6 +699,17 @@ void i915_setup_sysfs(struct drm_device *dev)
> >> > > >                                     &error_state_attr);
> >> > > >         if (ret)
> >> > > >                 DRM_ERROR("error_state sysfs setup failed\n");
> >> > > > +
> >> > > > +       ret = sysfs_create_bin_file(&dev->primary->kdev->kobj,
> >> > > > +                                   &i915_gem_client_state_attr);
> >> > > > +       if (ret)
> >> > > > +               DRM_ERROR("i915_gem_client_state sysfs setup failed\n");
> >> > > > +
> >> > > > +       ret = sysfs_create_bin_file(&dev->primary->kdev->kobj,
> >> > > > +                                   &i915_gem_objects_state_attr);
> >> > > > +       if (ret)
> >> > > > +               DRM_ERROR("i915_gem_objects_state sysfs setup failed\n");
> >> > > > +
> >> > > >  }
> >> > > >
> >> > > >  void i915_teardown_sysfs(struct drm_device *dev)
> >> > > > --
> >> > > > 1.8.5.1
> >> > > >
> >> > > > _______________________________________________
> >> > > > Intel-gfx mailing list
> >> > > > Intel-gfx@lists.freedesktop.org
> >> > > > http://lists.freedesktop.org/mailman/listinfo/intel-gfx
> >> > >
> >> >
> >>
> >
> 
> 
>
Daniel Vetter Sept. 4, 2014, 12:40 p.m. UTC | #6
On Thu, Sep 04, 2014 at 11:52:15AM +0000, Gupta, Sourab wrote:
> On Thu, 2014-09-04 at 10:01 +0000, Daniel Vetter wrote:
> > Interface design discussions should happen in public (so that
> > non-intel people can jump in, which happens rather often for other
> > drivers actually). But at least include internal mailing lists next
> > time around. Also adding dri-devel.
> > 
> > The problem I see with your approach is that "process-wise" is not a
> > solid concept with drm. We can dump information per open drm file, but
> > that file descriptor can be shared between processes. And the latest
> > generation of linux compositor protocols (like dri3) actually take
> > advantage of this.
> 
> By "process-wise" sharing, do you mean the sharing of the drm file
> across different processes (having different tgids), or sharing
> across the threads of a single process (having the same tgid)?
> Sorry, we are not aware of the drm file being shared across processes in
> the dri3 protocols; in Android userspace we have not come across such a
> scenario. Can you please shed some light on it?
> 
> In our design, we have a tgid-based accounting mechanism. As long as the
> drm file is shared within the threads of the same process, its resources
> (objects and memory) are accounted together. But if the drm file is
> shared across different processes (different tgids), this case is still an
> open question.
> Will our tgid-based accounting also cover the dri3 usecases (if they
> share the drm file within the same tgid)?

Well in unix a file descriptor is simply not tied to a process/thread at
all, so if you expose accounting data for resources which are tied to file
descriptors then that doesn't work. E.g.
- fork inherits all the file descriptors from its parent, same for exec
- you can pass file descriptors explicitly between processes over unix
  domain sockets (this is what dri3 does).

So if you'd use the tgid of the process that opened the file you'd account
everything to the X server with dri3. Which is not really useful.
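
A minimal sketch of that fd passing (only the SCM_RIGHTS mechanism, not
the dri3 code itself; 'sock' is assumed to be an already connected
AF_UNIX socket):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int send_fd(int sock, int fd)
{
	char dummy = 'x';
	struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
	union {
		char buf[CMSG_SPACE(sizeof(int))];
		struct cmsghdr align;
	} u;
	struct msghdr msg = { 0 };
	struct cmsghdr *cmsg;

	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = u.buf;
	msg.msg_controllen = sizeof(u.buf);

	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

	/*
	 * The receiver gets its own fd number referring to the same open
	 * file description, i.e. the same drm_file in the kernel, which is
	 * why accounting by the tgid of the opener would attribute every
	 * buffer to the process that originally opened the device.
	 */
	return sendmsg(sock, &msg, 0);
}

The receiving side picks the fd out of the control message with recvmsg()
in the same way.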

Cheers, Daniel
diff mbox

Patch

diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
index a58fed9..7ea3250 100644
--- a/drivers/gpu/drm/i915/i915_dma.c
+++ b/drivers/gpu/drm/i915/i915_dma.c
@@ -1985,6 +1985,7 @@  void i915_driver_postclose(struct drm_device *dev, struct drm_file *file)
 {
 	struct drm_i915_file_private *file_priv = file->driver_priv;
 
+	kfree(file_priv->process_name);
 	if (file_priv && file_priv->bsd_ring)
 		file_priv->bsd_ring = NULL;
 	kfree(file_priv);
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 1d6d9ac..9bee20e 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1628,6 +1628,8 @@  static struct drm_driver driver = {
 	.debugfs_init = i915_debugfs_init,
 	.debugfs_cleanup = i915_debugfs_cleanup,
 #endif
+	.gem_open_object = i915_gem_open_object,
+	.gem_close_object = i915_gem_close_object,
 	.gem_free_object = i915_gem_free_object,
 	.gem_vm_ops = &i915_gem_vm_ops,
 
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 36f3da6..43ba7c4 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1765,6 +1765,11 @@  struct drm_i915_gem_object_ops {
 #define INTEL_FRONTBUFFER_ALL_MASK(pipe) \
 	(0xf << (INTEL_FRONTBUFFER_BITS_PER_PIPE * (pipe)))
 
+struct drm_i915_obj_virt_addr {
+	struct list_head head;
+	unsigned long user_virt_addr;
+};
+
 struct drm_i915_gem_object {
 	struct drm_gem_object base;
 
@@ -1890,6 +1895,13 @@  struct drm_i915_gem_object {
 			struct work_struct *work;
 		} userptr;
 	};
+
+#define MAX_OPEN_HANDLE 20
+	struct {
+		struct list_head virt_addr_head;
+		pid_t pid;
+		int open_handle_count;
+	} pid_array[MAX_OPEN_HANDLE];
 };
 #define to_intel_bo(x) container_of(x, struct drm_i915_gem_object, base)
 
@@ -1940,6 +1952,8 @@  struct drm_i915_gem_request {
 struct drm_i915_file_private {
 	struct drm_i915_private *dev_priv;
 	struct drm_file *file;
+	char *process_name;
+	struct pid *tgid;
 
 	struct {
 		spinlock_t lock;
@@ -2370,6 +2384,10 @@  void i915_init_vm(struct drm_i915_private *dev_priv,
 		  struct i915_address_space *vm);
 void i915_gem_free_object(struct drm_gem_object *obj);
 void i915_gem_vma_destroy(struct i915_vma *vma);
+int i915_gem_open_object(struct drm_gem_object *gem_obj,
+			struct drm_file *file_priv);
+int i915_gem_close_object(struct drm_gem_object *gem_obj,
+			struct drm_file *file_priv);
 
 #define PIN_MAPPABLE 0x1
 #define PIN_NONBLOCK 0x2
@@ -2420,6 +2438,8 @@  int i915_gem_dumb_create(struct drm_file *file_priv,
 			 struct drm_mode_create_dumb *args);
 int i915_gem_mmap_gtt(struct drm_file *file_priv, struct drm_device *dev,
 		      uint32_t handle, uint64_t *offset);
+int i915_gem_obj_shmem_pages_alloced(struct drm_i915_gem_object *obj);
+
 /**
  * Returns true if seq1 is later than seq2.
  */
@@ -2686,6 +2706,10 @@  int i915_verify_lists(struct drm_device *dev);
 #else
 #define i915_verify_lists(dev) 0
 #endif
+int i915_get_drm_clients_info(struct drm_i915_error_state_buf *m,
+				struct drm_device *dev);
+int i915_gem_get_all_obj_info(struct drm_i915_error_state_buf *m,
+				struct drm_device *dev);
 
 /* i915_debugfs.c */
 int i915_debugfs_init(struct drm_minor *minor);
@@ -2699,6 +2723,8 @@  static inline void intel_display_crc_init(struct drm_device *dev) {}
 /* i915_gpu_error.c */
 __printf(2, 3)
 void i915_error_printf(struct drm_i915_error_state_buf *e, const char *f, ...);
+void i915_error_puts(struct drm_i915_error_state_buf *e,
+			    const char *str);
 int i915_error_state_to_str(struct drm_i915_error_state_buf *estr,
 			    const struct i915_error_state_file_priv *error);
 int i915_error_state_buf_init(struct drm_i915_error_state_buf *eb,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 6c68570..3c36486 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1461,6 +1461,45 @@  unlock:
 	return ret;
 }
 
+static void
+i915_gem_obj_insert_virt_addr(struct drm_i915_gem_object *obj,
+				unsigned long addr,
+				bool is_map_gtt)
+{
+	pid_t current_pid = task_tgid_nr(current);
+	int i, found = 0;
+
+	if (is_map_gtt)
+		addr |= 1;
+
+	for (i = 0; i < MAX_OPEN_HANDLE; i++) {
+		if (obj->pid_array[i].pid == current_pid) {
+			struct drm_i915_obj_virt_addr *entry, *new_entry;
+
+			list_for_each_entry(entry,
+					    &obj->pid_array[i].virt_addr_head,
+					    head) {
+				if (entry->user_virt_addr == addr) {
+					found = 1;
+					break;
+				}
+			}
+			if (found)
+				break;
+			new_entry = kzalloc
+				(sizeof(struct drm_i915_obj_virt_addr),
+				GFP_KERNEL);
+			new_entry->user_virt_addr = addr;
+			list_add_tail(&new_entry->head,
+				&obj->pid_array[i].virt_addr_head);
+			break;
+		}
+	}
+	if (i == MAX_OPEN_HANDLE)
+		DRM_DEBUG("Couldn't find matching pid %d for obj 0x%x\n",
+			current_pid, (u32) obj);
+}
+
 /**
  * Maps the contents of an object, returning the address it is mapped
  * into.
@@ -1495,6 +1534,7 @@  i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
 	if (IS_ERR((void *)addr))
 		return addr;
 
+	i915_gem_obj_insert_virt_addr(to_intel_bo(obj), addr, false);
 	args->addr_ptr = (uint64_t) addr;
 
 	return 0;
@@ -1585,6 +1625,8 @@  int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 		}
 
 		obj->fault_mappable = true;
+		i915_gem_obj_insert_virt_addr(obj,
+			(unsigned long)vma->vm_start, true);
 	} else
 		ret = vm_insert_pfn(vma,
 				    (unsigned long)vmf->virtual_address,
@@ -1830,6 +1872,24 @@  i915_gem_object_is_purgeable(struct drm_i915_gem_object *obj)
 	return obj->madv == I915_MADV_DONTNEED;
 }
 
+int i915_gem_obj_shmem_pages_alloced(struct drm_i915_gem_object *obj)
+{
+	int ret;
+
+	if (obj->base.filp) {
+		struct inode *inode = file_inode(obj->base.filp);
+		struct shmem_inode_info *info = SHMEM_I(inode);
+
+		if (!inode)
+			return 0;
+		spin_lock(&info->lock);
+		ret = inode->i_mapping->nrpages;
+		spin_unlock(&info->lock);
+		return ret;
+	}
+	return 0;
+}
+
 /* Immediately discard the backing storage */
 static void
 i915_gem_object_truncate(struct drm_i915_gem_object *obj)
@@ -4447,6 +4507,79 @@  static bool discard_backing_storage(struct drm_i915_gem_object *obj)
 	return atomic_long_read(&obj->base.filp->f_count) == 1;
 }
 
+int
+i915_gem_open_object(struct drm_gem_object *gem_obj,
+			struct drm_file *file_priv)
+{
+	struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
+	pid_t current_pid = task_tgid_nr(current);
+	int i, ret, free = -1;
+
+	ret = i915_mutex_lock_interruptible(gem_obj->dev);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < MAX_OPEN_HANDLE; i++) {
+		if (obj->pid_array[i].pid == current_pid) {
+			obj->pid_array[i].open_handle_count++;
+			break;
+		} else if (obj->pid_array[i].pid == 0)
+			free = i;
+	}
+
+	if (i == MAX_OPEN_HANDLE) {
+		if (free != -1) {
+			WARN_ON(obj->pid_array[free].open_handle_count);
+			obj->pid_array[free].open_handle_count = 1;
+			obj->pid_array[free].pid = current_pid;
+			INIT_LIST_HEAD(&obj->pid_array[free].virt_addr_head);
+		} else
+			DRM_DEBUG("Max open handle count limit: obj 0x%x\n",
+					(u32) obj);
+	}
+
+	mutex_unlock(&gem_obj->dev->struct_mutex);
+	return 0;
+}
+
+int
+i915_gem_close_object(struct drm_gem_object *gem_obj,
+			struct drm_file *file_priv)
+{
+	struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
+	pid_t current_pid = task_tgid_nr(current);
+	int i, ret;
+
+	ret = i915_mutex_lock_interruptible(gem_obj->dev);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < MAX_OPEN_HANDLE; i++) {
+		if (obj->pid_array[i].pid == current_pid) {
+			obj->pid_array[i].open_handle_count--;
+			if (obj->pid_array[i].open_handle_count == 0) {
+				struct drm_i915_obj_virt_addr *entry, *next;
+
+				list_for_each_entry_safe(entry, next,
+					&obj->pid_array[i].virt_addr_head,
+					head) {
+					list_del(&entry->head);
+					kfree(entry);
+				}
+				obj->pid_array[i].pid = 0;
+			}
+			break;
+		}
+	}
+	if (i == MAX_OPEN_HANDLE)
+		DRM_DEBUG("Couldn't find matching pid %d for obj 0x%x\n",
+				current_pid, (u32) obj);
+
+	mutex_unlock(&gem_obj->dev->struct_mutex);
+	return 0;
+}
+
+
 void i915_gem_free_object(struct drm_gem_object *gem_obj)
 {
 	struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
@@ -5072,13 +5205,37 @@  i915_gem_file_idle_work_handler(struct work_struct *work)
 	atomic_set(&file_priv->rps_wait_boost, false);
 }
 
+static int i915_gem_get_pid_cmdline(struct task_struct *task, char *buffer)
+{
+	int res = 0;
+	unsigned int len;
+	struct mm_struct *mm = get_task_mm(task);
+
+	if (!mm)
+		goto out;
+	if (!mm->arg_end)
+		goto out_mm;
+
+	len = mm->arg_end - mm->arg_start;
+
+	if (len > PAGE_SIZE)
+		len = PAGE_SIZE;
+
+	res = access_process_vm(task, mm->arg_start, buffer, len, 0);
+
+	if (res > 0 && buffer[res-1] != '\0' && len < PAGE_SIZE)
+		buffer[res-1] = '\0';
+out_mm:
+	mmput(mm);
+out:
+	return res;
+}
+
 int i915_gem_open(struct drm_device *dev, struct drm_file *file)
 {
 	struct drm_i915_file_private *file_priv;
 	int ret;
 
-	DRM_DEBUG_DRIVER("\n");
-
 	file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
 	if (!file_priv)
 		return -ENOMEM;
@@ -5086,6 +5243,14 @@  int i915_gem_open(struct drm_device *dev, struct drm_file *file)
 	file->driver_priv = file_priv;
 	file_priv->dev_priv = dev->dev_private;
 	file_priv->file = file;
+	file_priv->tgid = find_vpid(task_tgid_nr(current));
+	file_priv->process_name =  kzalloc(PAGE_SIZE, GFP_ATOMIC);
+	if (!file_priv->process_name) {
+		kfree(file_priv);
+		return -ENOMEM;
+	}
+
+	ret = i915_gem_get_pid_cmdline(current, file_priv->process_name);
 
 	spin_lock_init(&file_priv->mm.lock);
 	INIT_LIST_HEAD(&file_priv->mm.request_list);
diff --git a/drivers/gpu/drm/i915/i915_gem_debug.c b/drivers/gpu/drm/i915/i915_gem_debug.c
index f462d1b..7a42891 100644
--- a/drivers/gpu/drm/i915/i915_gem_debug.c
+++ b/drivers/gpu/drm/i915/i915_gem_debug.c
@@ -25,6 +25,7 @@ 
  *
  */
 
+#include <linux/pid.h>
 #include <drm/drmP.h>
 #include <drm/i915_drm.h>
 #include "i915_drv.h"
@@ -116,3 +117,544 @@  i915_verify_lists(struct drm_device *dev)
 	return warned = err;
 }
 #endif /* WATCH_LIST */
+
+struct per_file_obj_mem_info {
+	int num_obj;
+	int num_obj_shared;
+	int num_obj_private;
+	int num_obj_gtt_bound;
+	int num_obj_purged;
+	int num_obj_purgeable;
+	int num_obj_allocated;
+	int num_obj_fault_mappable;
+	int num_obj_stolen;
+	size_t gtt_space_allocated_shared;
+	size_t gtt_space_allocated_priv;
+	size_t phys_space_allocated_shared;
+	size_t phys_space_allocated_priv;
+	size_t phys_space_purgeable;
+	size_t phys_space_shared_proportion;
+	size_t fault_mappable_size;
+	size_t stolen_space_allocated;
+	char *process_name;
+};
+
+struct name_entry {
+	struct list_head head;
+	struct drm_hash_item hash_item;
+};
+
+struct pid_stat_entry {
+	struct list_head head;
+	struct list_head namefree;
+	struct drm_open_hash namelist;
+	struct per_file_obj_mem_info stats;
+	struct pid *pid;
+	int pid_num;
+};
+
+
+#define err_printf(e, ...) i915_error_printf(e, __VA_ARGS__)
+#define err_puts(e, s) i915_error_puts(e, s)
+
+static const char *get_pin_flag(struct drm_i915_gem_object *obj)
+{
+	if (obj->user_pin_count > 0)
+		return "P";
+	else if (i915_gem_obj_is_pinned(obj))
+		return "p";
+	return " ";
+}
+
+static const char *get_tiling_flag(struct drm_i915_gem_object *obj)
+{
+	switch (obj->tiling_mode) {
+	default:
+	case I915_TILING_NONE: return " ";
+	case I915_TILING_X: return "X";
+	case I915_TILING_Y: return "Y";
+	}
+}
+
+static int i915_obj_virt_addr_is_valid(struct drm_gem_object *obj,
+				struct pid *pid, unsigned long addr)
+{
+	struct task_struct *task;
+	struct mm_struct *mm;
+	struct vm_area_struct *vma;
+	int locked, ret = 0;
+
+	task = get_pid_task(pid, PIDTYPE_PID);
+	if (task == NULL) {
+		DRM_DEBUG("null task for pid=%d\n", pid_nr(pid));
+		return -EINVAL;
+	}
+
+	mm = get_task_mm(task);
+	if (mm == NULL) {
+		DRM_DEBUG("null mm for pid=%d\n", pid_nr(pid));
+		return -EINVAL;
+	}
+
+	locked = down_read_trylock(&mm->mmap_sem);
+
+	vma = find_vma(mm, addr);
+	if (vma) {
+		if (addr & 1) { /* mmap_gtt case */
+			if (vma->vm_pgoff*PAGE_SIZE == (unsigned long)
+				drm_vma_node_offset_addr(&obj->vma_node))
+				ret = 0;
+			else
+				ret = -EINVAL;
+		} else { /* mmap case */
+			if (vma->vm_file == obj->filp)
+				ret = 0;
+			else
+				ret = -EINVAL;
+		}
+	} else
+		ret = -EINVAL;
+
+	if (locked)
+		up_read(&mm->mmap_sem);
+
+	mmput(mm);
+	return ret;
+}
+
+static void i915_obj_pidarray_validate(struct drm_gem_object *gem_obj)
+{
+	struct drm_i915_gem_object *obj = to_intel_bo(gem_obj);
+	struct drm_device *dev = gem_obj->dev;
+	struct drm_i915_obj_virt_addr *entry, *next;
+	struct drm_file *file;
+	struct drm_i915_file_private *file_priv;
+	struct pid *tgid;
+	int pid_num, i, present;
+
+	/* Run a sanity check on pid_array. All entries in pid_array should
+	 * be a subset of the drm filelist pid entries.
+	 */
+	for (i = 0; i < MAX_OPEN_HANDLE; i++) {
+		if (obj->pid_array[i].pid == 0)
+			continue;
+
+		present = 0;
+		list_for_each_entry(file, &dev->filelist, lhead) {
+			file_priv = file->driver_priv;
+			tgid = file_priv->tgid;
+			pid_num = pid_nr(tgid);
+
+			if (pid_num == obj->pid_array[i].pid) {
+				present = 1;
+				break;
+			}
+		}
+		if (present == 0) {
+			DRM_DEBUG("stale_pid=%d\n", obj->pid_array[i].pid);
+			list_for_each_entry_safe(entry, next,
+					&obj->pid_array[i].virt_addr_head,
+					head) {
+				list_del(&entry->head);
+				kfree(entry);
+			}
+
+			obj->pid_array[i].open_handle_count = 0;
+			obj->pid_array[i].pid = 0;
+		} else {
+			/* Validate the virtual address list */
+			struct task_struct *task =
+				get_pid_task(tgid, PIDTYPE_PID);
+			if (task == NULL)
+				continue;
+
+			list_for_each_entry_safe(entry, next,
+					&obj->pid_array[i].virt_addr_head,
+					head) {
+				if (i915_obj_virt_addr_is_valid(gem_obj, tgid,
+				entry->user_virt_addr)) {
+					DRM_DEBUG("stale_addr=%ld\n",
+					entry->user_virt_addr);
+					list_del(&entry->head);
+					kfree(entry);
+				}
+			}
+		}
+	}
+}
+
+static int
+i915_describe_obj(struct drm_i915_error_state_buf *m,
+		struct drm_i915_gem_object *obj)
+{
+	int i;
+	struct i915_vma *vma;
+	struct drm_i915_obj_virt_addr *entry;
+
+	err_printf(m,
+		"%p: %7zdK  %s    %s     %s      %s     %s      %s       %s     ",
+		   &obj->base,
+		   obj->base.size / 1024,
+		   get_pin_flag(obj),
+		   get_tiling_flag(obj),
+		   obj->dirty ? "Y" : "N",
+		   obj->base.name ? "Y" : "N",
+		   (obj->userptr.mm != 0) ? "Y" : "N",
+		   obj->stolen ? "Y" : "N",
+		   (obj->pin_mappable || obj->fault_mappable) ? "Y" : "N");
+
+	if (obj->madv == __I915_MADV_PURGED)
+		err_printf(m, " purged    ");
+	else if (obj->madv == I915_MADV_DONTNEED)
+		err_printf(m, " purgeable   ");
+	else if (i915_gem_obj_shmem_pages_alloced(obj) != 0)
+		err_printf(m, " allocated   ");
+
+
+	list_for_each_entry(vma, &obj->vma_list, vma_link) {
+		if (!i915_is_ggtt(vma->vm))
+			err_puts(m, " PP    ");
+		else
+			err_puts(m, " G     ");
+		err_printf(m, "  %08lx ", vma->node.start);
+	}
+
+	for (i = 0; i < MAX_OPEN_HANDLE; i++) {
+		if (obj->pid_array[i].pid != 0) {
+			err_printf(m, " (%d: %d:",
+			obj->pid_array[i].pid,
+			obj->pid_array[i].open_handle_count);
+			list_for_each_entry(entry,
+				&obj->pid_array[i].virt_addr_head, head) {
+				if (entry->user_virt_addr & 1)
+					err_printf(m, " %p",
+					(void *)(entry->user_virt_addr & ~1));
+				else
+					err_printf(m, " %p*",
+					(void *)entry->user_virt_addr);
+			}
+			err_printf(m, ") ");
+		}
+	}
+
+	err_printf(m, "\n");
+
+	if (m->bytes == 0 && m->err)
+		return m->err;
+
+	return 0;
+}
+
+static int
+i915_drm_gem_obj_info(int id, void *ptr, void *data)
+{
+	struct drm_i915_gem_object *obj = ptr;
+	struct drm_i915_error_state_buf *m = data;
+	int ret;
+
+	i915_obj_pidarray_validate(&obj->base);
+	ret = i915_describe_obj(m, obj);
+
+	return ret;
+}
+
+static int
+i915_drm_gem_object_per_file_summary(int id, void *ptr, void *data)
+{
+	struct pid_stat_entry *pid_entry = data;
+	struct drm_i915_gem_object *obj = ptr;
+	struct per_file_obj_mem_info *stats = &pid_entry->stats;
+	struct drm_hash_item *hash_item;
+	int i, obj_shared_count = 0;
+
+	i915_obj_pidarray_validate(&obj->base);
+
+	stats->num_obj++;
+
+	if (obj->base.name) {
+
+		if (drm_ht_find_item(&pid_entry->namelist,
+				(unsigned long)obj->base.name, &hash_item)) {
+			struct name_entry *entry =
+				kzalloc(sizeof(struct name_entry), GFP_KERNEL);
+			if (entry == NULL) {
+				DRM_ERROR("alloc failed\n");
+				return -ENOMEM;
+			}
+			entry->hash_item.key = obj->base.name;
+			drm_ht_insert_item(&pid_entry->namelist,
+					&entry->hash_item);
+			list_add_tail(&entry->head, &pid_entry->namefree);
+		} else {
+			DRM_DEBUG("Duplicate obj with name %d for process %s\n",
+				obj->base.name, stats->process_name);
+			return 0;
+		}
+		for (i = 0; i < MAX_OPEN_HANDLE; i++) {
+			if (obj->pid_array[i].pid != 0)
+				obj_shared_count++;
+		}
+		if (WARN_ON(obj_shared_count == 0))
+			return 1;
+
+		DRM_DEBUG("Obj: %p, shared count =%d\n",
+			&obj->base, obj_shared_count);
+
+		if (obj_shared_count > 1)
+			stats->num_obj_shared++;
+		else
+			stats->num_obj_private++;
+	} else {
+		obj_shared_count = 1;
+		stats->num_obj_private++;
+	}
+
+	if (i915_gem_obj_bound_any(obj)) {
+		stats->num_obj_gtt_bound++;
+		if (obj_shared_count > 1)
+			stats->gtt_space_allocated_shared += obj->base.size;
+		else
+			stats->gtt_space_allocated_priv += obj->base.size;
+	}
+
+	if (obj->stolen) {
+		stats->num_obj_stolen++;
+		stats->stolen_space_allocated += obj->base.size;
+	} else if (obj->madv == __I915_MADV_PURGED) {
+		stats->num_obj_purged++;
+	} else if (obj->madv == I915_MADV_DONTNEED) {
+		stats->num_obj_purgeable++;
+		stats->num_obj_allocated++;
+		if (i915_gem_obj_shmem_pages_alloced(obj) != 0) {
+			stats->phys_space_purgeable += obj->base.size;
+			if (obj_shared_count > 1) {
+				stats->phys_space_allocated_shared +=
+					obj->base.size;
+				stats->phys_space_shared_proportion +=
+					obj->base.size/obj_shared_count;
+			} else
+				stats->phys_space_allocated_priv +=
+					obj->base.size;
+		} else
+			WARN_ON(1);
+	} else if (i915_gem_obj_shmem_pages_alloced(obj) != 0) {
+		stats->num_obj_allocated++;
+		if (obj_shared_count > 1) {
+			stats->phys_space_allocated_shared +=
+				obj->base.size;
+			stats->phys_space_shared_proportion +=
+				obj->base.size/obj_shared_count;
+		} else {
+			stats->phys_space_allocated_priv += obj->base.size;
+		}
+	}
+	if (obj->fault_mappable) {
+		stats->num_obj_fault_mappable++;
+		stats->fault_mappable_size += obj->base.size;
+	}
+	return 0;
+}
+
+int i915_get_drm_clients_info(struct drm_i915_error_state_buf *m,
+			struct drm_device *dev)
+{
+	struct drm_file *file;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+
+	struct name_entry *entry, *next;
+	struct pid_stat_entry *pid_entry, *temp_entry;
+	struct pid_stat_entry *new_pid_entry, *new_temp_entry;
+	struct list_head per_pid_stats, sorted_pid_stats;
+	int ret = 0, total_shared_prop_space = 0, total_priv_space = 0;
+
+	INIT_LIST_HEAD(&per_pid_stats);
+	INIT_LIST_HEAD(&sorted_pid_stats);
+
+	err_printf(m,
+		"\n\n  pid   Total  Shared  Priv   Purgeable  Alloced  SharedPHYsize   SharedPHYprop    PrivPHYsize   PurgeablePHYsize   process\n");
+
+	/* Protect the access to global drm resources such as filelist. Protect
+	 * against their removal under our noses, while in use.
+	 */
+	mutex_lock(&drm_global_mutex);
+	ret = i915_mutex_lock_interruptible(dev);
+	if (ret) {
+		mutex_unlock(&drm_global_mutex);
+		return ret;
+	}
+
+	list_for_each_entry(file, &dev->filelist, lhead) {
+		struct pid *tgid;
+		struct drm_i915_file_private *file_priv = file->driver_priv;
+		int pid_num, found = 0;
+
+		tgid = file_priv->tgid;
+		pid_num = pid_nr(tgid);
+
+		list_for_each_entry(pid_entry, &per_pid_stats, head) {
+			if (pid_entry->pid_num == pid_num) {
+				found = 1;
+				break;
+			}
+		}
+
+		if (!found) {
+			struct pid_stat_entry *new_entry =
+				kzalloc(sizeof(struct pid_stat_entry),
+					GFP_KERNEL);
+			if (new_entry == NULL) {
+				DRM_ERROR("alloc failed\n");
+				ret = -ENOMEM;
+				goto out_unlock;
+			}
+			new_entry->pid = tgid;
+			new_entry->pid_num = pid_num;
+			list_add_tail(&new_entry->head, &per_pid_stats);
+			drm_ht_create(&new_entry->namelist,
+				DRM_MAGIC_HASH_ORDER);
+			INIT_LIST_HEAD(&new_entry->namefree);
+			new_entry->stats.process_name = file_priv->process_name;
+			pid_entry = new_entry;
+		}
+
+		ret = idr_for_each(&file->object_idr,
+			&i915_drm_gem_object_per_file_summary, pid_entry);
+		if (ret)
+			break;
+	}
+
+	list_for_each_entry_safe(pid_entry, temp_entry, &per_pid_stats, head) {
+		if (list_empty(&sorted_pid_stats)) {
+			list_del(&pid_entry->head);
+			list_add_tail(&pid_entry->head, &sorted_pid_stats);
+			continue;
+		}
+
+		list_for_each_entry_safe(new_pid_entry, new_temp_entry,
+			&sorted_pid_stats, head) {
+			int prev_space =
+				pid_entry->stats.phys_space_shared_proportion +
+				pid_entry->stats.phys_space_allocated_priv;
+			int new_space =
+				new_pid_entry->
+				stats.phys_space_shared_proportion +
+				new_pid_entry->stats.phys_space_allocated_priv;
+			if (prev_space > new_space) {
+				list_del(&pid_entry->head);
+				list_add_tail(&pid_entry->head,
+					&new_pid_entry->head);
+				break;
+			}
+			if (list_is_last(&new_pid_entry->head,
+				&sorted_pid_stats)) {
+				list_del(&pid_entry->head);
+				list_add_tail(&pid_entry->head,
+						&sorted_pid_stats);
+			}
+		}
+	}
+
+	list_for_each_entry_safe(pid_entry, temp_entry,
+				&sorted_pid_stats, head) {
+		struct task_struct *task = get_pid_task(pid_entry->pid,
+							PIDTYPE_PID);
+		err_printf(m,
+			"%5d %6d %6d %6d %9d %8d %14zdK %14zdK %14zdK  %14zdK     %s",
+			   pid_entry->pid_num,
+			   pid_entry->stats.num_obj,
+			   pid_entry->stats.num_obj_shared,
+			   pid_entry->stats.num_obj_private,
+			   pid_entry->stats.num_obj_purgeable,
+			   pid_entry->stats.num_obj_allocated,
+			   pid_entry->stats.phys_space_allocated_shared/1024,
+			   pid_entry->stats.phys_space_shared_proportion/1024,
+			   pid_entry->stats.phys_space_allocated_priv/1024,
+			   pid_entry->stats.phys_space_purgeable/1024,
+			   pid_entry->stats.process_name);
+
+		if (task == NULL)
+			err_printf(m, "*\n");
+		else
+			err_printf(m, "\n");
+
+		total_shared_prop_space +=
+			pid_entry->stats.phys_space_shared_proportion/1024;
+		total_priv_space +=
+			pid_entry->stats.phys_space_allocated_priv/1024;
+		list_del(&pid_entry->head);
+
+		list_for_each_entry_safe(entry, next,
+					&pid_entry->namefree, head) {
+			list_del(&entry->head);
+			drm_ht_remove_item(&pid_entry->namelist,
+					&entry->hash_item);
+			kfree(entry);
+		}
+		drm_ht_remove(&pid_entry->namelist);
+		kfree(pid_entry);
+	}
+
+	err_printf(m,
+		"\t\t\t\t\t\t\t\t--------------\t-------------\t--------\n");
+	err_printf(m,
+		"\t\t\t\t\t\t\t\t%13zdK\t%12zdK\tTotal\n",
+			total_shared_prop_space, total_priv_space);
+
+out_unlock:
+	mutex_unlock(&dev->struct_mutex);
+	mutex_unlock(&drm_global_mutex);
+
+	if (ret)
+		return ret;
+	if (m->bytes == 0 && m->err)
+		return m->err;
+
+	return 0;
+}
+
+int i915_gem_get_all_obj_info(struct drm_i915_error_state_buf *m,
+			struct drm_device *dev)
+{
+	struct drm_file *file;
+	int pid_num, ret = 0;
+
+	/* Protect the access to global drm resources such as filelist. Protect
+	 * against their removal under our noses, while in use.
+	 */
+	mutex_lock(&drm_global_mutex);
+	ret = i915_mutex_lock_interruptible(dev);
+	if (ret) {
+		mutex_unlock(&drm_global_mutex);
+		return ret;
+	}
+
+	list_for_each_entry(file, &dev->filelist, lhead) {
+		struct pid *tgid;
+		struct drm_i915_file_private *file_priv = file->driver_priv;
+
+		tgid = file_priv->tgid;
+		pid_num = pid_nr(tgid);
+
+		err_printf(m, "\n\n  PID  process\n");
+
+		err_printf(m, "%5d  %s\n",
+			   pid_num, file_priv->process_name);
+
+		err_printf(m,
+			"\n Obj Identifier       Size Pin Tiling Dirty Shared Vmap Stolen Mappable  AllocState Global/PP  GttOffset (PID: handle count: user virt addrs)\n");
+		ret = idr_for_each(&file->object_idr,
+				&i915_drm_gem_obj_info, m);
+		if (ret)
+			break;
+	}
+	mutex_unlock(&dev->struct_mutex);
+	mutex_unlock(&drm_global_mutex);
+
+	if (ret)
+		return ret;
+	if (m->bytes == 0 && m->err)
+		return m->err;
+
+	return 0;
+}
+
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 2c87a79..089c7df 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -161,7 +161,7 @@  static void i915_error_vprintf(struct drm_i915_error_state_buf *e,
 	__i915_error_advance(e, len);
 }
 
-static void i915_error_puts(struct drm_i915_error_state_buf *e,
+void i915_error_puts(struct drm_i915_error_state_buf *e,
 			    const char *str)
 {
 	unsigned len;
diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
index 503847f..b204c92 100644
--- a/drivers/gpu/drm/i915/i915_sysfs.c
+++ b/drivers/gpu/drm/i915/i915_sysfs.c
@@ -582,6 +582,64 @@  static ssize_t error_state_write(struct file *file, struct kobject *kobj,
 	return count;
 }
 
+static ssize_t i915_gem_clients_state_read(struct file *filp,
+				struct kobject *kobj,
+				struct bin_attribute *attr,
+				char *buf, loff_t off, size_t count)
+{
+	struct device *kdev = container_of(kobj, struct device, kobj);
+	struct drm_minor *minor = dev_to_drm_minor(kdev);
+	struct drm_device *dev = minor->dev;
+	struct drm_i915_error_state_buf error_str;
+	ssize_t ret_count = 0;
+	int ret;
+
+	ret = i915_error_state_buf_init(&error_str, to_i915(dev), count, off);
+	if (ret)
+		return ret;
+
+	ret = i915_get_drm_clients_info(&error_str, dev);
+	if (ret)
+		goto out;
+
+	ret_count = count < error_str.bytes ? count : error_str.bytes;
+
+	memcpy(buf, error_str.buf, ret_count);
+out:
+	i915_error_state_buf_release(&error_str);
+
+	return ret ?: ret_count;
+}
+
+static ssize_t i915_gem_objects_state_read(struct file *filp,
+				struct kobject *kobj,
+				struct bin_attribute *attr,
+				char *buf, loff_t off, size_t count)
+{
+	struct device *kdev = container_of(kobj, struct device, kobj);
+	struct drm_minor *minor = dev_to_drm_minor(kdev);
+	struct drm_device *dev = minor->dev;
+	struct drm_i915_error_state_buf error_str;
+	ssize_t ret_count = 0;
+	int ret;
+
+	ret = i915_error_state_buf_init(&error_str, to_i915(dev), count, off);
+	if (ret)
+		return ret;
+
+	ret = i915_gem_get_all_obj_info(&error_str, dev);
+	if (ret)
+		goto out;
+
+	ret_count = count < error_str.bytes ? count : error_str.bytes;
+
+	memcpy(buf, error_str.buf, ret_count);
+out:
+	i915_error_state_buf_release(&error_str);
+
+	return ret ?: ret_count;
+}
+
 static struct bin_attribute error_state_attr = {
 	.attr.name = "error",
 	.attr.mode = S_IRUSR | S_IWUSR,
@@ -590,6 +648,20 @@  static struct bin_attribute error_state_attr = {
 	.write = error_state_write,
 };
 
+static struct bin_attribute i915_gem_client_state_attr = {
+	.attr.name = "i915_gem_meminfo",
+	.attr.mode = S_IRUSR | S_IWUSR,
+	.size = 0,
+	.read = i915_gem_clients_state_read,
+};
+
+static struct bin_attribute i915_gem_objects_state_attr = {
+	.attr.name = "i915_gem_objinfo",
+	.attr.mode = S_IRUSR | S_IWUSR,
+	.size = 0,
+	.read = i915_gem_objects_state_read,
+};
+
 void i915_setup_sysfs(struct drm_device *dev)
 {
 	int ret;
@@ -627,6 +699,17 @@  void i915_setup_sysfs(struct drm_device *dev)
 				    &error_state_attr);
 	if (ret)
 		DRM_ERROR("error_state sysfs setup failed\n");
+
+	ret = sysfs_create_bin_file(&dev->primary->kdev->kobj,
+				    &i915_gem_client_state_attr);
+	if (ret)
+		DRM_ERROR("i915_gem_client_state sysfs setup failed\n");
+
+	ret = sysfs_create_bin_file(&dev->primary->kdev->kobj,
+				    &i915_gem_objects_state_attr);
+	if (ret)
+		DRM_ERROR("i915_gem_objects_state sysfs setup failed\n");
+
 }
 
 void i915_teardown_sysfs(struct drm_device *dev)