Message ID | 20230912084044.955864-6-adrian.larumbe@collabora.com (mailing list archive)
---|---
State | Superseded |
Series | Add fdinfo support to Panfrost
On Tue, 12 Sep 2023 09:36:59 +0100
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:

> BO's RSS is updated every time new pages are allocated on demand and mapped
> for the object at GPU page fault's IRQ handler, but only for heap buffers.
> The reason this is unnecessary for non-heap buffers is that they are mapped
> onto the GPU's VA space and backed by physical memory in their entirety at
> BO creation time.
>
> This calculation is unnecessary for imported PRIME objects, since heap
> buffers cannot be exported by our driver, and the actual BO RSS size is the
> one reported in its attached dmabuf structure.
>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>

Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>

> ---
>  drivers/gpu/drm/panfrost/panfrost_gem.c | 15 +++++++++++++++
>  drivers/gpu/drm/panfrost/panfrost_gem.h |  5 +++++
>  drivers/gpu/drm/panfrost/panfrost_mmu.c |  1 +
>  3 files changed, 21 insertions(+)
>
> diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
> index 7d8f83d20539..4365434b48db 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_gem.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
> @@ -208,6 +208,20 @@ static enum drm_gem_object_status panfrost_gem_status(struct drm_gem_object *obj
>  	return res;
>  }
>  
> +static size_t panfrost_gem_rss(struct drm_gem_object *obj)
> +{
> +	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
> +
> +	if (bo->is_heap) {
> +		return bo->heap_rss_size;
> +	} else if (bo->base.pages) {
> +		WARN_ON(bo->heap_rss_size);
> +		return bo->base.base.size;
> +	} else {
> +		return 0;
> +	}
> +}
> +
>  static const struct drm_gem_object_funcs panfrost_gem_funcs = {
>  	.free = panfrost_gem_free_object,
>  	.open = panfrost_gem_open,
> @@ -220,6 +234,7 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
>  	.vunmap = drm_gem_shmem_object_vunmap,
>  	.mmap = drm_gem_shmem_object_mmap,
>  	.status = panfrost_gem_status,
> +	.rss = panfrost_gem_rss,
>  	.vm_ops = &drm_gem_shmem_vm_ops,
>  };
>  
> diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
> index ad2877eeeccd..13c0a8149c3a 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_gem.h
> +++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
> @@ -36,6 +36,11 @@ struct panfrost_gem_object {
>  	 */
>  	atomic_t gpu_usecount;
>  
> +	/*
> +	 * Object chunk size currently mapped onto physical memory
> +	 */
> +	size_t heap_rss_size;
> +
>  	bool noexec		:1;
>  	bool is_heap		:1;
>  };
> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> index d54d4e7b2195..7b1490cdaa48 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> @@ -522,6 +522,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>  		   IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
>  
>  	bomapping->active = true;
> +	bo->heap_rss_size += SZ_2M;
>  
>  	dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);
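As a concrete illustration of the accounting this enables (the numbers are an example, not taken from the patch): the Panfrost fault handler maps heap memory in 2 MiB chunks, so a 16 MiB heap BO that has so far taken three GPU page faults would report heap_rss_size = 6 MiB through panfrost_gem_rss(), while its drm_gem_object size stays at 16 MiB. A non-heap BO of the same size is fully backed at creation and reports 16 MiB from the start.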
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 7d8f83d20539..4365434b48db 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -208,6 +208,20 @@ static enum drm_gem_object_status panfrost_gem_status(struct drm_gem_object *obj
 	return res;
 }
 
+static size_t panfrost_gem_rss(struct drm_gem_object *obj)
+{
+	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
+
+	if (bo->is_heap) {
+		return bo->heap_rss_size;
+	} else if (bo->base.pages) {
+		WARN_ON(bo->heap_rss_size);
+		return bo->base.base.size;
+	} else {
+		return 0;
+	}
+}
+
 static const struct drm_gem_object_funcs panfrost_gem_funcs = {
 	.free = panfrost_gem_free_object,
 	.open = panfrost_gem_open,
@@ -220,6 +234,7 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
 	.vunmap = drm_gem_shmem_object_vunmap,
 	.mmap = drm_gem_shmem_object_mmap,
 	.status = panfrost_gem_status,
+	.rss = panfrost_gem_rss,
 	.vm_ops = &drm_gem_shmem_vm_ops,
 };
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
index ad2877eeeccd..13c0a8149c3a 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
@@ -36,6 +36,11 @@ struct panfrost_gem_object {
 	 */
 	atomic_t gpu_usecount;
 
+	/*
+	 * Object chunk size currently mapped onto physical memory
+	 */
+	size_t heap_rss_size;
+
 	bool noexec		:1;
 	bool is_heap		:1;
 };
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index d54d4e7b2195..7b1490cdaa48 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -522,6 +522,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		   IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
 
 	bomapping->active = true;
+	bo->heap_rss_size += SZ_2M;
 
 	dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);
BO's RSS is updated every time new pages are allocated on demand and mapped
for the object at GPU page fault's IRQ handler, but only for heap buffers.
The reason this is unnecessary for non-heap buffers is that they are mapped
onto the GPU's VA space and backed by physical memory in their entirety at
BO creation time.

This calculation is unnecessary for imported PRIME objects, since heap
buffers cannot be exported by our driver, and the actual BO RSS size is the
one reported in its attached dmabuf structure.

Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/panfrost/panfrost_gem.c | 15 +++++++++++++++
 drivers/gpu/drm/panfrost/panfrost_gem.h |  5 +++++
 drivers/gpu/drm/panfrost/panfrost_mmu.c |  1 +
 3 files changed, 21 insertions(+)
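The sketch below is not part of the patch; it illustrates, under the assumption that the DRM core's fdinfo memory accounting prefers a driver-supplied .rss value over the full object size when computing the resident figure, how the callback wired up here would be consumed. The helper name is made up for the example.

#include <drm/drm_gem.h>

/*
 * Hypothetical aggregation helper, for illustration only: when a driver
 * implements .rss (as Panfrost does above), the resident figure reflects
 * the pages actually backing the object; otherwise the whole object size
 * is assumed to be resident.
 */
static void example_account_resident(struct drm_gem_object *obj,
				     size_t *resident)
{
	if (obj->funcs && obj->funcs->rss)
		*resident += obj->funcs->rss(obj);	/* e.g. heap BOs mapped on demand */
	else
		*resident += obj->size;			/* fully backed at creation */
}

With this in place, a heap BO contributes only the chunks faulted in so far to the per-client resident total, instead of its full virtual size.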