
[08/10] accel/ivpu: Disable buffer sharing among VPU contexts

Message ID: 20240105112218.351265-9-jacek.lawrynowicz@linux.intel.com
State: New, archived
Series: accel/ivpu fixes for 6.8

Commit Message

Jacek Lawrynowicz Jan. 5, 2024, 11:22 a.m. UTC
Buffer sharing among VPU contexts was not supported properly. A buffer
was imported into another VPU context as a separate buffer object with
a duplicated sgt. Both the exported and the imported buffer could be
DMA-mapped, causing a double mapping on the same device.

Buffers imported from another VPU context will now just increase the
reference count, leaving only a single sgt, which fixes the problem
above. Buffers still can't be shared among VPU contexts because each
context has its own MMU mapping and ivpu_bo only supports a single MMU
mapping.

The solution would be to use a mapping list as in the panfrost or
etnaviv drivers; this will be implemented in the future if required.

Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
---
 drivers/accel/ivpu/ivpu_gem.c | 44 +++++------------------------------
 1 file changed, 6 insertions(+), 38 deletions(-)
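
For context, the mapping-list approach the commit message mentions (as
used by the panfrost and etnaviv drivers) would keep one mapping object
per (BO, context) pair instead of the single embedded mapping ivpu_bo
has today. Below is a minimal sketch, loosely modelled on panfrost's
panfrost_gem_mapping; every name in it (ivpu_bo_mapping,
ivpu_bo_get_mapping, the mappings list and mappings_lock fields) is
hypothetical, not actual ivpu driver API:

#include <linux/kref.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <drm/drm_gem_shmem_helper.h>

struct ivpu_mmu_context;

/* One VPU-VA mapping per (BO, context) pair -- hypothetical sketch. */
struct ivpu_bo_mapping {
	struct list_head bo_node;	/* links into ivpu_bo.mappings */
	struct ivpu_mmu_context *ctx;	/* context that owns this mapping */
	u64 vpu_addr;			/* VPU address within that context */
	struct kref refcount;
};

/* The BO would carry a list of mappings instead of a single ctx pointer. */
struct ivpu_bo {
	struct drm_gem_shmem_object base;
	struct list_head mappings;	/* list of ivpu_bo_mapping */
	struct mutex mappings_lock;	/* protects the mappings list */
	/* ... */
};

/* Look up (and reference) the mapping a BO already has in a given context. */
static struct ivpu_bo_mapping *
ivpu_bo_get_mapping(struct ivpu_bo *bo, struct ivpu_mmu_context *ctx)
{
	struct ivpu_bo_mapping *mapping, *found = NULL;

	mutex_lock(&bo->mappings_lock);
	list_for_each_entry(mapping, &bo->mappings, bo_node) {
		if (mapping->ctx == ctx) {
			kref_get(&mapping->refcount);
			found = mapping;
			break;
		}
	}
	mutex_unlock(&bo->mappings_lock);

	return found;
}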

Comments

Jeffrey Hugo Jan. 5, 2024, 4:46 p.m. UTC | #1
On 1/5/2024 4:22 AM, Jacek Lawrynowicz wrote:
> This was not supported properly. A buffer was imported to another VPU
> context as a separate buffer object with duplicated sgt.
> Both exported and imported buffers could be DMA mapped causing a double
> mapping on the same device.
> 
> Imported buffer from another VPU context will now have just reference
> increased and there will be a single sgt fixing above problem but
> buffers still can't be shared among VPU contexts because each context
> have its own MMU mapping and ivpu_bo supports only single MMU mapping.
> 
> The solution would be to use a mapping list as in panfrost or etnaviv
> drivers and it will be implemented in future if required.
> 
> Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
> ---
>   drivers/accel/ivpu/ivpu_gem.c | 44 +++++------------------------------
>   1 file changed, 6 insertions(+), 38 deletions(-)
> 
> diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c
> index 4de454bfbf91..8cb4d337552e 100644
> --- a/drivers/accel/ivpu/ivpu_gem.c
> +++ b/drivers/accel/ivpu/ivpu_gem.c
> @@ -222,6 +222,12 @@ static int ivpu_bo_open(struct drm_gem_object *obj, struct drm_file *file)
>   	struct ivpu_bo *bo = to_ivpu_bo(obj);
>   	struct ivpu_addr_range *range;
>   
> +	if (bo->ctx) {
> +		ivpu_warn(vdev, "Can't add BO (vpu_addr 0x%llx) to ctx %u: already in ctx %u\n",
> +			  bo->vpu_addr, file_priv->ctx.id, bo->ctx->id);

Looks like the vpu_addr is being used as a unique identifier for the
BO. Is that really the best value to use? Seems like if I want to
attack another context, knowing the device address of something that
context owns would be useful information.
Carl Vanderlip Jan. 5, 2024, 10:34 p.m. UTC | #2
On 1/5/2024 3:22 AM, Jacek Lawrynowicz wrote:
> Imported buffer from another VPU context will now have just reference
> increased and there will be a single sgt fixing above problem but
> buffers still can't be shared among VPU contexts because each context
> have its own MMU mapping and ivpu_bo supports only single MMU mapping.

This paragraph/sentence needs a rewrite. Here's my take...

"
Buffers imported from another VPU context will now just increase 
reference count, leaving only a single sgt, fixing the problem above.
Buffers still can't be shared among VPU contexts because each has its 
own MMU mapping and ivpu_bo only supports single MMU mappings.
"

-Carl V.
Jacek Lawrynowicz Jan. 10, 2024, 10:53 a.m. UTC | #3
On 05.01.2024 17:46, Jeffrey Hugo wrote:
> On 1/5/2024 4:22 AM, Jacek Lawrynowicz wrote:
>> This was not supported properly. A buffer was imported to another VPU
>> context as a separate buffer object with duplicated sgt.
>> Both exported and imported buffers could be DMA mapped causing a double
>> mapping on the same device.
>>
>> Imported buffer from another VPU context will now have just reference
>> increased and there will be a single sgt fixing above problem but
>> buffers still can't be shared among VPU contexts because each context
>> have its own MMU mapping and ivpu_bo supports only single MMU mapping.
>>
>> The solution would be to use a mapping list as in panfrost or etnaviv
>> drivers and it will be implemented in future if required.
>>
>> Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
>> ---
>>   drivers/accel/ivpu/ivpu_gem.c | 44 +++++------------------------------
>>   1 file changed, 6 insertions(+), 38 deletions(-)
>>
>> diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c
>> index 4de454bfbf91..8cb4d337552e 100644
>> --- a/drivers/accel/ivpu/ivpu_gem.c
>> +++ b/drivers/accel/ivpu/ivpu_gem.c
>> @@ -222,6 +222,12 @@ static int ivpu_bo_open(struct drm_gem_object *obj, struct drm_file *file)
>>       struct ivpu_bo *bo = to_ivpu_bo(obj);
>>       struct ivpu_addr_range *range;
>>
>> +    if (bo->ctx) {
>> +        ivpu_warn(vdev, "Can't add BO (vpu_addr 0x%llx) to ctx %u: already in ctx %u\n",
>> +              bo->vpu_addr, file_priv->ctx.id, bo->ctx->id);
> 
> Looks like the vpu_addr is being used as a unique identifier for the BO.  Is that really the best value to use?  Seems like if I want to attack another context, knowing the device address of something that context owns would be useful information.

OK, I'll remove vpu_addr from the message.
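
For reference, the check without the vpu_addr could read as below. This
is only a sketch of the anticipated v2 (the revised message text is an
assumption; the actual follow-up patch is not shown in this thread):

	if (bo->ctx) {
		/* vpu_addr deliberately omitted so the warning cannot leak
		 * another context's device addresses (hypothetical wording).
		 */
		ivpu_warn(vdev, "Can't add BO to ctx %u: already in ctx %u\n",
			  file_priv->ctx.id, bo->ctx->id);
		return -EALREADY;
	}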
Jacek Lawrynowicz Jan. 10, 2024, 10:54 a.m. UTC | #4
On 05.01.2024 23:34, Carl Vanderlip wrote:
> 
> On 1/5/2024 3:22 AM, Jacek Lawrynowicz wrote:
>> Imported buffer from another VPU context will now have just reference
>> increased and there will be a single sgt fixing above problem but
>> buffers still can't be shared among VPU contexts because each context
>> have its own MMU mapping and ivpu_bo supports only single MMU mapping.
> 
> This paragraph/sentence needs rewrite. Here's my take...
> 
> "
> Buffers imported from another VPU context will now just increase reference count, leaving only a single sgt, fixing the problem above.
> Buffers still can't be shared among VPU contexts because each has its own MMU mapping and ivpu_bo only supports single MMU mappings.

Perfect, thanks!

Patch

diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c
index 4de454bfbf91..8cb4d337552e 100644
--- a/drivers/accel/ivpu/ivpu_gem.c
+++ b/drivers/accel/ivpu/ivpu_gem.c
@@ -222,6 +222,12 @@ static int ivpu_bo_open(struct drm_gem_object *obj, struct drm_file *file)
 	struct ivpu_bo *bo = to_ivpu_bo(obj);
 	struct ivpu_addr_range *range;
 
+	if (bo->ctx) {
+		ivpu_warn(vdev, "Can't add BO (vpu_addr 0x%llx) to ctx %u: already in ctx %u\n",
+			  bo->vpu_addr, file_priv->ctx.id, bo->ctx->id);
+		return -EALREADY;
+	}
+
 	if (bo->flags & DRM_IVPU_BO_SHAVE_MEM)
 		range = &vdev->hw->ranges.shave;
 	else if (bo->flags & DRM_IVPU_BO_DMA_MEM)
@@ -252,47 +258,9 @@ static void ivpu_bo_free(struct drm_gem_object *obj)
 	drm_gem_shmem_free(&bo->base);
 }
 
-static const struct dma_buf_ops ivpu_bo_dmabuf_ops =  {
-	.cache_sgt_mapping = true,
-	.attach = drm_gem_map_attach,
-	.detach = drm_gem_map_detach,
-	.map_dma_buf = drm_gem_map_dma_buf,
-	.unmap_dma_buf = drm_gem_unmap_dma_buf,
-	.release = drm_gem_dmabuf_release,
-	.mmap = drm_gem_dmabuf_mmap,
-	.vmap = drm_gem_dmabuf_vmap,
-	.vunmap = drm_gem_dmabuf_vunmap,
-};
-
-static struct dma_buf *ivpu_bo_export(struct drm_gem_object *obj, int flags)
-{
-	struct drm_device *dev = obj->dev;
-	struct dma_buf_export_info exp_info = {
-		.exp_name = KBUILD_MODNAME,
-		.owner = dev->driver->fops->owner,
-		.ops = &ivpu_bo_dmabuf_ops,
-		.size = obj->size,
-		.flags = flags,
-		.priv = obj,
-		.resv = obj->resv,
-	};
-	void *sgt;
-
-	/*
-	 * Make sure that pages are allocated and dma-mapped before exporting the bo.
-	 * DMA-mapping is required if the bo will be imported to the same device.
-	 */
-	sgt = drm_gem_shmem_get_pages_sgt(to_drm_gem_shmem_obj(obj));
-	if (IS_ERR(sgt))
-		return sgt;
-
-	return drm_gem_dmabuf_export(dev, &exp_info);
-}
-
 static const struct drm_gem_object_funcs ivpu_gem_funcs = {
 	.free = ivpu_bo_free,
 	.open = ivpu_bo_open,
-	.export = ivpu_bo_export,
 	.print_info = drm_gem_shmem_object_print_info,
 	.pin = drm_gem_shmem_object_pin,
 	.unpin = drm_gem_shmem_object_unpin,
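
With ivpu_bo_export and the custom dma_buf_ops removed, the BO is
exported through the default drm_gem_prime helpers, so re-importing it
on the same device hits the same-device fast path in DRM core, which
takes a reference on the existing GEM object instead of creating a
second one with a duplicated sgt. The relevant logic, roughly
paraphrased (and trimmed) from drm_gem_prime_import_dev() in
drivers/gpu/drm/drm_prime.c:

	/* A dma-buf we exported ourselves, imported back on the same
	 * drm_device: reuse the existing GEM object, just bump its refcount.
	 */
	if (dma_buf->ops == &drm_gem_prime_dmabuf_ops) {
		obj = dma_buf->priv;
		if (obj->dev == dev) {
			drm_gem_object_get(obj);
			return obj;
		}
	}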