
[v2,0/2] drm/etnaviv: Fix GPUVA range collision when CPU page size is not equal to GPU page size

Message ID 20241025204355.595805-1-sui.jingfeng@linux.dev (mailing list archive)

Message

Sui Jingfeng Oct. 25, 2024, 8:43 p.m. UTC
Etnaviv assumes that the GPU page size is 4KiB. However, when a
softpin-capable GPU is used on a kernel configured with a larger CPU
page size, the GPUVA ranges allocated by userspace collide and cannot
be inserted exactly at the requested address holes.


For example, when running glmark2-drm:

[kernel space debug log]

 etnaviv 0000:03:00.0: Insert bo failed, va: 0xfd38b000, size: 0x4000
 etnaviv 0000:03:00.0: Insert bo failed, va: 0xfd38a000, size: 0x4000

[user space debug log]

bo->va = 0xfd38c000, bo->size=0x100000
bo->va = 0xfd38b000, bo->size=0x1000  <-- Insert IOVA fails here.
bo->va = 0xfd38a000, bo->size=0x1000
bo->va = 0xfd389000, bo->size=0x1000


The root cause is that the kernel-side BO occupies a larger GPUVA
range than userspace assumes.

To solve this problem, we first track the GPU-visible size of the GEM
buffer object, then map and unmap the GEM BO exactly according to that
GPUVA size. This ensures the GPUVA range is fully mapped and unmapped,
no more and no less.

v2:
- Aligned to the GPU page size (Lucas)

v1:
- No GPUVA range wasting (Lucas)
Link: https://lore.kernel.org/dri-devel/20241004194207.1013744-1-sui.jingfeng@linux.dev/

v0:
Link: https://lore.kernel.org/dri-devel/20240930221706.399139-1-sui.jingfeng@linux.dev/

Sui Jingfeng (2):
  drm/etnaviv: Record GPU visible size of GEM BO separately
  drm/etnaviv: Map and unmap GPUVA range with respect to the GPUVA size

 drivers/gpu/drm/etnaviv/etnaviv_gem.c | 11 ++++----
 drivers/gpu/drm/etnaviv/etnaviv_gem.h |  5 ++++
 drivers/gpu/drm/etnaviv/etnaviv_mmu.c | 36 +++++++++------------------
 3 files changed, 22 insertions(+), 30 deletions(-)

Comments

Lucas Stach Oct. 28, 2024, 3:57 p.m. UTC | #1
Am Samstag, dem 26.10.2024 um 04:43 +0800 schrieb Sui Jingfeng:
> Etnaviv assumes that the GPU page size is 4KiB. However, when a
> softpin-capable GPU is used on a kernel configured with a larger CPU
> page size, the GPUVA ranges allocated by userspace collide and cannot
> be inserted exactly at the requested address holes.
> 
> 
> For example, when running glmark2-drm:
> 
> [kernel space debug log]
> 
>  etnaviv 0000:03:00.0: Insert bo failed, va: 0xfd38b000, size: 0x4000
>  etnaviv 0000:03:00.0: Insert bo failed, va: 0xfd38a000, size: 0x4000
> 
> [user space debug log]
> 
> bo->va = 0xfd38c000, bo->size=0x100000
> bo->va = 0xfd38b000, bo->size=0x1000  <-- Insert IOVA fails here.
> bo->va = 0xfd38a000, bo->size=0x1000
> bo->va = 0xfd389000, bo->size=0x1000
> 
> 
> The root cause is that the kernel-side BO occupies a larger GPUVA
> range than userspace assumes.
> 
> To solve this problem, we first track the GPU-visible size of the GEM
> buffer object, then map and unmap the GEM BO exactly according to that
> GPUVA size. This ensures the GPUVA range is fully mapped and unmapped,
> no more and no less.
> 

Thanks, series applied to etnaviv/next

> v2:
> - Aligned to the GPU page size (Lucas)
> 
> v1:
> - No GPUVA range wasting (Lucas)
> Link: https://lore.kernel.org/dri-devel/20241004194207.1013744-1-sui.jingfeng@linux.dev/
> 
> v0:
> Link: https://lore.kernel.org/dri-devel/20240930221706.399139-1-sui.jingfeng@linux.dev/
> 
> Sui Jingfeng (2):
>   drm/etnaviv: Record GPU visible size of GEM BO separately
>   drm/etnaviv: Map and unmap GPUVA range with respect to the GPUVA size
> 
>  drivers/gpu/drm/etnaviv/etnaviv_gem.c | 11 ++++----
>  drivers/gpu/drm/etnaviv/etnaviv_gem.h |  5 ++++
>  drivers/gpu/drm/etnaviv/etnaviv_mmu.c | 36 +++++++++------------------
>  3 files changed, 22 insertions(+), 30 deletions(-)
>