From patchwork Wed Mar 19 14:52:33 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 14022730
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
	Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
	Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
	linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 21/34] drm/msm: Add mmu support for non-zero offset
Date: Wed, 19 Mar 2025 07:52:33 -0700
Message-ID: <20250319145425.51935-22-robdclark@gmail.com>
In-Reply-To: <20250319145425.51935-1-robdclark@gmail.com>
References: <20250319145425.51935-1-robdclark@gmail.com>

From: Rob Clark

Non-zero offsets only need to be supported by the io-pgtable MMU; the
other cases are either used only for kernel-managed mappings (where the
offset is always zero) or are for devices which do not support sparse
bindings.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a2xx_gpummu.c |  5 ++++-
 drivers/gpu/drm/msm/msm_gem.c            |  4 ++--
 drivers/gpu/drm/msm/msm_gem.h            |  4 ++--
 drivers/gpu/drm/msm/msm_gem_vma.c        | 13 +++++++------
 drivers/gpu/drm/msm/msm_iommu.c          | 22 ++++++++++++++++++++--
 drivers/gpu/drm/msm/msm_mmu.h            |  2 +-
 6 files changed, 36 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
index 39641551eeb6..6124336af2ec 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
@@ -29,13 +29,16 @@ static void a2xx_gpummu_detach(struct msm_mmu *mmu)
 }
 
 static int a2xx_gpummu_map(struct msm_mmu *mmu, uint64_t iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct a2xx_gpummu *gpummu = to_a2xx_gpummu(mmu);
 	unsigned idx = (iova - GPUMMU_VA_START) / GPUMMU_PAGE_SIZE;
 	struct sg_dma_page_iter dma_iter;
 	unsigned prot_bits = 0;
 
+	WARN_ON(off != 0);
+
 	if (prot & IOMMU_WRITE)
 		prot_bits |= 1;
 	if (prot & IOMMU_READ)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 632f560c81ec..577da3c54c8c 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -441,7 +441,7 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj,
 	vma = lookup_vma(obj, vm);
 
 	if (!vma) {
-		vma = msm_gem_vma_new(vm, obj, range_start, range_end);
+		vma = msm_gem_vma_new(vm, obj, 0, range_start, range_end);
 	} else {
 		GEM_WARN_ON(vma->va.addr < range_start);
 		GEM_WARN_ON((vma->va.addr + obj->size) > range_end);
@@ -483,7 +483,7 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
 	if (IS_ERR(pages))
 		return PTR_ERR(pages);
 
-	return msm_gem_vma_map(vma, prot, msm_obj->sgt, obj->size);
+	return msm_gem_vma_map(vma, prot, msm_obj->sgt);
 }
 
 void msm_gem_unpin_locked(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 7ccdf15476b9..3919b384d599 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -140,9 +140,9 @@ struct msm_gem_vma {
 
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
-		u64 range_start, u64 range_end);
+		u64 offset, u64 range_start, u64 range_end);
 void msm_gem_vma_purge(struct drm_gpuva *vma);
-int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size);
+int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt);
 void msm_gem_vma_close(struct drm_gpuva *vma);
 
 struct msm_gem_object {
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 8c780dd6a936..d51d54c0da33 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -38,8 +38,7 @@ void msm_gem_vma_purge(struct drm_gpuva *vma)
 
 /* Map and pin vma: */
 int
-msm_gem_vma_map(struct drm_gpuva *vma, int prot,
-		struct sg_table *sgt, int size)
+msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
 {
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
 	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
@@ -62,8 +61,9 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot,
 	 * Revisit this if we can come up with a scheme to pre-alloc pages
 	 * for the pgtable in map/unmap ops.
 	 */
-	ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot);
-
+	ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt,
+				  vma->gem.offset, vma->va.range,
+				  prot);
 	if (ret) {
 		msm_vma->mapped = false;
 	}
@@ -97,7 +97,7 @@ void msm_gem_vma_close(struct drm_gpuva *vma)
 /* Create a new vma and allocate an iova for it */
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
-		u64 range_start, u64 range_end)
+		u64 offset, u64 range_start, u64 range_end)
 {
 	struct msm_gem_vm *vm = to_msm_vm(_vm);
 	struct drm_gpuvm_bo *vm_bo;
@@ -109,6 +109,7 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
 		return ERR_PTR(-ENOMEM);
 
 	if (vm->managed) {
+		BUG_ON(offset != 0);
 		spin_lock(&vm->mm_lock);
 		ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
 						obj->size, PAGE_SIZE, 0,
@@ -124,7 +125,7 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
 
 	GEM_WARN_ON((range_end - range_start) > obj->size);
 
-	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0);
+	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
 	vma->mapped = false;
 
 	mutex_lock(&vm->vm_lock);
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index e70088a91283..2fd48e66bc98 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -113,7 +113,8 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
 }
 
 static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
 	struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
@@ -125,6 +126,19 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 		size_t size = sg->length;
 		phys_addr_t phys = sg_phys(sg);
 
+		if (!len)
+			break;
+
+		if (size <= off) {
+			off -= size;
+			continue;
+		}
+
+		phys += off;
+		size -= off;
+		size = min_t(size_t, size, len);
+		off = 0;
+
 		while (size) {
 			size_t pgsize, count, mapped = 0;
 			int ret;
@@ -140,6 +154,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 			phys += mapped;
 			addr += mapped;
 			size -= mapped;
+			len -= mapped;
 
 			if (ret) {
 				msm_iommu_pagetable_unmap(mmu, iova, addr - iova);
@@ -400,11 +415,14 @@ static void msm_iommu_detach(struct msm_mmu *mmu)
 }
 
 static int msm_iommu_map(struct msm_mmu *mmu, uint64_t iova,
-		struct sg_table *sgt, size_t len, int prot)
+		struct sg_table *sgt, size_t off, size_t len,
+		int prot)
 {
 	struct msm_iommu *iommu = to_msm_iommu(mmu);
 	size_t ret;
 
+	WARN_ON(off != 0);
+
 	/* The arm-smmu driver expects the addresses to be sign extended */
 	if (iova & BIT_ULL(48))
 		iova |= GENMASK_ULL(63, 49);
diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h
index c33247e459d6..c874852b7331 100644
--- a/drivers/gpu/drm/msm/msm_mmu.h
+++ b/drivers/gpu/drm/msm/msm_mmu.h
@@ -12,7 +12,7 @@ struct msm_mmu_funcs {
 	void (*detach)(struct msm_mmu *mmu);
 	int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt,
-			size_t len, int prot);
+			size_t off, size_t len, int prot);
 	int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len);
 	void (*destroy)(struct msm_mmu *mmu);
 	void (*resume_translation)(struct msm_mmu *mmu);
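
For readers following along: the core of this patch is the off/len windowing
loop added to msm_iommu_pagetable_map() above, which skips `off` bytes into
the scatterlist and then maps at most `len` bytes. Below is a minimal
standalone sketch (not kernel code) of that loop, using a plain segment
array in place of struct sg_table and a printf in place of the io-pgtable
map call; `struct seg` and `map_range` are illustrative names only.

	/* Sketch of the offset/length windowing over physical segments. */
	#include <stddef.h>
	#include <stdint.h>
	#include <stdio.h>

	struct seg { uint64_t phys; size_t length; };

	static void map_range(uint64_t iova, const struct seg *segs, int nents,
			      size_t off, size_t len)
	{
		for (int i = 0; i < nents && len; i++) {
			uint64_t phys = segs[i].phys;
			size_t size = segs[i].length;

			/* Segment lies entirely before the window: skip it. */
			if (size <= off) {
				off -= size;
				continue;
			}

			/* First segment in the window may be entered mid-way. */
			phys += off;
			size -= off;
			/* Clamp the last segment to the remaining length. */
			size = size < len ? size : len;
			off = 0;

			/* Stand-in for the actual page-table map operation. */
			printf("map iova=0x%llx -> phys=0x%llx size=%zu\n",
			       (unsigned long long)iova,
			       (unsigned long long)phys, size);
			iova += size;
			len -= size;
		}
	}

	int main(void)
	{
		struct seg segs[] = { { 0x1000, 0x1000 }, { 0x8000, 0x2000 } };

		/* Map 0x1800 bytes starting 0x800 bytes into the buffer. */
		map_range(0x100000, segs, 2, 0x800, 0x1800);
		return 0;
	}

Running this maps 0x800 bytes from the tail of the first segment and then
0x1000 bytes from the second, which is why the kernel loop can both enter a
segment part-way (via off) and stop early (via len), as sparse bindings of a
sub-range of a BO require.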