From patchwork Tue Oct 31 07:31:53 2023
X-Patchwork-Submitter: Stanislaw Gruszka
X-Patchwork-Id: 13441149
From: Stanislaw Gruszka
To: dri-devel@lists.freedesktop.org
Cc: Stanislaw Gruszka, Oded Gabbay, Jeffrey Hugo, Jacek Lawrynowicz
Subject: [PATCH 1/4] accel/ivpu: Allocate vpu_addr in gem->open() callback
Date: Tue, 31 Oct 2023 08:31:53 +0100
Message-Id: <20231031073156.1301669-2-stanislaw.gruszka@linux.intel.com>
In-Reply-To: <20231031073156.1301669-1-stanislaw.gruszka@linux.intel.com>
References: <20231031073156.1301669-1-stanislaw.gruszka@linux.intel.com>

From: Jacek Lawrynowicz

Use the gem->open() callback to simplify the code and prepare for the gem_shmem conversion. It is called during handle creation for a gem object, during prime import, and in the BO_CREATE ioctl, so it can be used for vpu_addr allocation. On the way, remove the unused bo->user_ptr field.
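For reference, the core of the change, condensed from the diff below (it relies on the driver's existing types and headers and is not a standalone build unit):

static int ivpu_bo_open(struct drm_gem_object *obj, struct drm_file *file)
{
	struct ivpu_file_priv *file_priv = file->driver_priv;
	struct ivpu_device *vdev = file_priv->vdev;
	struct ivpu_bo *bo = to_ivpu_bo(obj);
	struct ivpu_addr_range *range;

	/* Pick the VPU address range from the BO flags... */
	if (bo->flags & DRM_IVPU_BO_SHAVE_MEM)
		range = &vdev->hw->ranges.shave;
	else if (bo->flags & DRM_IVPU_BO_DMA_MEM)
		range = &vdev->hw->ranges.dma;
	else
		range = &vdev->hw->ranges.user;

	/* ...and bind the BO to the opening file's MMU context. */
	return ivpu_bo_alloc_vpu_addr(bo, &file_priv->ctx, range);
}

static const struct drm_gem_object_funcs ivpu_gem_funcs = {
	.free = ivpu_bo_free,
	.open = ivpu_bo_open,	/* new: allocates vpu_addr when a handle is created */
	.mmap = ivpu_bo_mmap,
	.vm_ops = &ivpu_vm_ops,
	.get_sg_table = ivpu_bo_get_sg_table,
};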
Signed-off-by: Jacek Lawrynowicz Reviewed-by: Jeffrey Hugo Signed-off-by: Stanislaw Gruszka --- drivers/accel/ivpu/ivpu_gem.c | 66 ++++++++++++--------------- drivers/accel/ivpu/ivpu_gem.h | 1 - drivers/accel/ivpu/ivpu_mmu_context.c | 2 + 3 files changed, 32 insertions(+), 37 deletions(-) diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c index c91852f2edc8..d1077cf90b65 100644 --- a/drivers/accel/ivpu/ivpu_gem.c +++ b/drivers/accel/ivpu/ivpu_gem.c @@ -281,15 +281,6 @@ ivpu_bo_alloc_vpu_addr(struct ivpu_bo *bo, struct ivpu_mmu_context *ctx, struct ivpu_device *vdev = ivpu_bo_to_vdev(bo); int ret; - if (!range) { - if (bo->flags & DRM_IVPU_BO_SHAVE_MEM) - range = &vdev->hw->ranges.shave; - else if (bo->flags & DRM_IVPU_BO_DMA_MEM) - range = &vdev->hw->ranges.dma; - else - range = &vdev->hw->ranges.user; - } - mutex_lock(&ctx->lock); ret = ivpu_mmu_context_insert_node_locked(ctx, range, ivpu_bo_size(bo), &bo->mm_node); if (!ret) { @@ -299,6 +290,9 @@ ivpu_bo_alloc_vpu_addr(struct ivpu_bo *bo, struct ivpu_mmu_context *ctx, } mutex_unlock(&ctx->lock); + if (ret) + ivpu_err(vdev, "Failed to add BO to context %u: %d\n", ctx->id, ret); + return ret; } @@ -337,9 +331,7 @@ void ivpu_bo_remove_all_bos_from_context(struct ivpu_mmu_context *ctx) } static struct ivpu_bo * -ivpu_bo_alloc(struct ivpu_device *vdev, struct ivpu_mmu_context *mmu_context, - u64 size, u32 flags, const struct ivpu_bo_ops *ops, - const struct ivpu_addr_range *range, u64 user_ptr) +ivpu_bo_alloc(struct ivpu_device *vdev, u64 size, u32 flags, const struct ivpu_bo_ops *ops) { struct ivpu_bo *bo; int ret = 0; @@ -364,7 +356,6 @@ ivpu_bo_alloc(struct ivpu_device *vdev, struct ivpu_mmu_context *mmu_context, bo->base.funcs = &ivpu_gem_funcs; bo->flags = flags; bo->ops = ops; - bo->user_ptr = user_ptr; if (ops->type == IVPU_BO_TYPE_SHMEM) ret = drm_gem_object_init(&vdev->drm, &bo->base, size); @@ -384,14 +375,6 @@ ivpu_bo_alloc(struct ivpu_device *vdev, struct ivpu_mmu_context *mmu_context, } } - if (mmu_context) { - ret = ivpu_bo_alloc_vpu_addr(bo, mmu_context, range); - if (ret) { - ivpu_err(vdev, "Failed to add BO to context: %d\n", ret); - goto err_release; - } - } - return bo; err_release: @@ -401,6 +384,23 @@ ivpu_bo_alloc(struct ivpu_device *vdev, struct ivpu_mmu_context *mmu_context, return ERR_PTR(ret); } +static int ivpu_bo_open(struct drm_gem_object *obj, struct drm_file *file) +{ + struct ivpu_file_priv *file_priv = file->driver_priv; + struct ivpu_device *vdev = file_priv->vdev; + struct ivpu_bo *bo = to_ivpu_bo(obj); + struct ivpu_addr_range *range; + + if (bo->flags & DRM_IVPU_BO_SHAVE_MEM) + range = &vdev->hw->ranges.shave; + else if (bo->flags & DRM_IVPU_BO_DMA_MEM) + range = &vdev->hw->ranges.dma; + else + range = &vdev->hw->ranges.user; + + return ivpu_bo_alloc_vpu_addr(bo, &file_priv->ctx, range); +} + static void ivpu_bo_free(struct drm_gem_object *obj) { struct ivpu_bo *bo = to_ivpu_bo(obj); @@ -516,6 +516,7 @@ static const struct vm_operations_struct ivpu_vm_ops = { static const struct drm_gem_object_funcs ivpu_gem_funcs = { .free = ivpu_bo_free, + .open = ivpu_bo_open, .mmap = ivpu_bo_mmap, .vm_ops = &ivpu_vm_ops, .get_sg_table = ivpu_bo_get_sg_table, @@ -537,7 +538,7 @@ ivpu_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file) if (size == 0) return -EINVAL; - bo = ivpu_bo_alloc(vdev, &file_priv->ctx, size, args->flags, &shmem_ops, NULL, 0); + bo = ivpu_bo_alloc(vdev, size, args->flags, &shmem_ops); if (IS_ERR(bo)) { ivpu_err(vdev, "Failed to create BO: %pe (ctx %u 
size %llu flags 0x%x)", bo, file_priv->ctx.id, args->size, args->flags); @@ -578,13 +579,17 @@ ivpu_bo_alloc_internal(struct ivpu_device *vdev, u64 vpu_addr, u64 size, u32 fla range = &vdev->hw->ranges.global; } - bo = ivpu_bo_alloc(vdev, &vdev->gctx, size, flags, &internal_ops, range, 0); + bo = ivpu_bo_alloc(vdev, size, flags, &internal_ops); if (IS_ERR(bo)) { ivpu_err(vdev, "Failed to create BO: %pe (vpu_addr 0x%llx size %llu flags 0x%x)", bo, vpu_addr, size, flags); return NULL; } + ret = ivpu_bo_alloc_vpu_addr(bo, &vdev->gctx, range); + if (ret) + goto err_put; + ret = ivpu_bo_pin(bo); if (ret) goto err_put; @@ -631,7 +636,7 @@ struct drm_gem_object *ivpu_gem_prime_import(struct drm_device *dev, struct dma_ get_dma_buf(buf); - bo = ivpu_bo_alloc(vdev, NULL, buf->size, DRM_IVPU_BO_MAPPABLE, &prime_ops, NULL, 0); + bo = ivpu_bo_alloc(vdev, buf->size, DRM_IVPU_BO_MAPPABLE, &prime_ops); if (IS_ERR(bo)) { ivpu_err(vdev, "Failed to import BO: %pe (size %lu)", bo, buf->size); goto err_detach; @@ -651,8 +656,6 @@ struct drm_gem_object *ivpu_gem_prime_import(struct drm_device *dev, struct dma_ int ivpu_bo_info_ioctl(struct drm_device *dev, void *data, struct drm_file *file) { - struct ivpu_file_priv *file_priv = file->driver_priv; - struct ivpu_device *vdev = to_ivpu_device(dev); struct drm_ivpu_bo_info *args = data; struct drm_gem_object *obj; struct ivpu_bo *bo; @@ -665,21 +668,12 @@ int ivpu_bo_info_ioctl(struct drm_device *dev, void *data, struct drm_file *file bo = to_ivpu_bo(obj); mutex_lock(&bo->lock); - - if (!bo->ctx) { - ret = ivpu_bo_alloc_vpu_addr(bo, &file_priv->ctx, NULL); - if (ret) { - ivpu_err(vdev, "Failed to allocate vpu_addr: %d\n", ret); - goto unlock; - } - } - args->flags = bo->flags; args->mmap_offset = drm_vma_node_offset_addr(&obj->vma_node); args->vpu_addr = bo->vpu_addr; args->size = obj->size; -unlock: mutex_unlock(&bo->lock); + drm_gem_object_put(obj); return ret; } diff --git a/drivers/accel/ivpu/ivpu_gem.h b/drivers/accel/ivpu/ivpu_gem.h index a0b4d4a32b3b..9bbbcd779265 100644 --- a/drivers/accel/ivpu/ivpu_gem.h +++ b/drivers/accel/ivpu/ivpu_gem.h @@ -29,7 +29,6 @@ struct ivpu_bo { u64 vpu_addr; u32 handle; u32 flags; - uintptr_t user_ptr; u32 job_status; }; diff --git a/drivers/accel/ivpu/ivpu_mmu_context.c b/drivers/accel/ivpu/ivpu_mmu_context.c index 7d8005bc78d2..f6de6a4ea608 100644 --- a/drivers/accel/ivpu/ivpu_mmu_context.c +++ b/drivers/accel/ivpu/ivpu_mmu_context.c @@ -410,6 +410,8 @@ ivpu_mmu_context_insert_node_locked(struct ivpu_mmu_context *ctx, { lockdep_assert_held(&ctx->lock); + WARN_ON(!range); + if (!ivpu_disable_mmu_cont_pages && size >= IVPU_MMU_CONT_PAGES_SIZE) { if (!drm_mm_insert_node_in_range(&ctx->mm, node, size, IVPU_MMU_CONT_PAGES_SIZE, 0, range->start, range->end, DRM_MM_INSERT_BEST)) From patchwork Tue Oct 31 07:31:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stanislaw Gruszka X-Patchwork-Id: 13441150 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 57676C4332F for ; Tue, 31 Oct 2023 07:32:38 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 840FE10E411; Tue, 31 Oct 2023 07:32:37 +0000 (UTC) 
From: Stanislaw Gruszka
To: dri-devel@lists.freedesktop.org
Cc: Stanislaw Gruszka, Oded Gabbay, Jeffrey Hugo, Jacek Lawrynowicz
Subject: [PATCH 2/4] accel/ivpu: Fix locking in ivpu_bo_remove_all_bos_from_context()
Date: Tue, 31 Oct 2023 08:31:54 +0100
Message-Id: <20231031073156.1301669-3-stanislaw.gruszka@linux.intel.com>
In-Reply-To: <20231031073156.1301669-1-stanislaw.gruszka@linux.intel.com>
References: <20231031073156.1301669-1-stanislaw.gruszka@linux.intel.com>

From: Jacek Lawrynowicz

ivpu_bo_remove_all_bos_from_context() could race with ivpu_bo_free() when a prime buffer was closed after the vpu device was closed. Move the bo_list from the context to vdev and use a dedicated lock to synchronize it. This list is not modified when a BO is added to or removed from a context.

Also rename ivpu_bo_free_vpu_addr() to ivpu_bo_unbind() because this function does more than just free the vpu_addr.
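For reference, the new locking scheme, condensed from the diff below (driver types as in drivers/accel/ivpu; WARN and error paths trimmed):

struct ivpu_device {
	/* ... */
	struct mutex bo_list_lock;	/* Protects bo_list */
	struct list_head bo_list;	/* All BOs of the device, no longer per context */
	/* ... */
};

void ivpu_bo_remove_all_bos_from_context(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx)
{
	struct ivpu_bo *bo;

	mutex_lock(&vdev->bo_list_lock);
	list_for_each_entry(bo, &vdev->bo_list, bo_list_node) {
		mutex_lock(&bo->lock);
		if (bo->ctx == ctx)
			ivpu_bo_unbind_locked(bo);	/* was ivpu_bo_free_vpu_addr() */
		mutex_unlock(&bo->lock);
	}
	mutex_unlock(&vdev->bo_list_lock);
}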
Signed-off-by: Jacek Lawrynowicz Reviewed-by: Jeffrey Hugo Signed-off-by: Stanislaw Gruszka --- drivers/accel/ivpu/ivpu_drv.c | 7 +- drivers/accel/ivpu/ivpu_drv.h | 3 + drivers/accel/ivpu/ivpu_gem.c | 131 +++++++++++++------------- drivers/accel/ivpu/ivpu_gem.h | 6 +- drivers/accel/ivpu/ivpu_mmu.c | 5 +- drivers/accel/ivpu/ivpu_mmu_context.c | 36 ++++--- drivers/accel/ivpu/ivpu_mmu_context.h | 11 +-- 7 files changed, 109 insertions(+), 90 deletions(-) diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c index 60277ff6af69..cc6a4dca2aaa 100644 --- a/drivers/accel/ivpu/ivpu_drv.c +++ b/drivers/accel/ivpu/ivpu_drv.c @@ -91,8 +91,8 @@ static void file_priv_release(struct kref *ref) ivpu_dbg(vdev, FILE, "file_priv release: ctx %u\n", file_priv->ctx.id); ivpu_cmdq_release_all(file_priv); - ivpu_bo_remove_all_bos_from_context(&file_priv->ctx); ivpu_jsm_context_release(vdev, file_priv->ctx.id); + ivpu_bo_remove_all_bos_from_context(vdev, &file_priv->ctx); ivpu_mmu_user_context_fini(vdev, &file_priv->ctx); drm_WARN_ON(&vdev->drm, xa_erase_irq(&vdev->context_xa, file_priv->ctx.id) != file_priv); mutex_destroy(&file_priv->lock); @@ -528,6 +528,11 @@ static int ivpu_dev_init(struct ivpu_device *vdev) xa_init_flags(&vdev->context_xa, XA_FLAGS_ALLOC); xa_init_flags(&vdev->submitted_jobs_xa, XA_FLAGS_ALLOC1); lockdep_set_class(&vdev->submitted_jobs_xa.xa_lock, &submitted_jobs_xa_lock_class_key); + INIT_LIST_HEAD(&vdev->bo_list); + + ret = drmm_mutex_init(&vdev->drm, &vdev->bo_list_lock); + if (ret) + goto err_xa_destroy; ret = ivpu_pci_init(vdev); if (ret) diff --git a/drivers/accel/ivpu/ivpu_drv.h b/drivers/accel/ivpu/ivpu_drv.h index 79b193ffbc26..e0517ec4133d 100644 --- a/drivers/accel/ivpu/ivpu_drv.h +++ b/drivers/accel/ivpu/ivpu_drv.h @@ -115,6 +115,9 @@ struct ivpu_device { struct xarray context_xa; struct xa_limit context_xa_limit; + struct mutex bo_list_lock; /* Protects bo_list */ + struct list_head bo_list; + struct xarray submitted_jobs_xa; struct task_struct *job_done_thread; diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c index d1077cf90b65..915c53d7bb97 100644 --- a/drivers/accel/ivpu/ivpu_gem.c +++ b/drivers/accel/ivpu/ivpu_gem.c @@ -26,6 +26,17 @@ static const struct drm_gem_object_funcs ivpu_gem_funcs; static struct lock_class_key prime_bo_lock_class_key; +static inline void ivpu_dbg_bo(struct ivpu_device *vdev, struct ivpu_bo *bo, const char *action) +{ + if (bo->ctx) + ivpu_dbg(vdev, BO, "%6s: size %zu has_pages %d dma_mapped %d handle %u ctx %d vpu_addr 0x%llx mmu_mapped %d\n", + action, bo->base.size, (bool)bo->pages, (bool)bo->sgt, bo->handle, + bo->ctx->id, bo->vpu_addr, bo->mmu_mapped); + else + ivpu_dbg(vdev, BO, "%6s: size %zu has_pages %d dma_mapped %d handle %u (not added to context)\n", + action, bo->base.size, (bool)bo->pages, (bool)bo->sgt, bo->handle); +} + static int __must_check prime_alloc_pages_locked(struct ivpu_bo *bo) { /* Pages are managed by the underlying dma-buf */ @@ -245,9 +256,8 @@ int __must_check ivpu_bo_pin(struct ivpu_bo *bo) mutex_lock(&bo->lock); - if (!bo->vpu_addr) { - ivpu_err(vdev, "vpu_addr not set for BO ctx_id: %d handle: %d\n", - bo->ctx->id, bo->handle); + if (!bo->ctx) { + ivpu_err(vdev, "vpu_addr not allocated for BO %d\n", bo->handle); ret = -EINVAL; goto unlock; } @@ -281,53 +291,68 @@ ivpu_bo_alloc_vpu_addr(struct ivpu_bo *bo, struct ivpu_mmu_context *ctx, struct ivpu_device *vdev = ivpu_bo_to_vdev(bo); int ret; - mutex_lock(&ctx->lock); - ret = ivpu_mmu_context_insert_node_locked(ctx, 
range, ivpu_bo_size(bo), &bo->mm_node); + mutex_lock(&bo->lock); + + ret = ivpu_mmu_context_insert_node(ctx, range, bo->base.size, &bo->mm_node); if (!ret) { bo->ctx = ctx; bo->vpu_addr = bo->mm_node.start; - list_add_tail(&bo->ctx_node, &ctx->bo_list); + } else { + ivpu_err(vdev, "Failed to add BO to context %u: %d\n", ctx->id, ret); } - mutex_unlock(&ctx->lock); + ivpu_dbg_bo(vdev, bo, "alloc"); - if (ret) - ivpu_err(vdev, "Failed to add BO to context %u: %d\n", ctx->id, ret); + mutex_unlock(&bo->lock); return ret; } -static void ivpu_bo_free_vpu_addr(struct ivpu_bo *bo) +static void ivpu_bo_unbind_locked(struct ivpu_bo *bo) { struct ivpu_device *vdev = ivpu_bo_to_vdev(bo); - struct ivpu_mmu_context *ctx = bo->ctx; - ivpu_dbg(vdev, BO, "remove from ctx: ctx %d vpu_addr 0x%llx allocated %d mmu_mapped %d\n", - ctx->id, bo->vpu_addr, (bool)bo->sgt, bo->mmu_mapped); + lockdep_assert_held(&bo->lock); - mutex_lock(&bo->lock); + ivpu_dbg_bo(vdev, bo, "unbind"); + /* TODO: dma_unmap */ if (bo->mmu_mapped) { + drm_WARN_ON(&vdev->drm, !bo->ctx); + drm_WARN_ON(&vdev->drm, !bo->vpu_addr); drm_WARN_ON(&vdev->drm, !bo->sgt); - ivpu_mmu_context_unmap_sgt(vdev, ctx, bo->vpu_addr, bo->sgt); + ivpu_mmu_context_unmap_sgt(vdev, bo->ctx, bo->vpu_addr, bo->sgt); bo->mmu_mapped = false; } - mutex_lock(&ctx->lock); - list_del(&bo->ctx_node); - bo->vpu_addr = 0; - bo->ctx = NULL; - ivpu_mmu_context_remove_node_locked(ctx, &bo->mm_node); - mutex_unlock(&ctx->lock); + if (bo->ctx) { + ivpu_mmu_context_remove_node(bo->ctx, &bo->mm_node); + bo->vpu_addr = 0; + bo->ctx = NULL; + } +} +static void ivpu_bo_unbind(struct ivpu_bo *bo) +{ + mutex_lock(&bo->lock); + ivpu_bo_unbind_locked(bo); mutex_unlock(&bo->lock); } -void ivpu_bo_remove_all_bos_from_context(struct ivpu_mmu_context *ctx) +void ivpu_bo_remove_all_bos_from_context(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx) { - struct ivpu_bo *bo, *tmp; + struct ivpu_bo *bo; + + if (drm_WARN_ON(&vdev->drm, !ctx)) + return; - list_for_each_entry_safe(bo, tmp, &ctx->bo_list, ctx_node) - ivpu_bo_free_vpu_addr(bo); + mutex_lock(&vdev->bo_list_lock); + list_for_each_entry(bo, &vdev->bo_list, bo_list_node) { + mutex_lock(&bo->lock); + if (bo->ctx == ctx) + ivpu_bo_unbind_locked(bo); + mutex_unlock(&bo->lock); + } + mutex_unlock(&vdev->bo_list_lock); } static struct ivpu_bo * @@ -375,6 +400,10 @@ ivpu_bo_alloc(struct ivpu_device *vdev, u64 size, u32 flags, const struct ivpu_b } } + mutex_lock(&vdev->bo_list_lock); + list_add_tail(&bo->bo_list_node, &vdev->bo_list); + mutex_unlock(&vdev->bo_list_lock); + return bo; err_release: @@ -406,19 +435,16 @@ static void ivpu_bo_free(struct drm_gem_object *obj) struct ivpu_bo *bo = to_ivpu_bo(obj); struct ivpu_device *vdev = ivpu_bo_to_vdev(bo); - if (bo->ctx) - ivpu_dbg(vdev, BO, "free: ctx %d vpu_addr 0x%llx allocated %d mmu_mapped %d\n", - bo->ctx->id, bo->vpu_addr, (bool)bo->sgt, bo->mmu_mapped); - else - ivpu_dbg(vdev, BO, "free: ctx (released) allocated %d mmu_mapped %d\n", - (bool)bo->sgt, bo->mmu_mapped); + mutex_lock(&vdev->bo_list_lock); + list_del(&bo->bo_list_node); + mutex_unlock(&vdev->bo_list_lock); drm_WARN_ON(&vdev->drm, !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ)); - vunmap(bo->kvaddr); + ivpu_dbg_bo(vdev, bo, "free"); - if (bo->ctx) - ivpu_bo_free_vpu_addr(bo); + ivpu_bo_unbind(bo); + vunmap(bo->kvaddr); if (bo->sgt) ivpu_bo_unmap_and_free_pages(bo); @@ -435,10 +461,6 @@ static void ivpu_bo_free(struct drm_gem_object *obj) static int ivpu_bo_mmap(struct drm_gem_object *obj, struct vm_area_struct 
*vma) { struct ivpu_bo *bo = to_ivpu_bo(obj); - struct ivpu_device *vdev = ivpu_bo_to_vdev(bo); - - ivpu_dbg(vdev, BO, "mmap: ctx %u handle %u vpu_addr 0x%llx size %zu type %s", - bo->ctx->id, bo->handle, bo->vpu_addr, ivpu_bo_size(bo), bo->ops->name); if (obj->import_attach) { /* Drop the reference drm_gem_mmap_obj() acquired.*/ @@ -553,9 +575,6 @@ ivpu_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file) drm_gem_object_put(&bo->base); - ivpu_dbg(vdev, BO, "alloc shmem: ctx %u vpu_addr 0x%llx size %zu flags 0x%x\n", - file_priv->ctx.id, bo->vpu_addr, ivpu_bo_size(bo), bo->flags); - return ret; } @@ -609,9 +628,6 @@ ivpu_bo_alloc_internal(struct ivpu_device *vdev, u64 vpu_addr, u64 size, u32 fla goto err_put; } - ivpu_dbg(vdev, BO, "alloc internal: ctx 0 vpu_addr 0x%llx size %zu flags 0x%x\n", - bo->vpu_addr, ivpu_bo_size(bo), flags); - return bo; err_put: @@ -708,41 +724,30 @@ static void ivpu_bo_print_info(struct ivpu_bo *bo, struct drm_printer *p) { unsigned long dma_refcount = 0; + mutex_lock(&bo->lock); + if (bo->base.dma_buf && bo->base.dma_buf->file) dma_refcount = atomic_long_read(&bo->base.dma_buf->file->f_count); drm_printf(p, "%5u %6d %16llx %10lu %10u %12lu %14s\n", - bo->ctx->id, bo->handle, bo->vpu_addr, ivpu_bo_size(bo), + bo->ctx ? bo->ctx->id : -1, bo->handle, bo->vpu_addr, bo->base.size, kref_read(&bo->base.refcount), dma_refcount, bo->ops->name); + + mutex_unlock(&bo->lock); } void ivpu_bo_list(struct drm_device *dev, struct drm_printer *p) { struct ivpu_device *vdev = to_ivpu_device(dev); - struct ivpu_file_priv *file_priv; - unsigned long ctx_id; struct ivpu_bo *bo; drm_printf(p, "%5s %6s %16s %10s %10s %12s %14s\n", "ctx", "handle", "vpu_addr", "size", "refcount", "dma_refcount", "type"); - mutex_lock(&vdev->gctx.lock); - list_for_each_entry(bo, &vdev->gctx.bo_list, ctx_node) + mutex_lock(&vdev->bo_list_lock); + list_for_each_entry(bo, &vdev->bo_list, bo_list_node) ivpu_bo_print_info(bo, p); - mutex_unlock(&vdev->gctx.lock); - - xa_for_each(&vdev->context_xa, ctx_id, file_priv) { - file_priv = ivpu_file_priv_get_by_ctx_id(vdev, ctx_id); - if (!file_priv) - continue; - - mutex_lock(&file_priv->ctx.lock); - list_for_each_entry(bo, &file_priv->ctx.bo_list, ctx_node) - ivpu_bo_print_info(bo, p); - mutex_unlock(&file_priv->ctx.lock); - - ivpu_file_priv_put(&file_priv); - } + mutex_unlock(&vdev->bo_list_lock); } void ivpu_bo_list_print(struct drm_device *dev) diff --git a/drivers/accel/ivpu/ivpu_gem.h b/drivers/accel/ivpu/ivpu_gem.h index 9bbbcd779265..214d8fd14310 100644 --- a/drivers/accel/ivpu/ivpu_gem.h +++ b/drivers/accel/ivpu/ivpu_gem.h @@ -17,10 +17,10 @@ struct ivpu_bo { const struct ivpu_bo_ops *ops; struct ivpu_mmu_context *ctx; - struct list_head ctx_node; + struct list_head bo_list_node; struct drm_mm_node mm_node; - struct mutex lock; /* Protects: pages, sgt, mmu_mapped */ + struct mutex lock; /* Protects: pages, sgt, ctx, mmu_mapped, vpu_addr */ struct sg_table *sgt; struct page **pages; bool mmu_mapped; @@ -48,7 +48,7 @@ struct ivpu_bo_ops { }; int ivpu_bo_pin(struct ivpu_bo *bo); -void ivpu_bo_remove_all_bos_from_context(struct ivpu_mmu_context *ctx); +void ivpu_bo_remove_all_bos_from_context(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx); void ivpu_bo_list(struct drm_device *dev, struct drm_printer *p); void ivpu_bo_list_print(struct drm_device *dev); diff --git a/drivers/accel/ivpu/ivpu_mmu.c b/drivers/accel/ivpu/ivpu_mmu.c index b46f3743c0e7..2228c44b115f 100644 --- a/drivers/accel/ivpu/ivpu_mmu.c +++ 
b/drivers/accel/ivpu/ivpu_mmu.c @@ -775,9 +775,12 @@ int ivpu_mmu_init(struct ivpu_device *vdev) ivpu_dbg(vdev, MMU, "Init..\n"); - drmm_mutex_init(&vdev->drm, &mmu->lock); ivpu_mmu_config_check(vdev); + ret = drmm_mutex_init(&vdev->drm, &mmu->lock); + if (ret) + return ret; + ret = ivpu_mmu_structs_alloc(vdev); if (ret) return ret; diff --git a/drivers/accel/ivpu/ivpu_mmu_context.c b/drivers/accel/ivpu/ivpu_mmu_context.c index f6de6a4ea608..122c90fecf66 100644 --- a/drivers/accel/ivpu/ivpu_mmu_context.c +++ b/drivers/accel/ivpu/ivpu_mmu_context.c @@ -335,6 +335,9 @@ ivpu_mmu_context_map_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx, u64 prot; u64 i; + if (drm_WARN_ON(&vdev->drm, !ctx)) + return -EINVAL; + if (!IS_ALIGNED(vpu_addr, IVPU_MMU_PAGE_SIZE)) return -EINVAL; /* @@ -382,8 +385,8 @@ ivpu_mmu_context_unmap_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ct int ret; u64 i; - if (!IS_ALIGNED(vpu_addr, IVPU_MMU_PAGE_SIZE)) - ivpu_warn(vdev, "Unaligned vpu_addr: 0x%llx\n", vpu_addr); + if (drm_WARN_ON(&vdev->drm, !ctx)) + return; mutex_lock(&ctx->lock); @@ -404,30 +407,34 @@ ivpu_mmu_context_unmap_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ct } int -ivpu_mmu_context_insert_node_locked(struct ivpu_mmu_context *ctx, - const struct ivpu_addr_range *range, - u64 size, struct drm_mm_node *node) +ivpu_mmu_context_insert_node(struct ivpu_mmu_context *ctx, const struct ivpu_addr_range *range, + u64 size, struct drm_mm_node *node) { - lockdep_assert_held(&ctx->lock); + int ret; WARN_ON(!range); + mutex_lock(&ctx->lock); if (!ivpu_disable_mmu_cont_pages && size >= IVPU_MMU_CONT_PAGES_SIZE) { - if (!drm_mm_insert_node_in_range(&ctx->mm, node, size, IVPU_MMU_CONT_PAGES_SIZE, 0, - range->start, range->end, DRM_MM_INSERT_BEST)) - return 0; + ret = drm_mm_insert_node_in_range(&ctx->mm, node, size, IVPU_MMU_CONT_PAGES_SIZE, 0, + range->start, range->end, DRM_MM_INSERT_BEST); + if (!ret) + goto unlock; } - return drm_mm_insert_node_in_range(&ctx->mm, node, size, IVPU_MMU_PAGE_SIZE, 0, - range->start, range->end, DRM_MM_INSERT_BEST); + ret = drm_mm_insert_node_in_range(&ctx->mm, node, size, IVPU_MMU_PAGE_SIZE, 0, + range->start, range->end, DRM_MM_INSERT_BEST); +unlock: + mutex_unlock(&ctx->lock); + return ret; } void -ivpu_mmu_context_remove_node_locked(struct ivpu_mmu_context *ctx, struct drm_mm_node *node) +ivpu_mmu_context_remove_node(struct ivpu_mmu_context *ctx, struct drm_mm_node *node) { - lockdep_assert_held(&ctx->lock); - + mutex_lock(&ctx->lock); drm_mm_remove_node(node); + mutex_unlock(&ctx->lock); } static int @@ -437,7 +444,6 @@ ivpu_mmu_context_init(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx, u3 int ret; mutex_init(&ctx->lock); - INIT_LIST_HEAD(&ctx->bo_list); ret = ivpu_mmu_pgtable_init(vdev, &ctx->pgtable); if (ret) { diff --git a/drivers/accel/ivpu/ivpu_mmu_context.h b/drivers/accel/ivpu/ivpu_mmu_context.h index f15d8c630d8a..535db3a1fc74 100644 --- a/drivers/accel/ivpu/ivpu_mmu_context.h +++ b/drivers/accel/ivpu/ivpu_mmu_context.h @@ -23,10 +23,9 @@ struct ivpu_mmu_pgtable { }; struct ivpu_mmu_context { - struct mutex lock; /* protects: mm, pgtable, bo_list */ + struct mutex lock; /* Protects: mm, pgtable */ struct drm_mm mm; struct ivpu_mmu_pgtable pgtable; - struct list_head bo_list; u32 id; }; @@ -39,11 +38,9 @@ int ivpu_mmu_user_context_init(struct ivpu_device *vdev, struct ivpu_mmu_context void ivpu_mmu_user_context_fini(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx); void ivpu_mmu_user_context_mark_invalid(struct ivpu_device 
*vdev, u32 ssid); -int ivpu_mmu_context_insert_node_locked(struct ivpu_mmu_context *ctx, - const struct ivpu_addr_range *range, - u64 size, struct drm_mm_node *node); -void ivpu_mmu_context_remove_node_locked(struct ivpu_mmu_context *ctx, - struct drm_mm_node *node); +int ivpu_mmu_context_insert_node(struct ivpu_mmu_context *ctx, const struct ivpu_addr_range *range, + u64 size, struct drm_mm_node *node); +void ivpu_mmu_context_remove_node(struct ivpu_mmu_context *ctx, struct drm_mm_node *node); int ivpu_mmu_context_map_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx, u64 vpu_addr, struct sg_table *sgt, bool llc_coherent);

From patchwork Tue Oct 31 07:31:55 2023
X-Patchwork-Submitter: Stanislaw Gruszka
X-Patchwork-Id: 13441151
From: Stanislaw Gruszka
To: dri-devel@lists.freedesktop.org
Cc: Stanislaw Gruszka, Oded Gabbay, Jeffrey Hugo, Jacek Lawrynowicz
Subject: [PATCH 3/4] accel/ivpu: Remove support for uncached buffers
Date: Tue, 31 Oct 2023 08:31:55 +0100
Message-Id: <20231031073156.1301669-4-stanislaw.gruszka@linux.intel.com>
In-Reply-To: <20231031073156.1301669-1-stanislaw.gruszka@linux.intel.com>
References: <20231031073156.1301669-1-stanislaw.gruszka@linux.intel.com>
From: Jacek Lawrynowicz Usages of DRM_IVPU_BO_UNCACHED should be replaced by DRM_IVPU_BO_WC. There is no functional benefit from DRM_IVPU_BO_UNCACHED if these buffers are never mapped to host VM. This allows to cut the buffer handling code in the kernel driver by half. Usage of DRM_IVPU_BO_UNCACHED buffers was removed from user-space driver and will not be part of first UMD release. Signed-off-by: Jacek Lawrynowicz Reviewed-by: Jeffrey Hugo Signed-off-by: Stanislaw Gruszka --- drivers/accel/ivpu/ivpu_fw.c | 2 +- drivers/accel/ivpu/ivpu_gem.c | 3 --- include/uapi/drm/ivpu_accel.h | 2 +- 3 files changed, 2 insertions(+), 5 deletions(-) diff --git a/drivers/accel/ivpu/ivpu_fw.c b/drivers/accel/ivpu/ivpu_fw.c index 3fd74cd4205f..3d0c18d68f6c 100644 --- a/drivers/accel/ivpu/ivpu_fw.c +++ b/drivers/accel/ivpu/ivpu_fw.c @@ -286,7 +286,7 @@ static int ivpu_fw_mem_init(struct ivpu_device *vdev) if (fw->shave_nn_size) { fw->mem_shave_nn = ivpu_bo_alloc_internal(vdev, vdev->hw->ranges.shave.start, - fw->shave_nn_size, DRM_IVPU_BO_UNCACHED); + fw->shave_nn_size, DRM_IVPU_BO_WC); if (!fw->mem_shave_nn) { ivpu_err(vdev, "Failed to allocate shavenn buffer\n"); ret = -ENOMEM; diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c index 915c53d7bb97..2a91eb1e3627 100644 --- a/drivers/accel/ivpu/ivpu_gem.c +++ b/drivers/accel/ivpu/ivpu_gem.c @@ -89,8 +89,6 @@ static int __must_check shmem_alloc_pages_locked(struct ivpu_bo *bo) if (bo->flags & DRM_IVPU_BO_WC) set_pages_array_wc(pages, npages); - else if (bo->flags & DRM_IVPU_BO_UNCACHED) - set_pages_array_uc(pages, npages); bo->pages = pages; return 0; @@ -366,7 +364,6 @@ ivpu_bo_alloc(struct ivpu_device *vdev, u64 size, u32 flags, const struct ivpu_b switch (flags & DRM_IVPU_BO_CACHE_MASK) { case DRM_IVPU_BO_CACHED: - case DRM_IVPU_BO_UNCACHED: case DRM_IVPU_BO_WC: break; default: diff --git a/include/uapi/drm/ivpu_accel.h b/include/uapi/drm/ivpu_accel.h index 262db0c3beee..de1944e42c65 100644 --- a/include/uapi/drm/ivpu_accel.h +++ b/include/uapi/drm/ivpu_accel.h @@ -196,7 +196,7 @@ struct drm_ivpu_bo_create { * * %DRM_IVPU_BO_UNCACHED: * - * Allocated BO will not be cached on host side nor snooped on the VPU side. + * Not supported. Use DRM_IVPU_BO_WC instead. 
 * * %DRM_IVPU_BO_WC: *

From patchwork Tue Oct 31 07:31:56 2023
X-Patchwork-Submitter: Stanislaw Gruszka
X-Patchwork-Id: 13441152
From: Stanislaw Gruszka
To: dri-devel@lists.freedesktop.org
Cc: Stanislaw Gruszka, Oded Gabbay, Jeffrey Hugo, Jacek Lawrynowicz
Subject: [PATCH 4/4] accel/ivpu: Use GEM shmem helper for all buffers
Date: Tue, 31 Oct 2023 08:31:56 +0100
Message-Id: <20231031073156.1301669-5-stanislaw.gruszka@linux.intel.com>
In-Reply-To: <20231031073156.1301669-1-stanislaw.gruszka@linux.intel.com>
References: <20231031073156.1301669-1-stanislaw.gruszka@linux.intel.com>

From: Jacek Lawrynowicz

Use struct drm_gem_shmem_object as the base for struct ivpu_bo. This cuts the buffer management code roughly in half.
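For reference, the resulting BO layout and GEM callbacks, condensed from the diff below (most callbacks are now delegated to the generic drm_gem_shmem_* helpers):

struct ivpu_bo {
	struct drm_gem_shmem_object base;	/* replaces drm_gem_object + ivpu_bo_ops */
	struct ivpu_mmu_context *ctx;
	struct list_head bo_list_node;
	struct drm_mm_node mm_node;

	struct mutex lock;	/* Protects: ctx, mmu_mapped, vpu_addr */
	u64 vpu_addr;
	u32 handle;
	u32 flags;
	u32 job_status;		/* Valid only for command buffer */
	bool mmu_mapped;
};

static const struct drm_gem_object_funcs ivpu_gem_funcs = {
	.free = ivpu_bo_free,
	.open = ivpu_bo_open,
	.export = ivpu_bo_export,
	.print_info = drm_gem_shmem_object_print_info,
	.pin = drm_gem_shmem_object_pin,
	.unpin = drm_gem_shmem_object_unpin,
	.get_sg_table = drm_gem_shmem_object_get_sg_table,
	.vmap = drm_gem_shmem_object_vmap,
	.vunmap = drm_gem_shmem_object_vunmap,
	.mmap = drm_gem_shmem_object_mmap,
	.vm_ops = &drm_gem_shmem_vm_ops,
};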
Signed-off-by: Jacek Lawrynowicz Reviewed-by: Jeffrey Hugo Signed-off-by: Stanislaw Gruszka --- drivers/accel/ivpu/Kconfig | 2 +- drivers/accel/ivpu/ivpu_drv.c | 6 +- drivers/accel/ivpu/ivpu_gem.c | 518 ++++++++-------------------------- drivers/accel/ivpu/ivpu_gem.h | 70 +---- drivers/accel/ivpu/ivpu_job.c | 8 +- 5 files changed, 144 insertions(+), 460 deletions(-) diff --git a/drivers/accel/ivpu/Kconfig b/drivers/accel/ivpu/Kconfig index 82749e0da524..682c53245286 100644 --- a/drivers/accel/ivpu/Kconfig +++ b/drivers/accel/ivpu/Kconfig @@ -6,7 +6,7 @@ config DRM_ACCEL_IVPU depends on X86_64 && !UML depends on PCI && PCI_MSI select FW_LOADER - select SHMEM + select DRM_GEM_SHMEM_HELPER select GENERIC_ALLOCATOR help Choose this option if you have a system with an 14th generation diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c index cc6a4dca2aaa..51fa60b6254c 100644 --- a/drivers/accel/ivpu/ivpu_drv.c +++ b/drivers/accel/ivpu/ivpu_drv.c @@ -360,7 +360,7 @@ int ivpu_boot(struct ivpu_device *vdev) int ret; /* Update boot params located at first 4KB of FW memory */ - ivpu_fw_boot_params_setup(vdev, vdev->fw->mem->kvaddr); + ivpu_fw_boot_params_setup(vdev, ivpu_bo_vaddr(vdev->fw->mem)); ret = ivpu_hw_boot_fw(vdev); if (ret) { @@ -409,7 +409,9 @@ static const struct drm_driver driver = { .open = ivpu_open, .postclose = ivpu_postclose, - .gem_prime_import = ivpu_gem_prime_import, + + .gem_create_object = ivpu_gem_create_object, + .gem_prime_import_sg_table = drm_gem_shmem_prime_import_sg_table, .ioctls = ivpu_drm_ioctls, .num_ioctls = ARRAY_SIZE(ivpu_drm_ioctls), diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c index 2a91eb1e3627..1dda4f38ea25 100644 --- a/drivers/accel/ivpu/ivpu_gem.c +++ b/drivers/accel/ivpu/ivpu_gem.c @@ -20,224 +20,18 @@ #include "ivpu_mmu.h" #include "ivpu_mmu_context.h" -MODULE_IMPORT_NS(DMA_BUF); - static const struct drm_gem_object_funcs ivpu_gem_funcs; -static struct lock_class_key prime_bo_lock_class_key; - static inline void ivpu_dbg_bo(struct ivpu_device *vdev, struct ivpu_bo *bo, const char *action) { if (bo->ctx) ivpu_dbg(vdev, BO, "%6s: size %zu has_pages %d dma_mapped %d handle %u ctx %d vpu_addr 0x%llx mmu_mapped %d\n", - action, bo->base.size, (bool)bo->pages, (bool)bo->sgt, bo->handle, - bo->ctx->id, bo->vpu_addr, bo->mmu_mapped); + action, ivpu_bo_size(bo), (bool)bo->base.pages, (bool)bo->base.sgt, + bo->handle, bo->ctx->id, bo->vpu_addr, bo->mmu_mapped); else ivpu_dbg(vdev, BO, "%6s: size %zu has_pages %d dma_mapped %d handle %u (not added to context)\n", - action, bo->base.size, (bool)bo->pages, (bool)bo->sgt, bo->handle); -} - -static int __must_check prime_alloc_pages_locked(struct ivpu_bo *bo) -{ - /* Pages are managed by the underlying dma-buf */ - return 0; -} - -static void prime_free_pages_locked(struct ivpu_bo *bo) -{ - /* Pages are managed by the underlying dma-buf */ -} - -static int prime_map_pages_locked(struct ivpu_bo *bo) -{ - struct ivpu_device *vdev = ivpu_bo_to_vdev(bo); - struct sg_table *sgt; - - sgt = dma_buf_map_attachment_unlocked(bo->base.import_attach, DMA_BIDIRECTIONAL); - if (IS_ERR(sgt)) { - ivpu_err(vdev, "Failed to map attachment: %ld\n", PTR_ERR(sgt)); - return PTR_ERR(sgt); - } - - bo->sgt = sgt; - return 0; -} - -static void prime_unmap_pages_locked(struct ivpu_bo *bo) -{ - dma_buf_unmap_attachment_unlocked(bo->base.import_attach, bo->sgt, DMA_BIDIRECTIONAL); - bo->sgt = NULL; -} - -static const struct ivpu_bo_ops prime_ops = { - .type = IVPU_BO_TYPE_PRIME, - .name = 
"prime", - .alloc_pages = prime_alloc_pages_locked, - .free_pages = prime_free_pages_locked, - .map_pages = prime_map_pages_locked, - .unmap_pages = prime_unmap_pages_locked, -}; - -static int __must_check shmem_alloc_pages_locked(struct ivpu_bo *bo) -{ - int npages = ivpu_bo_size(bo) >> PAGE_SHIFT; - struct page **pages; - - pages = drm_gem_get_pages(&bo->base); - if (IS_ERR(pages)) - return PTR_ERR(pages); - - if (bo->flags & DRM_IVPU_BO_WC) - set_pages_array_wc(pages, npages); - - bo->pages = pages; - return 0; -} - -static void shmem_free_pages_locked(struct ivpu_bo *bo) -{ - if (ivpu_bo_cache_mode(bo) != DRM_IVPU_BO_CACHED) - set_pages_array_wb(bo->pages, ivpu_bo_size(bo) >> PAGE_SHIFT); - - drm_gem_put_pages(&bo->base, bo->pages, true, false); - bo->pages = NULL; -} - -static int ivpu_bo_map_pages_locked(struct ivpu_bo *bo) -{ - int npages = ivpu_bo_size(bo) >> PAGE_SHIFT; - struct ivpu_device *vdev = ivpu_bo_to_vdev(bo); - struct sg_table *sgt; - int ret; - - sgt = drm_prime_pages_to_sg(&vdev->drm, bo->pages, npages); - if (IS_ERR(sgt)) { - ivpu_err(vdev, "Failed to allocate sgtable\n"); - return PTR_ERR(sgt); - } - - ret = dma_map_sgtable(vdev->drm.dev, sgt, DMA_BIDIRECTIONAL, 0); - if (ret) { - ivpu_err(vdev, "Failed to map BO in IOMMU: %d\n", ret); - goto err_free_sgt; - } - - bo->sgt = sgt; - return 0; - -err_free_sgt: - kfree(sgt); - return ret; -} - -static void ivpu_bo_unmap_pages_locked(struct ivpu_bo *bo) -{ - struct ivpu_device *vdev = ivpu_bo_to_vdev(bo); - - dma_unmap_sgtable(vdev->drm.dev, bo->sgt, DMA_BIDIRECTIONAL, 0); - sg_free_table(bo->sgt); - kfree(bo->sgt); - bo->sgt = NULL; -} - -static const struct ivpu_bo_ops shmem_ops = { - .type = IVPU_BO_TYPE_SHMEM, - .name = "shmem", - .alloc_pages = shmem_alloc_pages_locked, - .free_pages = shmem_free_pages_locked, - .map_pages = ivpu_bo_map_pages_locked, - .unmap_pages = ivpu_bo_unmap_pages_locked, -}; - -static int __must_check internal_alloc_pages_locked(struct ivpu_bo *bo) -{ - unsigned int i, npages = ivpu_bo_size(bo) >> PAGE_SHIFT; - struct page **pages; - int ret; - - pages = kvmalloc_array(npages, sizeof(*bo->pages), GFP_KERNEL); - if (!pages) - return -ENOMEM; - - for (i = 0; i < npages; i++) { - pages[i] = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO); - if (!pages[i]) { - ret = -ENOMEM; - goto err_free_pages; - } - cond_resched(); - } - - bo->pages = pages; - return 0; - -err_free_pages: - while (i--) - put_page(pages[i]); - kvfree(pages); - return ret; -} - -static void internal_free_pages_locked(struct ivpu_bo *bo) -{ - unsigned int i, npages = ivpu_bo_size(bo) >> PAGE_SHIFT; - - if (ivpu_bo_cache_mode(bo) != DRM_IVPU_BO_CACHED) - set_pages_array_wb(bo->pages, ivpu_bo_size(bo) >> PAGE_SHIFT); - - for (i = 0; i < npages; i++) - put_page(bo->pages[i]); - - kvfree(bo->pages); - bo->pages = NULL; -} - -static const struct ivpu_bo_ops internal_ops = { - .type = IVPU_BO_TYPE_INTERNAL, - .name = "internal", - .alloc_pages = internal_alloc_pages_locked, - .free_pages = internal_free_pages_locked, - .map_pages = ivpu_bo_map_pages_locked, - .unmap_pages = ivpu_bo_unmap_pages_locked, -}; - -static int __must_check ivpu_bo_alloc_and_map_pages_locked(struct ivpu_bo *bo) -{ - struct ivpu_device *vdev = ivpu_bo_to_vdev(bo); - int ret; - - lockdep_assert_held(&bo->lock); - drm_WARN_ON(&vdev->drm, bo->sgt); - - ret = bo->ops->alloc_pages(bo); - if (ret) { - ivpu_err(vdev, "Failed to allocate pages for BO: %d", ret); - return ret; - } - - ret = bo->ops->map_pages(bo); - if (ret) { - ivpu_err(vdev, "Failed to map pages 
for BO: %d", ret); - goto err_free_pages; - } - return ret; - -err_free_pages: - bo->ops->free_pages(bo); - return ret; -} - -static void ivpu_bo_unmap_and_free_pages(struct ivpu_bo *bo) -{ - mutex_lock(&bo->lock); - - WARN_ON(!bo->sgt); - bo->ops->unmap_pages(bo); - WARN_ON(bo->sgt); - bo->ops->free_pages(bo); - WARN_ON(bo->pages); - - mutex_unlock(&bo->lock); + action, ivpu_bo_size(bo), (bool)bo->base.pages, (bool)bo->base.sgt, + bo->handle); } /* @@ -254,20 +48,24 @@ int __must_check ivpu_bo_pin(struct ivpu_bo *bo) mutex_lock(&bo->lock); + ivpu_dbg_bo(vdev, bo, "pin"); + if (!bo->ctx) { ivpu_err(vdev, "vpu_addr not allocated for BO %d\n", bo->handle); ret = -EINVAL; goto unlock; } - if (!bo->sgt) { - ret = ivpu_bo_alloc_and_map_pages_locked(bo); - if (ret) + if (!bo->mmu_mapped) { + struct sg_table *sgt = drm_gem_shmem_get_pages_sgt(&bo->base); + + if (IS_ERR(sgt)) { + ret = PTR_ERR(sgt); + ivpu_err(vdev, "Failed to map BO in IOMMU: %d\n", ret); goto unlock; - } + } - if (!bo->mmu_mapped) { - ret = ivpu_mmu_context_map_sgt(vdev, bo->ctx, bo->vpu_addr, bo->sgt, + ret = ivpu_mmu_context_map_sgt(vdev, bo->ctx, bo->vpu_addr, sgt, ivpu_bo_is_snooped(bo)); if (ret) { ivpu_err(vdev, "Failed to map BO in MMU: %d\n", ret); @@ -291,13 +89,14 @@ ivpu_bo_alloc_vpu_addr(struct ivpu_bo *bo, struct ivpu_mmu_context *ctx, mutex_lock(&bo->lock); - ret = ivpu_mmu_context_insert_node(ctx, range, bo->base.size, &bo->mm_node); + ret = ivpu_mmu_context_insert_node(ctx, range, ivpu_bo_size(bo), &bo->mm_node); if (!ret) { bo->ctx = ctx; bo->vpu_addr = bo->mm_node.start; } else { ivpu_err(vdev, "Failed to add BO to context %u: %d\n", ctx->id, ret); } + ivpu_dbg_bo(vdev, bo, "alloc"); mutex_unlock(&bo->lock); @@ -314,11 +113,12 @@ static void ivpu_bo_unbind_locked(struct ivpu_bo *bo) ivpu_dbg_bo(vdev, bo, "unbind"); /* TODO: dma_unmap */ + if (bo->mmu_mapped) { drm_WARN_ON(&vdev->drm, !bo->ctx); drm_WARN_ON(&vdev->drm, !bo->vpu_addr); - drm_WARN_ON(&vdev->drm, !bo->sgt); - ivpu_mmu_context_unmap_sgt(vdev, bo->ctx, bo->vpu_addr, bo->sgt); + drm_WARN_ON(&vdev->drm, !bo->base.sgt); + ivpu_mmu_context_unmap_sgt(vdev, bo->ctx, bo->vpu_addr, bo->base.sgt); bo->mmu_mapped = false; } @@ -353,15 +153,32 @@ void ivpu_bo_remove_all_bos_from_context(struct ivpu_device *vdev, struct ivpu_m mutex_unlock(&vdev->bo_list_lock); } -static struct ivpu_bo * -ivpu_bo_alloc(struct ivpu_device *vdev, u64 size, u32 flags, const struct ivpu_bo_ops *ops) +struct drm_gem_object *ivpu_gem_create_object(struct drm_device *dev, size_t size) { struct ivpu_bo *bo; - int ret = 0; - if (drm_WARN_ON(&vdev->drm, size == 0 || !PAGE_ALIGNED(size))) + if (size == 0 || !PAGE_ALIGNED(size)) return ERR_PTR(-EINVAL); + bo = kzalloc(sizeof(*bo), GFP_KERNEL); + if (!bo) + return ERR_PTR(-ENOMEM); + + bo->base.base.funcs = &ivpu_gem_funcs; + bo->base.pages_mark_dirty_on_put = true; /* VPU can dirty a BO anytime */ + + INIT_LIST_HEAD(&bo->bo_list_node); + mutex_init(&bo->lock); + + return &bo->base.base; +} + +static struct ivpu_bo * +ivpu_bo_create(struct ivpu_device *vdev, u64 size, u32 flags) +{ + struct drm_gem_shmem_object *shmem; + struct ivpu_bo *bo; + switch (flags & DRM_IVPU_BO_CACHE_MASK) { case DRM_IVPU_BO_CACHED: case DRM_IVPU_BO_WC: @@ -370,44 +187,22 @@ ivpu_bo_alloc(struct ivpu_device *vdev, u64 size, u32 flags, const struct ivpu_b return ERR_PTR(-EINVAL); } - bo = kzalloc(sizeof(*bo), GFP_KERNEL); - if (!bo) - return ERR_PTR(-ENOMEM); + shmem = drm_gem_shmem_create(&vdev->drm, size); + if (IS_ERR(shmem)) + return ERR_CAST(shmem); - 
mutex_init(&bo->lock); - bo->base.funcs = &ivpu_gem_funcs; + bo = to_ivpu_bo(&shmem->base); + bo->base.map_wc = flags & DRM_IVPU_BO_WC; bo->flags = flags; - bo->ops = ops; - - if (ops->type == IVPU_BO_TYPE_SHMEM) - ret = drm_gem_object_init(&vdev->drm, &bo->base, size); - else - drm_gem_private_object_init(&vdev->drm, &bo->base, size); - - if (ret) { - ivpu_err(vdev, "Failed to initialize drm object\n"); - goto err_free; - } - - if (flags & DRM_IVPU_BO_MAPPABLE) { - ret = drm_gem_create_mmap_offset(&bo->base); - if (ret) { - ivpu_err(vdev, "Failed to allocate mmap offset\n"); - goto err_release; - } - } mutex_lock(&vdev->bo_list_lock); list_add_tail(&bo->bo_list_node, &vdev->bo_list); mutex_unlock(&vdev->bo_list_lock); - return bo; + ivpu_dbg(vdev, BO, "create: vpu_addr 0x%llx size %zu flags 0x%x\n", + bo->vpu_addr, bo->base.base.size, flags); -err_release: - drm_gem_object_release(&bo->base); -err_free: - kfree(bo); - return ERR_PTR(ret); + return bo; } static int ivpu_bo_open(struct drm_gem_object *obj, struct drm_file *file) @@ -429,8 +224,8 @@ static int ivpu_bo_open(struct drm_gem_object *obj, struct drm_file *file) static void ivpu_bo_free(struct drm_gem_object *obj) { + struct ivpu_device *vdev = to_ivpu_device(obj->dev); struct ivpu_bo *bo = to_ivpu_bo(obj); - struct ivpu_device *vdev = ivpu_bo_to_vdev(bo); mutex_lock(&vdev->bo_list_lock); list_del(&bo->bo_list_node); @@ -441,108 +236,64 @@ static void ivpu_bo_free(struct drm_gem_object *obj) ivpu_dbg_bo(vdev, bo, "free"); ivpu_bo_unbind(bo); - vunmap(bo->kvaddr); - - if (bo->sgt) - ivpu_bo_unmap_and_free_pages(bo); - - if (bo->base.import_attach) - drm_prime_gem_destroy(&bo->base, bo->sgt); - - drm_gem_object_release(&bo->base); - mutex_destroy(&bo->lock); - kfree(bo); -} - -static int ivpu_bo_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) -{ - struct ivpu_bo *bo = to_ivpu_bo(obj); - if (obj->import_attach) { - /* Drop the reference drm_gem_mmap_obj() acquired.*/ - drm_gem_object_put(obj); - vma->vm_private_data = NULL; - return dma_buf_mmap(obj->dma_buf, vma, 0); - } - - vm_flags_set(vma, VM_PFNMAP | VM_DONTEXPAND); - vma->vm_page_prot = ivpu_bo_pgprot(bo, vm_get_page_prot(vma->vm_flags)); - - return 0; + drm_WARN_ON(obj->dev, bo->base.pages_use_count > 1); + drm_gem_shmem_free(&bo->base); } -static struct sg_table *ivpu_bo_get_sg_table(struct drm_gem_object *obj) -{ - struct ivpu_bo *bo = to_ivpu_bo(obj); - loff_t npages = obj->size >> PAGE_SHIFT; - int ret = 0; - - mutex_lock(&bo->lock); - - if (!bo->sgt) - ret = ivpu_bo_alloc_and_map_pages_locked(bo); - - mutex_unlock(&bo->lock); - - if (ret) - return ERR_PTR(ret); - - return drm_prime_pages_to_sg(obj->dev, bo->pages, npages); -} +static const struct dma_buf_ops ivpu_bo_dmabuf_ops = { + .cache_sgt_mapping = true, + .attach = drm_gem_map_attach, + .detach = drm_gem_map_detach, + .map_dma_buf = drm_gem_map_dma_buf, + .unmap_dma_buf = drm_gem_unmap_dma_buf, + .release = drm_gem_dmabuf_release, + .mmap = drm_gem_dmabuf_mmap, + .vmap = drm_gem_dmabuf_vmap, + .vunmap = drm_gem_dmabuf_vunmap, +}; -static vm_fault_t ivpu_vm_fault(struct vm_fault *vmf) +static struct dma_buf *ivpu_bo_export(struct drm_gem_object *obj, int flags) { - struct vm_area_struct *vma = vmf->vma; - struct drm_gem_object *obj = vma->vm_private_data; - struct ivpu_bo *bo = to_ivpu_bo(obj); - loff_t npages = obj->size >> PAGE_SHIFT; - pgoff_t page_offset; - struct page *page; - vm_fault_t ret; - int err; + struct drm_device *dev = obj->dev; + struct dma_buf_export_info exp_info = { + .exp_name = 
KBUILD_MODNAME, + .owner = dev->driver->fops->owner, + .ops = &ivpu_bo_dmabuf_ops, + .size = obj->size, + .flags = flags, + .priv = obj, + .resv = obj->resv, + }; + void *sgt; - mutex_lock(&bo->lock); + /* + * Make sure that pages are allocated and dma-mapped before exporting the bo. + * DMA-mapping is required if the bo will be imported to the same device. + */ + sgt = drm_gem_shmem_get_pages_sgt(to_drm_gem_shmem_obj(obj)); + if (IS_ERR(sgt)) + return sgt; - if (!bo->sgt) { - err = ivpu_bo_alloc_and_map_pages_locked(bo); - if (err) { - ret = vmf_error(err); - goto unlock; - } - } - - /* We don't use vmf->pgoff since that has the fake offset */ - page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT; - if (page_offset >= npages) { - ret = VM_FAULT_SIGBUS; - } else { - page = bo->pages[page_offset]; - ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page)); - } - -unlock: - mutex_unlock(&bo->lock); - - return ret; + return drm_gem_dmabuf_export(dev, &exp_info); } -static const struct vm_operations_struct ivpu_vm_ops = { - .fault = ivpu_vm_fault, - .open = drm_gem_vm_open, - .close = drm_gem_vm_close, -}; - static const struct drm_gem_object_funcs ivpu_gem_funcs = { .free = ivpu_bo_free, .open = ivpu_bo_open, - .mmap = ivpu_bo_mmap, - .vm_ops = &ivpu_vm_ops, - .get_sg_table = ivpu_bo_get_sg_table, + .export = ivpu_bo_export, + .print_info = drm_gem_shmem_object_print_info, + .pin = drm_gem_shmem_object_pin, + .unpin = drm_gem_shmem_object_unpin, + .get_sg_table = drm_gem_shmem_object_get_sg_table, + .vmap = drm_gem_shmem_object_vmap, + .vunmap = drm_gem_shmem_object_vunmap, + .mmap = drm_gem_shmem_object_mmap, + .vm_ops = &drm_gem_shmem_vm_ops, }; -int -ivpu_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file) +int ivpu_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file) { struct ivpu_file_priv *file_priv = file->driver_priv; struct ivpu_device *vdev = file_priv->vdev; @@ -557,20 +308,20 @@ ivpu_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file) if (size == 0) return -EINVAL; - bo = ivpu_bo_alloc(vdev, size, args->flags, &shmem_ops); + bo = ivpu_bo_create(vdev, size, args->flags); if (IS_ERR(bo)) { ivpu_err(vdev, "Failed to create BO: %pe (ctx %u size %llu flags 0x%x)", bo, file_priv->ctx.id, args->size, args->flags); return PTR_ERR(bo); } - ret = drm_gem_handle_create(file, &bo->base, &bo->handle); + ret = drm_gem_handle_create(file, &bo->base.base, &bo->handle); if (!ret) { args->vpu_addr = bo->vpu_addr; args->handle = bo->handle; } - drm_gem_object_put(&bo->base); + drm_gem_object_put(&bo->base.base); return ret; } @@ -580,8 +331,8 @@ ivpu_bo_alloc_internal(struct ivpu_device *vdev, u64 vpu_addr, u64 size, u32 fla { const struct ivpu_addr_range *range; struct ivpu_addr_range fixed_range; + struct iosys_map map; struct ivpu_bo *bo; - pgprot_t prot; int ret; drm_WARN_ON(&vdev->drm, !PAGE_ALIGNED(vpu_addr)); @@ -595,7 +346,7 @@ ivpu_bo_alloc_internal(struct ivpu_device *vdev, u64 vpu_addr, u64 size, u32 fla range = &vdev->hw->ranges.global; } - bo = ivpu_bo_alloc(vdev, size, flags, &internal_ops); + bo = ivpu_bo_create(vdev, size, flags); if (IS_ERR(bo)) { ivpu_err(vdev, "Failed to create BO: %pe (vpu_addr 0x%llx size %llu flags 0x%x)", bo, vpu_addr, size, flags); @@ -610,61 +361,23 @@ ivpu_bo_alloc_internal(struct ivpu_device *vdev, u64 vpu_addr, u64 size, u32 fla if (ret) goto err_put; - if (ivpu_bo_cache_mode(bo) != DRM_IVPU_BO_CACHED) - drm_clflush_pages(bo->pages, ivpu_bo_size(bo) >> PAGE_SHIFT); - - if 
(bo->flags & DRM_IVPU_BO_WC) - set_pages_array_wc(bo->pages, ivpu_bo_size(bo) >> PAGE_SHIFT); - else if (bo->flags & DRM_IVPU_BO_UNCACHED) - set_pages_array_uc(bo->pages, ivpu_bo_size(bo) >> PAGE_SHIFT); - - prot = ivpu_bo_pgprot(bo, PAGE_KERNEL); - bo->kvaddr = vmap(bo->pages, ivpu_bo_size(bo) >> PAGE_SHIFT, VM_MAP, prot); - if (!bo->kvaddr) { - ivpu_err(vdev, "Failed to map BO into kernel virtual memory\n"); + ret = drm_gem_shmem_vmap(&bo->base, &map); + if (ret) goto err_put; - } return bo; err_put: - drm_gem_object_put(&bo->base); + drm_gem_object_put(&bo->base.base); return NULL; } void ivpu_bo_free_internal(struct ivpu_bo *bo) { - drm_gem_object_put(&bo->base); -} - -struct drm_gem_object *ivpu_gem_prime_import(struct drm_device *dev, struct dma_buf *buf) -{ - struct ivpu_device *vdev = to_ivpu_device(dev); - struct dma_buf_attachment *attach; - struct ivpu_bo *bo; - - attach = dma_buf_attach(buf, dev->dev); - if (IS_ERR(attach)) - return ERR_CAST(attach); + struct iosys_map map = IOSYS_MAP_INIT_VADDR(bo->base.vaddr); - get_dma_buf(buf); - - bo = ivpu_bo_alloc(vdev, buf->size, DRM_IVPU_BO_MAPPABLE, &prime_ops); - if (IS_ERR(bo)) { - ivpu_err(vdev, "Failed to import BO: %pe (size %lu)", bo, buf->size); - goto err_detach; - } - - lockdep_set_class(&bo->lock, &prime_bo_lock_class_key); - - bo->base.import_attach = attach; - - return &bo->base; - -err_detach: - dma_buf_detach(buf, attach); - dma_buf_put(buf); - return ERR_CAST(bo); + drm_gem_shmem_vunmap(&bo->base, &map); + drm_gem_object_put(&bo->base.base); } int ivpu_bo_info_ioctl(struct drm_device *dev, void *data, struct drm_file *file) @@ -723,12 +436,23 @@ static void ivpu_bo_print_info(struct ivpu_bo *bo, struct drm_printer *p) mutex_lock(&bo->lock); - if (bo->base.dma_buf && bo->base.dma_buf->file) - dma_refcount = atomic_long_read(&bo->base.dma_buf->file->f_count); + if (bo->base.base.dma_buf && bo->base.base.dma_buf->file) + dma_refcount = atomic_long_read(&bo->base.base.dma_buf->file->f_count); + + drm_printf(p, "%-3u %-6d 0x%-12llx %-10lu 0x%-8x %-4u %-8lu", + bo->ctx->id, bo->handle, bo->vpu_addr, bo->base.base.size, + bo->flags, kref_read(&bo->base.base.refcount), dma_refcount); + + if (bo->base.base.import_attach) + drm_printf(p, " imported"); + + if (bo->base.pages) + drm_printf(p, " has_pages"); + + if (bo->mmu_mapped) + drm_printf(p, " mmu_mapped"); - drm_printf(p, "%5u %6d %16llx %10lu %10u %12lu %14s\n", - bo->ctx ? 
bo->ctx->id : -1, bo->handle, bo->vpu_addr, bo->base.size, - kref_read(&bo->base.refcount), dma_refcount, bo->ops->name); + drm_printf(p, "\n"); mutex_unlock(&bo->lock); } @@ -738,8 +462,8 @@ void ivpu_bo_list(struct drm_device *dev, struct drm_printer *p) struct ivpu_device *vdev = to_ivpu_device(dev); struct ivpu_bo *bo; - drm_printf(p, "%5s %6s %16s %10s %10s %12s %14s\n", - "ctx", "handle", "vpu_addr", "size", "refcount", "dma_refcount", "type"); + drm_printf(p, "%-3s %-6s %-14s %-10s %-10s %-4s %-8s %s\n", + "ctx", "handle", "vpu_addr", "size", "flags", "refs", "dma_refs", "attribs"); mutex_lock(&vdev->bo_list_lock); list_for_each_entry(bo, &vdev->bo_list, bo_list_node) diff --git a/drivers/accel/ivpu/ivpu_gem.h b/drivers/accel/ivpu/ivpu_gem.h index 214d8fd14310..d75cad0d3c74 100644 --- a/drivers/accel/ivpu/ivpu_gem.h +++ b/drivers/accel/ivpu/ivpu_gem.h @@ -6,83 +6,52 @@ #define __IVPU_GEM_H__ #include +#include #include -struct dma_buf; -struct ivpu_bo_ops; struct ivpu_file_priv; struct ivpu_bo { - struct drm_gem_object base; - const struct ivpu_bo_ops *ops; - + struct drm_gem_shmem_object base; struct ivpu_mmu_context *ctx; struct list_head bo_list_node; struct drm_mm_node mm_node; - struct mutex lock; /* Protects: pages, sgt, ctx, mmu_mapped, vpu_addr */ - struct sg_table *sgt; - struct page **pages; - bool mmu_mapped; - - void *kvaddr; + struct mutex lock; /* Protects: ctx, mmu_mapped, vpu_addr */ u64 vpu_addr; u32 handle; u32 flags; - u32 job_status; -}; - -enum ivpu_bo_type { - IVPU_BO_TYPE_SHMEM = 1, - IVPU_BO_TYPE_INTERNAL, - IVPU_BO_TYPE_PRIME, -}; - -struct ivpu_bo_ops { - enum ivpu_bo_type type; - const char *name; - int (*alloc_pages)(struct ivpu_bo *bo); - void (*free_pages)(struct ivpu_bo *bo); - int (*map_pages)(struct ivpu_bo *bo); - void (*unmap_pages)(struct ivpu_bo *bo); + u32 job_status; /* Valid only for command buffer */ + bool mmu_mapped; }; int ivpu_bo_pin(struct ivpu_bo *bo); void ivpu_bo_remove_all_bos_from_context(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx); -void ivpu_bo_list(struct drm_device *dev, struct drm_printer *p); -void ivpu_bo_list_print(struct drm_device *dev); -struct ivpu_bo * -ivpu_bo_alloc_internal(struct ivpu_device *vdev, u64 vpu_addr, u64 size, u32 flags); +struct drm_gem_object *ivpu_gem_create_object(struct drm_device *dev, size_t size); +struct ivpu_bo *ivpu_bo_alloc_internal(struct ivpu_device *vdev, u64 vpu_addr, u64 size, u32 flags); void ivpu_bo_free_internal(struct ivpu_bo *bo); -struct drm_gem_object *ivpu_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf); -void ivpu_bo_unmap_sgt_and_remove_from_context(struct ivpu_bo *bo); int ivpu_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file); int ivpu_bo_info_ioctl(struct drm_device *dev, void *data, struct drm_file *file); int ivpu_bo_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file); +void ivpu_bo_list(struct drm_device *dev, struct drm_printer *p); +void ivpu_bo_list_print(struct drm_device *dev); + static inline struct ivpu_bo *to_ivpu_bo(struct drm_gem_object *obj) { - return container_of(obj, struct ivpu_bo, base); + return container_of(obj, struct ivpu_bo, base.base); } static inline void *ivpu_bo_vaddr(struct ivpu_bo *bo) { - return bo->kvaddr; + return bo->base.vaddr; } static inline size_t ivpu_bo_size(struct ivpu_bo *bo) { - return bo->base.size; -} - -static inline struct page *ivpu_bo_get_page(struct ivpu_bo *bo, u64 offset) -{ - if (offset > ivpu_bo_size(bo) || !bo->pages) - return NULL; - - return 
bo->pages[offset / PAGE_SIZE]; + return bo->base.base.size; } static inline u32 ivpu_bo_cache_mode(struct ivpu_bo *bo) @@ -95,20 +64,9 @@ static inline bool ivpu_bo_is_snooped(struct ivpu_bo *bo) return ivpu_bo_cache_mode(bo) == DRM_IVPU_BO_CACHED; } -static inline pgprot_t ivpu_bo_pgprot(struct ivpu_bo *bo, pgprot_t prot) -{ - if (bo->flags & DRM_IVPU_BO_WC) - return pgprot_writecombine(prot); - - if (bo->flags & DRM_IVPU_BO_UNCACHED) - return pgprot_noncached(prot); - - return prot; -} - static inline struct ivpu_device *ivpu_bo_to_vdev(struct ivpu_bo *bo) { - return to_ivpu_device(bo->base.dev); + return to_ivpu_device(bo->base.base.dev); } static inline void *ivpu_to_cpu_addr(struct ivpu_bo *bo, u32 vpu_addr) diff --git a/drivers/accel/ivpu/ivpu_job.c b/drivers/accel/ivpu/ivpu_job.c index 15a408fad494..02acd8dba02a 100644 --- a/drivers/accel/ivpu/ivpu_job.c +++ b/drivers/accel/ivpu/ivpu_job.c @@ -266,7 +266,7 @@ static void job_release(struct kref *ref) for (i = 0; i < job->bo_count; i++) if (job->bos[i]) - drm_gem_object_put(&job->bos[i]->base); + drm_gem_object_put(&job->bos[i]->base.base); dma_fence_put(job->done_fence); ivpu_file_priv_put(&job->file_priv); @@ -450,7 +450,7 @@ ivpu_job_prepare_bos_for_submit(struct drm_file *file, struct ivpu_job *job, u32 } bo = job->bos[CMD_BUF_IDX]; - if (!dma_resv_test_signaled(bo->base.resv, DMA_RESV_USAGE_READ)) { + if (!dma_resv_test_signaled(bo->base.base.resv, DMA_RESV_USAGE_READ)) { ivpu_warn(vdev, "Buffer is already in use\n"); return -EBUSY; } @@ -470,7 +470,7 @@ ivpu_job_prepare_bos_for_submit(struct drm_file *file, struct ivpu_job *job, u32 } for (i = 0; i < buf_count; i++) { - ret = dma_resv_reserve_fences(job->bos[i]->base.resv, 1); + ret = dma_resv_reserve_fences(job->bos[i]->base.base.resv, 1); if (ret) { ivpu_warn(vdev, "Failed to reserve fences: %d\n", ret); goto unlock_reservations; @@ -479,7 +479,7 @@ ivpu_job_prepare_bos_for_submit(struct drm_file *file, struct ivpu_job *job, u32 for (i = 0; i < buf_count; i++) { usage = (i == CMD_BUF_IDX) ? DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_BOOKKEEP; - dma_resv_add_fence(job->bos[i]->base.resv, job->done_fence, usage); + dma_resv_add_fence(job->bos[i]->base.base.resv, job->done_fence, usage); } unlock_reservations: