From patchwork Thu Aug 29 10:32:50 2019
X-Patchwork-Submitter: Gerd Hoffmann
X-Patchwork-Id: 11120909
From: Gerd Hoffmann <kraxel@redhat.com>
To: dri-devel@lists.freedesktop.org
Cc: David Airlie, open list, "open list:VIRTIO GPU DRIVER", Gerd Hoffmann, gurchetansingh@chromium.org
Subject: [PATCH v9 07/18] drm/virtio: add virtio_gpu_object_array & helpers
Date: Thu, 29 Aug 2019 12:32:50 +0200
Message-Id: <20190829103301.3539-8-kraxel@redhat.com>
In-Reply-To: <20190829103301.3539-1-kraxel@redhat.com>
References: <20190829103301.3539-1-kraxel@redhat.com>

Some helper functions to manage an array of gem objects.

v9: use dma_resv_lock_interruptible.
v6:
 - add ticket to struct virtio_gpu_object_array.
 - add virtio_gpu_array_{lock,unlock}_resv helpers.
 - add virtio_gpu_array_add_fence helper.
v5: some small optimizations (Chia-I Wu).
v4: make them virtio-private instead of generic helpers.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
[fixup] virtio_gpu_array_lock_resv
---
 drivers/gpu/drm/virtio/virtgpu_drv.h | 17 +++++
 drivers/gpu/drm/virtio/virtgpu_gem.c | 93 ++++++++++++++++++++++++++++
 2 files changed, 110 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index db57bbb36216..b6bd2b1141fb 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -84,6 +84,12 @@ struct virtio_gpu_object {
 #define gem_to_virtio_gpu_obj(gobj) \
         container_of((gobj), struct virtio_gpu_object, gem_base)
 
+struct virtio_gpu_object_array {
+        struct ww_acquire_ctx ticket;
+        u32 nents, total;
+        struct drm_gem_object *objs[];
+};
+
 struct virtio_gpu_vbuffer;
 struct virtio_gpu_device;
 
@@ -251,6 +257,17 @@ int virtio_gpu_mode_dumb_mmap(struct drm_file *file_priv,
                               struct drm_device *dev,
                               uint32_t handle, uint64_t *offset_p);
 
+struct virtio_gpu_object_array *virtio_gpu_array_alloc(u32 nents);
+struct virtio_gpu_object_array*
+virtio_gpu_array_from_handles(struct drm_file *drm_file, u32 *handles, u32 nents);
+void virtio_gpu_array_add_obj(struct virtio_gpu_object_array *objs,
+                              struct drm_gem_object *obj);
+int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs);
+void virtio_gpu_array_unlock_resv(struct virtio_gpu_object_array *objs);
+void virtio_gpu_array_add_fence(struct virtio_gpu_object_array *objs,
+                                struct dma_fence *fence);
+void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs);
+
 /* virtio vg */
 int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev);
 void virtio_gpu_free_vbufs(struct virtio_gpu_device *vgdev);
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 6fe6f72f64d1..fd60b45aabd2 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -171,3 +171,96 @@ void virtio_gpu_gem_object_close(struct drm_gem_object *obj,
                                                qobj->hw_res_handle);
         virtio_gpu_object_unreserve(qobj);
 }
+
+struct virtio_gpu_object_array *virtio_gpu_array_alloc(u32 nents)
+{
+        struct virtio_gpu_object_array *objs;
+        size_t size = sizeof(*objs) + sizeof(objs->objs[0]) * nents;
+
+        objs = kmalloc(size, GFP_KERNEL);
+        if (!objs)
+                return NULL;
+
+        objs->nents = 0;
+        objs->total = nents;
+        return objs;
+}
+
+static void virtio_gpu_array_free(struct virtio_gpu_object_array *objs)
+{
+        kfree(objs);
+}
+
+struct virtio_gpu_object_array*
+virtio_gpu_array_from_handles(struct drm_file *drm_file, u32 *handles, u32 nents)
+{
+        struct virtio_gpu_object_array *objs;
+        u32 i;
+
+        objs = virtio_gpu_array_alloc(nents);
+        if (!objs)
+                return NULL;
+
+        for (i = 0; i < nents; i++) {
+                objs->objs[i] = drm_gem_object_lookup(drm_file, handles[i]);
+                if (!objs->objs[i]) {
+                        objs->nents = i;
+                        virtio_gpu_array_put_free(objs);
+                        return NULL;
+                }
+        }
+        objs->nents = i;
+        return objs;
+}
+
+void virtio_gpu_array_add_obj(struct virtio_gpu_object_array *objs,
+                              struct drm_gem_object *obj)
+{
+        if (WARN_ON_ONCE(objs->nents == objs->total))
+                return;
+
+        drm_gem_object_get(obj);
+        objs->objs[objs->nents] = obj;
+        objs->nents++;
+}
+
+int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs)
+{
+        int ret;
+
+        if (objs->nents == 1) {
+                ret = dma_resv_lock_interruptible(objs->objs[0]->resv, NULL);
+        } else {
+                ret = drm_gem_lock_reservations(objs->objs, objs->nents,
+                                                &objs->ticket);
+        }
+        return ret;
+}
+
+void virtio_gpu_array_unlock_resv(struct virtio_gpu_object_array *objs)
+{
+        if (objs->nents == 1) {
+                dma_resv_unlock(objs->objs[0]->resv);
+        } else {
+                drm_gem_unlock_reservations(objs->objs, objs->nents,
+                                            &objs->ticket);
+        }
+}
+
+void virtio_gpu_array_add_fence(struct virtio_gpu_object_array *objs,
+                                struct dma_fence *fence)
+{
+        int i;
+
+        for (i = 0; i < objs->nents; i++)
+                dma_resv_add_excl_fence(objs->objs[i]->resv, fence);
+}
+
+void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs)
+{
+        u32 i;
+
+        for (i = 0; i < objs->nents; i++)
+                drm_gem_object_put_unlocked(objs->objs[i]);
+        virtio_gpu_array_free(objs);
+}
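
Not part of the patch, just review context: a minimal sketch of how these helpers
are meant to chain together on a submission path. The function name, the handle
array, and the fence below are hypothetical; this patch adds the helpers only,
and the actual call sites come later in the series.

/* Hypothetical caller, for illustration only, not in this patch. */
static int example_submit(struct drm_file *file, u32 *handles, u32 nents,
                          struct dma_fence *fence)
{
        struct virtio_gpu_object_array *objs;
        int ret;

        /* Look up the handles; takes a reference on each GEM object. */
        objs = virtio_gpu_array_from_handles(file, handles, nents);
        if (!objs)
                return -ENOENT;

        /*
         * One object: plain interruptible dma_resv lock.
         * Several objects: ww-mutex locking via the embedded ticket.
         */
        ret = virtio_gpu_array_lock_resv(objs);
        if (ret) {
                virtio_gpu_array_put_free(objs);
                return ret;
        }

        /* Attach the fence as the exclusive fence of every object. */
        virtio_gpu_array_add_fence(objs, fence);
        virtio_gpu_array_unlock_resv(objs);

        /* Drop the object references and free the array itself. */
        virtio_gpu_array_put_free(objs);
        return 0;
}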