From patchwork Tue Jan 12 21:23:48 2016
X-Patchwork-Submitter: Marek Olšák
X-Patchwork-Id: 8021001
From: Marek Olšák
To: dri-devel@lists.freedesktop.org
Subject: [PATCH 02/10] amdgpu: add the interface of waiting multiple fences
Date: Tue, 12 Jan 2016 22:23:48 +0100
Message-Id: <1452633836-26855-3-git-send-email-maraeo@gmail.com>
In-Reply-To: <1452633836-26855-1-git-send-email-maraeo@gmail.com>
References: <1452633836-26855-1-git-send-email-maraeo@gmail.com>
X-Mailer: git-send-email 2.1.4

From: Junwei Zhang

Signed-off-by: Junwei Zhang
Reviewed-by: Christian König
Reviewed-by: Jammy Zhou
---
 amdgpu/amdgpu.h          | 22 +++++++++++++++
 amdgpu/amdgpu_cs.c       | 71 ++++++++++++++++++++++++++++++++++++++++++++++++
 include/drm/amdgpu_drm.h | 27 ++++++++++++++++++
 3 files changed, 120 insertions(+)

diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
index e44d802..9ae6ca3 100644
--- a/amdgpu/amdgpu.h
+++ b/amdgpu/amdgpu.h
@@ -902,6 +902,28 @@ int amdgpu_cs_query_fence_status(struct amdgpu_cs_fence *fence,
 				 uint64_t flags,
 				 uint32_t *expired);
 
+/**
+ *  Wait for multiple fences
+ *
+ * \param   fences      - \c [in] The fence array to wait
+ * \param   fence_count - \c [in] The fence count
+ * \param   wait_all    - \c [in] If true, wait all fences to be signaled,
+ *                                otherwise, wait at least one fence
+ * \param   timeout_ns  - \c [in] The timeout to wait, in nanoseconds
+ * \param   status      - \c [out] '1' for signaled, '0' for timeout
+ *
+ * \return  0 on success
+ *          <0 - Negative POSIX Error code
+ *
+ * \note    Currently it supports only one amdgpu_device. All fences come from
+ *          the same amdgpu_device with the same fd.
+*/
+int amdgpu_cs_wait_fences(struct amdgpu_cs_fence *fences,
+			  uint32_t fence_count,
+			  bool wait_all,
+			  uint64_t timeout_ns,
+			  uint32_t *status);
+
 /*
  * Query / Info API
  *
diff --git a/amdgpu/amdgpu_cs.c b/amdgpu/amdgpu_cs.c
index 511d53f..d5e4ea0 100644
--- a/amdgpu/amdgpu_cs.c
+++ b/amdgpu/amdgpu_cs.c
@@ -379,3 +379,74 @@ int amdgpu_cs_query_fence_status(struct amdgpu_cs_fence *fence,
 
 	return r;
 }
+static int amdgpu_ioctl_wait_fences(struct amdgpu_cs_fence *fences,
+				    uint32_t fence_count,
+				    bool wait_all,
+				    uint64_t timeout_ns,
+				    uint32_t *status)
+{
+	struct drm_amdgpu_fence *drm_fences;
+	amdgpu_device_handle dev = fences[0].context->dev;
+	union drm_amdgpu_wait_fences args;
+	int r;
+	uint32_t i;
+
+	drm_fences = alloca(sizeof(struct drm_amdgpu_fence) * fence_count);
+	for (i = 0; i < fence_count; i++) {
+		drm_fences[i].ctx_id = fences[i].context->id;
+		drm_fences[i].ip_type = fences[i].ip_type;
+		drm_fences[i].ip_instance = fences[i].ip_instance;
+		drm_fences[i].ring = fences[i].ring;
+		drm_fences[i].seq_no = fences[i].fence;
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.in.fences = (uint64_t)(uintptr_t)drm_fences;
+	args.in.fence_count = fence_count;
+	args.in.wait_all = wait_all;
+	args.in.timeout_ns = amdgpu_cs_calculate_timeout(timeout_ns);
+
+	r = drmIoctl(dev->fd, DRM_IOCTL_AMDGPU_WAIT_FENCES, &args);
+	if (r)
+		return -errno;
+
+	*status = args.out.status;
+	return 0;
+}
+
+int amdgpu_cs_wait_fences(struct amdgpu_cs_fence *fences,
+			  uint32_t fence_count,
+			  bool wait_all,
+			  uint64_t timeout_ns,
+			  uint32_t *status)
+{
+	uint32_t ioctl_status = 0;
+	uint32_t i;
+	int r;
+
+	/* Sanity check */
+	if (NULL == fences)
+		return -EINVAL;
+	if (NULL == status)
+		return -EINVAL;
+	if (fence_count <= 0)
+		return -EINVAL;
+	for (i = 0; i < fence_count; i++) {
+		if (NULL == fences[i].context)
+			return -EINVAL;
+		if (fences[i].ip_type >= AMDGPU_HW_IP_NUM)
+			return -EINVAL;
+		if (fences[i].ring >= AMDGPU_CS_MAX_RINGS)
+			return -EINVAL;
+	}
+
+	*status = 0;
+
+	r = amdgpu_ioctl_wait_fences(fences, fence_count, wait_all, timeout_ns,
+				     &ioctl_status);
+
+	if (!r)
+		*status = ioctl_status;
+
+	return r;
+}
diff --git a/include/drm/amdgpu_drm.h b/include/drm/amdgpu_drm.h
index fbdd118..2cbea72 100644
--- a/include/drm/amdgpu_drm.h
+++ b/include/drm/amdgpu_drm.h
@@ -46,6 +46,7 @@
 #define DRM_AMDGPU_WAIT_CS		0x09
 #define DRM_AMDGPU_GEM_OP		0x10
 #define DRM_AMDGPU_GEM_USERPTR		0x11
+#define DRM_AMDGPU_WAIT_FENCES		0x12
 
 #define DRM_IOCTL_AMDGPU_GEM_CREATE	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_CREATE, union drm_amdgpu_gem_create)
 #define DRM_IOCTL_AMDGPU_GEM_MMAP	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_MMAP, union drm_amdgpu_gem_mmap)
@@ -59,6 +60,7 @@
 #define DRM_IOCTL_AMDGPU_WAIT_CS	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_WAIT_CS, union drm_amdgpu_wait_cs)
 #define DRM_IOCTL_AMDGPU_GEM_OP		DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_OP, struct drm_amdgpu_gem_op)
 #define DRM_IOCTL_AMDGPU_GEM_USERPTR	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_USERPTR, struct drm_amdgpu_gem_userptr)
+#define DRM_IOCTL_AMDGPU_WAIT_FENCES	DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_WAIT_FENCES, union drm_amdgpu_wait_fences)
 
 #define AMDGPU_GEM_DOMAIN_CPU		0x1
 #define AMDGPU_GEM_DOMAIN_GTT		0x2
@@ -297,6 +299,31 @@ union drm_amdgpu_wait_cs {
 	struct drm_amdgpu_wait_cs_out out;
 };
 
+struct drm_amdgpu_fence {
+	uint32_t ctx_id;
+	uint32_t ip_type;
+	uint32_t ip_instance;
+	uint32_t ring;
+	uint64_t seq_no;
+};
+
+struct drm_amdgpu_wait_fences_in {
+	/** This points to uint64_t * which points to fences */
+	uint64_t fences;
+	uint32_t fence_count;
+	uint32_t wait_all;
+	uint64_t timeout_ns;
+};
+
+struct drm_amdgpu_wait_fences_out {
+	uint64_t status;
+};
+
+union drm_amdgpu_wait_fences {
+	struct drm_amdgpu_wait_fences_in in;
+	struct drm_amdgpu_wait_fences_out out;
+};
+
 #define AMDGPU_GEM_OP_GET_GEM_CREATE_INFO	0
 #define AMDGPU_GEM_OP_SET_PLACEMENT		1
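
For readers wanting to see how the new entry point is meant to be called, here is a minimal usage sketch against the interface as declared in this patch (it is not part of the diff). It assumes two fences filled in from earlier amdgpu_cs_submit() calls on the same amdgpu_device; the wrapper name, the GFX ring choice and the ctx/seq placeholders are illustrative only, and the includes assume the installed libdrm headers.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#include <amdgpu.h>
#include <amdgpu_drm.h>

/* Wait up to one second for two previously submitted jobs to complete.
 * ctx0/ctx1 and seq0/seq1 are placeholders for the context handles and
 * sequence numbers produced by earlier amdgpu_cs_submit() calls. */
static int wait_for_jobs(amdgpu_context_handle ctx0, uint64_t seq0,
			 amdgpu_context_handle ctx1, uint64_t seq1)
{
	struct amdgpu_cs_fence fences[2] = {
		{ .context = ctx0, .ip_type = AMDGPU_HW_IP_GFX,
		  .ip_instance = 0, .ring = 0, .fence = seq0 },
		{ .context = ctx1, .ip_type = AMDGPU_HW_IP_GFX,
		  .ip_instance = 0, .ring = 0, .fence = seq1 },
	};
	uint32_t status = 0;
	int r;

	/* wait_all = true: return only when both fences have signaled,
	 * or when the 1 s timeout expires. */
	r = amdgpu_cs_wait_fences(fences, 2, true, 1000000000ull, &status);
	if (r < 0)
		return r;	/* negative POSIX error code */

	if (!status)
		fprintf(stderr, "timed out waiting for fences\n");
	return 0;
}

Note that a timeout is not reported as an error: amdgpu_cs_wait_fences() returns 0 and leaves *status at 0, matching the doc comment above, while ioctl failures surface as a negative POSIX error code.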