From patchwork Fri Nov 15 08:24:31 2013
X-Patchwork-Submitter: Thomas Hellstrom
X-Patchwork-Id: 3187141
From: Thomas Hellstrom <thellstrom@vmware.com>
To: dri-devel@lists.freedesktop.org, linux-graphics-maintainer@vmware.com
Cc: jglisse@redhat.com, Thomas Hellstrom <thellstrom@vmware.com>
Subject: [PATCH 1/2] drm/ttm: Allow execbuf util reserves without ticket
Date: Fri, 15 Nov 2013 00:24:31 -0800
Message-Id: <1384503872-3675-2-git-send-email-thellstrom@vmware.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1384503872-3675-1-git-send-email-thellstrom@vmware.com>
References: <1384503872-3675-1-git-send-email-thellstrom@vmware.com>

If no reservation ticket is given to the execbuf reservation utilities,
try reservation with non-blocking semantics.
This is intended for eviction paths that use the execbuf reservation
utilities for convenience rather than for deadlock avoidance.
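
[Illustration only, not part of the patch: a minimal caller-side sketch of the
two reserve modes this change enables. The helper functions and error-handling
comments below are my own assumptions; only the ttm_eu_* calls and their
two-argument signatures come from TTM as patched here.]

#include <drm/ttm/ttm_execbuf_util.h>

/* Hypothetical eviction-path helper: no ticket, per-buffer try-reserve. */
static int example_evict_reserve(struct list_head *val_list)
{
	int ret;

	/* NULL ticket: each buffer is reserved with non-blocking
	 * semantics; on contention the list is backed off and an error
	 * (typically -EBUSY) is returned instead of sleeping.
	 */
	ret = ttm_eu_reserve_buffers(NULL, val_list);
	if (ret)
		return ret;

	/* ... evict / validate the now-reserved buffers ... */

	/* Unreserve without fencing; a NULL ticket is now accepted here too. */
	ttm_eu_backoff_reservation(NULL, val_list);
	return 0;
}

/* Hypothetical execbuf-path helper: blocking, deadlock-avoiding reserve. */
static int example_execbuf_reserve(struct ww_acquire_ctx *ticket,
				   struct list_head *val_list)
{
	return ttm_eu_reserve_buffers(ticket, val_list);
}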
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
---
 drivers/gpu/drm/ttm/ttm_execbuf_util.c | 32 +++++++++++++++++++-------------
 include/drm/ttm/ttm_execbuf_util.h     |  3 ++-
 2 files changed, 21 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
index 6c91178..479e941 100644
--- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c
+++ b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
@@ -32,8 +32,7 @@
 #include <linux/sched.h>
 #include <linux/module.h>
 
-static void ttm_eu_backoff_reservation_locked(struct list_head *list,
-					      struct ww_acquire_ctx *ticket)
+static void ttm_eu_backoff_reservation_locked(struct list_head *list)
 {
 	struct ttm_validate_buffer *entry;
 
@@ -93,8 +92,9 @@ void ttm_eu_backoff_reservation(struct ww_acquire_ctx *ticket,
 	entry = list_first_entry(list, struct ttm_validate_buffer, head);
 	glob = entry->bo->glob;
 	spin_lock(&glob->lru_lock);
-	ttm_eu_backoff_reservation_locked(list, ticket);
-	ww_acquire_fini(ticket);
+	ttm_eu_backoff_reservation_locked(list);
+	if (ticket)
+		ww_acquire_fini(ticket);
 	spin_unlock(&glob->lru_lock);
 }
 EXPORT_SYMBOL(ttm_eu_backoff_reservation);
@@ -130,7 +130,8 @@ int ttm_eu_reserve_buffers(struct ww_acquire_ctx *ticket,
 	entry = list_first_entry(list, struct ttm_validate_buffer, head);
 	glob = entry->bo->glob;
 
-	ww_acquire_init(ticket, &reservation_ww_class);
+	if (ticket)
+		ww_acquire_init(ticket, &reservation_ww_class);
 retry:
 	list_for_each_entry(entry, list, head) {
 		struct ttm_buffer_object *bo = entry->bo;
@@ -139,16 +140,17 @@ retry:
 		if (entry->reserved)
 			continue;
 
-
-		ret = ttm_bo_reserve_nolru(bo, true, false, true, ticket);
+		ret = ttm_bo_reserve_nolru(bo, true, (ticket == NULL), true,
+					   ticket);
 
 		if (ret == -EDEADLK) {
 			/* uh oh, we lost out, drop every reservation and try
 			 * to only reserve this buffer, then start over if
 			 * this succeeds.
 			 */
+			BUG_ON(ticket == NULL);
 			spin_lock(&glob->lru_lock);
-			ttm_eu_backoff_reservation_locked(list, ticket);
+			ttm_eu_backoff_reservation_locked(list);
 			spin_unlock(&glob->lru_lock);
 			ttm_eu_list_ref_sub(list);
 			ret = ww_mutex_lock_slow_interruptible(&bo->resv->lock,
@@ -175,7 +177,8 @@ retry:
 		}
 	}
 
-	ww_acquire_done(ticket);
+	if (ticket)
+		ww_acquire_done(ticket);
 	spin_lock(&glob->lru_lock);
 	ttm_eu_del_from_lru_locked(list);
 	spin_unlock(&glob->lru_lock);
@@ -184,12 +187,14 @@ retry:
 
 err:
 	spin_lock(&glob->lru_lock);
-	ttm_eu_backoff_reservation_locked(list, ticket);
+	ttm_eu_backoff_reservation_locked(list);
 	spin_unlock(&glob->lru_lock);
 	ttm_eu_list_ref_sub(list);
 err_fini:
-	ww_acquire_done(ticket);
-	ww_acquire_fini(ticket);
+	if (ticket) {
+		ww_acquire_done(ticket);
+		ww_acquire_fini(ticket);
+	}
 	return ret;
 }
 EXPORT_SYMBOL(ttm_eu_reserve_buffers);
@@ -224,7 +229,8 @@ void ttm_eu_fence_buffer_objects(struct ww_acquire_ctx *ticket,
 	}
 	spin_unlock(&bdev->fence_lock);
 	spin_unlock(&glob->lru_lock);
-	ww_acquire_fini(ticket);
+	if (ticket)
+		ww_acquire_fini(ticket);
 
 	list_for_each_entry(entry, list, head) {
 		if (entry->old_sync_obj)
diff --git a/include/drm/ttm/ttm_execbuf_util.h b/include/drm/ttm/ttm_execbuf_util.h
index ec8a1d3..16db7d0 100644
--- a/include/drm/ttm/ttm_execbuf_util.h
+++ b/include/drm/ttm/ttm_execbuf_util.h
@@ -70,7 +70,8 @@ extern void ttm_eu_backoff_reservation(struct ww_acquire_ctx *ticket,
 /**
  * function ttm_eu_reserve_buffers
  *
- * @ticket:  [out] ww_acquire_ctx returned by call.
+ * @ticket:  [out] ww_acquire_ctx filled in by call, or NULL if only
+ *           non-blocking reserves should be tried.
  * @list:    thread private list of ttm_validate_buffer structs.
  *
  * Tries to reserve bos pointed to by the list entries for validation.
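
[Reviewer note, illustration only: the behavioural switch inside
ttm_eu_reserve_buffers reduces to the no_wait argument now passed to
ttm_bo_reserve_nolru, roughly as below; the comment is mine, based on my
reading of the no_wait path.]

	ret = ttm_bo_reserve_nolru(bo, true /* interruptible */,
				   (ticket == NULL) /* no_wait */,
				   true /* use_ticket */, ticket);
	/*
	 * With no_wait == true the reserve is a trylock on the buffer's
	 * ww_mutex, so a contended buffer fails immediately (typically
	 * -EBUSY) instead of sleeping, and the -EDEADLK slowpath, which
	 * requires a ticket, is never entered; hence the added
	 * BUG_ON(ticket == NULL) in that branch.
	 */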