From patchwork Fri Nov 30 12:12:55 2012
X-Patchwork-Submitter: Maarten Lankhorst
X-Patchwork-Id: 1825211
From: Maarten Lankhorst <maarten.lankhorst@canonical.com>
To: dri-devel@lists.freedesktop.org, thellstrom@vmware.com
Subject: [PATCH 1/6] drm/ttm: remove lru_lock around ttm_bo_reserve
Date: Fri, 30 Nov 2012 13:12:55 +0100
Message-Id: <1354277580-17958-1-git-send-email-maarten.lankhorst@canonical.com>
In-Reply-To: <50B8A254.904@canonical.com>
References: <50B8A254.904@canonical.com>

There should no longer be any assumptions that reserve will always
succeed with the lru lock held, so the atomic coupling of reserve and
lru-list removal can safely be broken up. As a bonus, this fixes most
of the lockdep annotations for reservations.
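In caller terms, the ordering change is: take the reservation first, with
no spinlock held (so the wait is free to sleep), and take the lru lock
afterwards only for the short list unlink. A minimal sketch of the flow
that ttm_bo_reserve() ends up with (identifiers as in the hunk below;
error paths trimmed):

	ret = ttm_bo_reserve_nolru(bo, interruptible, no_wait,
				   use_sequence, sequence);
	if (likely(ret == 0)) {
		/* lru_lock now only covers the brief LRU unlink */
		spin_lock(&glob->lru_lock);
		put_count = ttm_bo_del_from_lru(bo);
		spin_unlock(&glob->lru_lock);
		/* drop the list references without holding the spinlock */
		ttm_bo_list_ref_sub(bo, put_count, true);
	}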
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
---
 drivers/gpu/drm/ttm/ttm_bo.c           | 54 ++++++++++++++++++++++------------
 drivers/gpu/drm/ttm/ttm_execbuf_util.c |  2 +-
 include/drm/ttm/ttm_bo_driver.h        | 19 +++---------
 3 files changed, 40 insertions(+), 35 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 9028327..61b5cd0 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -213,14 +213,13 @@ int ttm_bo_del_from_lru(struct ttm_buffer_object *bo)
 	return put_count;
 }
 
-int ttm_bo_reserve_locked(struct ttm_buffer_object *bo,
+int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
 			  bool interruptible,
 			  bool no_wait, bool use_sequence, uint32_t sequence)
 {
-	struct ttm_bo_global *glob = bo->glob;
 	int ret;
 
-	while (unlikely(atomic_read(&bo->reserved) != 0)) {
+	while (unlikely(atomic_xchg(&bo->reserved, 1) != 0)) {
 		/**
 		 * Deadlock avoidance for multi-bo reserving.
 		 */
@@ -241,26 +240,36 @@ int ttm_bo_reserve_locked(struct ttm_buffer_object *bo,
 		if (no_wait)
 			return -EBUSY;
 
-		spin_unlock(&glob->lru_lock);
 		ret = ttm_bo_wait_unreserved(bo, interruptible);
-		spin_lock(&glob->lru_lock);
 
 		if (unlikely(ret))
 			return ret;
 	}
 
-	atomic_set(&bo->reserved, 1);
 	if (use_sequence) {
+		bool wake_up = false;
 		/**
 		 * Wake up waiters that may need to recheck for deadlock,
 		 * if we decreased the sequence number.
 		 */
 		if (unlikely((bo->val_seq - sequence < (1 << 31))
 			     || !bo->seq_valid))
-			wake_up_all(&bo->event_queue);
+			wake_up = true;
+
+		/*
+		 * In the worst case with memory ordering these values can be
+		 * seen in the wrong order. However since we call wake_up_all
+		 * in that case, this will hopefully not pose a problem,
+		 * and the worst case would only cause someone to accidentally
+		 * hit -EAGAIN in ttm_bo_reserve when they see old value of
+		 * val_seq. However this would only happen if seq_valid was
+		 * written before val_seq was, and just means some slightly
+		 * increased cpu usage
+		 */
 		bo->val_seq = sequence;
 		bo->seq_valid = true;
+		if (wake_up)
+			wake_up_all(&bo->event_queue);
 	} else {
 		bo->seq_valid = false;
 	}
@@ -289,14 +298,14 @@ int ttm_bo_reserve(struct ttm_buffer_object *bo,
 	int put_count = 0;
 	int ret;
 
-	spin_lock(&glob->lru_lock);
-	ret = ttm_bo_reserve_locked(bo, interruptible, no_wait, use_sequence,
-				    sequence);
-	if (likely(ret == 0))
+	ret = ttm_bo_reserve_nolru(bo, interruptible, no_wait, use_sequence,
+				   sequence);
+	if (likely(ret == 0)) {
+		spin_lock(&glob->lru_lock);
 		put_count = ttm_bo_del_from_lru(bo);
-	spin_unlock(&glob->lru_lock);
-
-	ttm_bo_list_ref_sub(bo, put_count, true);
+		spin_unlock(&glob->lru_lock);
+		ttm_bo_list_ref_sub(bo, put_count, true);
+	}
 
 	return ret;
 }
@@ -510,7 +519,7 @@ static void ttm_bo_cleanup_refs_or_queue(struct ttm_buffer_object *bo)
 	int ret;
 
 	spin_lock(&glob->lru_lock);
-	ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
+	ret = ttm_bo_reserve_nolru(bo, false, true, false, 0);
 
 	spin_lock(&bdev->fence_lock);
 	(void) ttm_bo_wait(bo, false, false, true);
@@ -603,7 +612,7 @@ static int ttm_bo_cleanup_refs_and_unlock(struct ttm_buffer_object *bo,
 		return ret;
 
 	spin_lock(&glob->lru_lock);
-	ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
+	ret = ttm_bo_reserve_nolru(bo, false, true, false, 0);
 
 	/*
 	 * We raced, and lost, someone else holds the reservation now,
@@ -667,7 +676,14 @@ static int ttm_bo_delayed_delete(struct ttm_bo_device *bdev, bool remove_all)
 			kref_get(&nentry->list_kref);
 		}
 
-		ret = ttm_bo_reserve_locked(entry, false, !remove_all, false, 0);
+		ret = ttm_bo_reserve_nolru(entry, false, true, false, 0);
+		if (remove_all && ret) {
+			spin_unlock(&glob->lru_lock);
+			ret = ttm_bo_reserve_nolru(entry, false, false,
+						   false, 0);
+			spin_lock(&glob->lru_lock);
+		}
+
 		if (!ret)
 			ret = ttm_bo_cleanup_refs_and_unlock(entry, false,
 							     !remove_all);
@@ -818,7 +834,7 @@ static int ttm_mem_evict_first(struct ttm_bo_device *bdev,
 
 	spin_lock(&glob->lru_lock);
 	list_for_each_entry(bo, &man->lru, lru) {
-		ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
+		ret = ttm_bo_reserve_nolru(bo, false, true, false, 0);
 		if (!ret)
 			break;
 	}
@@ -1799,7 +1815,7 @@ static int ttm_bo_swapout(struct ttm_mem_shrink *shrink)
 
 	spin_lock(&glob->lru_lock);
 	list_for_each_entry(bo, &glob->swap_lru, swap) {
-		ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
+		ret = ttm_bo_reserve_nolru(bo, false, true, false, 0);
 		if (!ret)
 			break;
 	}
diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
index cd9e452..bd37b5c 100644
--- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c
+++ b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
@@ -153,7 +153,7 @@ retry:
 		struct ttm_buffer_object *bo = entry->bo;
 
 retry_this_bo:
-		ret = ttm_bo_reserve_locked(bo, true, true, true, val_seq);
+		ret = ttm_bo_reserve_nolru(bo, true, true, true, val_seq);
 		switch (ret) {
 		case 0:
 			break;
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index e3a43a4..6fff432 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -790,16 +790,7 @@ extern void ttm_mem_io_unlock(struct ttm_mem_type_manager *man);
  * to make room for a buffer already reserved. (Buffers are reserved before
  * they are evicted). The following algorithm prevents such deadlocks from
  * occurring:
- * 1) Buffers are reserved with the lru spinlock held. Upon successful
- * reservation they are removed from the lru list. This stops a reserved buffer
- * from being evicted. However the lru spinlock is released between the time
- * a buffer is selected for eviction and the time it is reserved.
- * Therefore a check is made when a buffer is reserved for eviction, that it
- * is still the first buffer in the lru list, before it is removed from the
- * list. @check_lru == 1 forces this check. If it fails, the function returns
- * -EINVAL, and the caller should then choose a new buffer to evict and repeat
- * the procedure.
- * 2) Processes attempting to reserve multiple buffers other than for eviction,
+ * Processes attempting to reserve multiple buffers other than for eviction,
  * (typically execbuf), should first obtain a unique 32-bit
  * validation sequence number,
  * and call this function with @use_sequence == 1 and @sequence == the unique
@@ -832,7 +823,7 @@ extern int ttm_bo_reserve(struct ttm_buffer_object *bo,
 
 
 /**
- * ttm_bo_reserve_locked:
+ * ttm_bo_reserve_nolru:
  *
  * @bo: A pointer to a struct ttm_buffer_object.
  * @interruptible: Sleep interruptible if waiting.
@@ -840,9 +831,7 @@ extern int ttm_bo_reserve(struct ttm_buffer_object *bo,
  * @use_sequence: If @bo is already reserved, Only sleep waiting for
  * it to become unreserved if @sequence < (@bo)->sequence.
  *
- * Must be called with struct ttm_bo_global::lru_lock held,
- * and will not remove reserved buffers from the lru lists.
- * The function may release the LRU spinlock if it needs to sleep.
+ * Will not remove reserved buffers from the lru lists.
  * Otherwise identical to ttm_bo_reserve.
  *
  * Returns:
@@ -855,7 +844,7 @@ extern int ttm_bo_reserve(struct ttm_buffer_object *bo,
 * -EDEADLK: Bo already reserved using @sequence. This error code will only
 * be returned if @use_sequence is set to true.
 */
-extern int ttm_bo_reserve_locked(struct ttm_buffer_object *bo,
+extern int ttm_bo_reserve_nolru(struct ttm_buffer_object *bo,
 				 bool interruptible,
 				 bool no_wait, bool use_sequence,
 				 uint32_t sequence);
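For callers that must eventually take the reservation, the same rule
(never sleep for a reservation while holding lru_lock) leads to the
try-then-block fallback added to ttm_bo_delayed_delete() above; a
condensed sketch, with the surrounding loop and error handling trimmed:

	/* opportunistic trylock while walking the list under lru_lock */
	ret = ttm_bo_reserve_nolru(entry, false, true, false, 0);
	if (remove_all && ret) {
		/* must succeed: drop the spinlock before blocking */
		spin_unlock(&glob->lru_lock);
		ret = ttm_bo_reserve_nolru(entry, false, false, false, 0);
		spin_lock(&glob->lru_lock);
	}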