From patchwork Wed Nov 28 11:25:39 2012
X-Patchwork-Submitter: Maarten Lankhorst
X-Patchwork-Id: 1815701
From: Maarten Lankhorst
To: thellstrom@vmware.com, dri-devel@lists.freedesktop.org
Subject: [PATCH 1/6] drm/ttm: change fence_lock to inner lock
Date: Wed, 28 Nov 2012 12:25:39 +0100
Message-Id: <1354101944-10455-1-git-send-email-maarten.lankhorst@canonical.com>
List-Id: Direct Rendering Infrastructure - Development

This requires changing the order in ttm_bo_cleanup_refs_or_queue to take
the reservation first, as there is otherwise no race-free way to take the
lru lock before the fence_lock.
Signed-off-by: Maarten Lankhorst
Reviewed-by: Thomas Hellstrom
---
 drivers/gpu/drm/ttm/ttm_bo.c           | 31 +++++++++++--------------------
 drivers/gpu/drm/ttm/ttm_execbuf_util.c |  4 ++--
 2 files changed, 13 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 7426fe5..202fc20 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -500,27 +500,17 @@ static void ttm_bo_cleanup_refs_or_queue(struct ttm_buffer_object *bo)
 {
 	struct ttm_bo_device *bdev = bo->bdev;
 	struct ttm_bo_global *glob = bo->glob;
-	struct ttm_bo_driver *driver;
+	struct ttm_bo_driver *driver = bdev->driver;
 	void *sync_obj = NULL;
 	int put_count;
 	int ret;
 
+	spin_lock(&glob->lru_lock);
+	ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
+
 	spin_lock(&bdev->fence_lock);
 	(void) ttm_bo_wait(bo, false, false, true);
-	if (!bo->sync_obj) {
-
-		spin_lock(&glob->lru_lock);
-
-		/**
-		 * Lock inversion between bo:reserve and bdev::fence_lock here,
-		 * but that's OK, since we're only trylocking.
-		 */
-
-		ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
-
-		if (unlikely(ret == -EBUSY))
-			goto queue;
-
+	if (!ret && !bo->sync_obj) {
 		spin_unlock(&bdev->fence_lock);
 		put_count = ttm_bo_del_from_lru(bo);
@@ -530,18 +520,19 @@ static void ttm_bo_cleanup_refs_or_queue(struct ttm_buffer_object *bo)
 		ttm_bo_list_ref_sub(bo, put_count, true);
 
 		return;
-	} else {
-		spin_lock(&glob->lru_lock);
 	}
-queue:
-	driver = bdev->driver;
 	if (bo->sync_obj)
 		sync_obj = driver->sync_obj_ref(bo->sync_obj);
+	spin_unlock(&bdev->fence_lock);
+
+	if (!ret) {
+		atomic_set(&bo->reserved, 0);
+		wake_up_all(&bo->event_queue);
+	}
 
 	kref_get(&bo->list_kref);
 	list_add_tail(&bo->ddestroy, &bdev->ddestroy);
 	spin_unlock(&glob->lru_lock);
-	spin_unlock(&bdev->fence_lock);
 
 	if (sync_obj) {
 		driver->sync_obj_flush(sync_obj);
diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
index 1986d00..cd9e452 100644
--- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c
+++ b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
@@ -213,8 +213,8 @@ void ttm_eu_fence_buffer_objects(struct list_head *list, void *sync_obj)
 	driver = bdev->driver;
 	glob = bo->glob;
 
-	spin_lock(&bdev->fence_lock);
 	spin_lock(&glob->lru_lock);
+	spin_lock(&bdev->fence_lock);
 
 	list_for_each_entry(entry, list, head) {
 		bo = entry->bo;
@@ -223,8 +223,8 @@ void ttm_eu_fence_buffer_objects(struct list_head *list, void *sync_obj)
 		ttm_bo_unreserve_locked(bo);
 		entry->reserved = false;
 	}
-	spin_unlock(&glob->lru_lock);
 	spin_unlock(&bdev->fence_lock);
+	spin_unlock(&glob->lru_lock);
 
 	list_for_each_entry(entry, list, head) {
 		if (entry->old_sync_obj)