From patchwork Wed Nov 28 11:25:43 2012
X-Patchwork-Submitter: Maarten Lankhorst
X-Patchwork-Id: 1815751
From: Maarten Lankhorst
To: thellstrom@vmware.com, dri-devel@lists.freedesktop.org
Subject: [PATCH 5/6] drm/ttm: cope with reserved buffers on lru list in ttm_mem_evict_first, v2
Date: Wed, 28 Nov 2012 12:25:43 +0100
Message-Id: <1354101944-10455-5-git-send-email-maarten.lankhorst@canonical.com>
In-Reply-To: <1354101944-10455-1-git-send-email-maarten.lankhorst@canonical.com>
References: <1354101944-10455-1-git-send-email-maarten.lankhorst@canonical.com>

Replace the goto loop with a simple list_for_each_entry() loop, and only
run the delayed destroy cleanup if we can reserve the buffer first.

No race occurs, since the lru lock is never dropped any more. An empty
list and a list full of unreservable buffers both cause -EBUSY to be
returned, which is identical to the previous behaviour, because
previously buffers on the lru list were always guaranteed to be
reservable.

This should work, since ttm currently guarantees that items on the lru
are always reservable, and reserving a buffer blockingly while some
other bo is held is enough to run into a deadlock.
Currently this is not a concern, since removal from the lru list and
reservation are always done atomically; but when this guarantee no
longer holds, we have to handle this situation or end up with possible
deadlocks.

Signed-off-by: Maarten Lankhorst
---
 drivers/gpu/drm/ttm/ttm_bo.c | 42 +++++++++++-------------------------------
 1 file changed, 11 insertions(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 74b296f..ef7b2ad 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -793,49 +793,29 @@ static int ttm_mem_evict_first(struct ttm_bo_device *bdev,
 	struct ttm_bo_global *glob = bdev->glob;
 	struct ttm_mem_type_manager *man = &bdev->man[mem_type];
 	struct ttm_buffer_object *bo;
-	int ret, put_count = 0;
+	int ret = -EBUSY, put_count;
 
-retry:
 	spin_lock(&glob->lru_lock);
-	if (list_empty(&man->lru)) {
+	list_for_each_entry(bo, &man->lru, lru) {
+		ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
+		if (!ret)
+			break;
+	}
+
+	if (ret) {
 		spin_unlock(&glob->lru_lock);
-		return -EBUSY;
+		return ret;
 	}
 
-	bo = list_first_entry(&man->lru, struct ttm_buffer_object, lru);
 	kref_get(&bo->list_kref);
 
 	if (!list_empty(&bo->ddestroy)) {
-		ret = ttm_bo_reserve_locked(bo, interruptible, no_wait_reserve, false, 0);
-		if (!ret)
-			ret = ttm_bo_cleanup_refs_and_unlock(bo, interruptible,
-						     no_wait_gpu);
-		else
-			spin_unlock(&glob->lru_lock);
-
+		ret = ttm_bo_cleanup_refs_and_unlock(bo, interruptible,
+						     no_wait_gpu);
 		kref_put(&bo->list_kref, ttm_bo_release_list);
-
 		return ret;
 	}
 
-	ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
-
-	if (unlikely(ret == -EBUSY)) {
-		spin_unlock(&glob->lru_lock);
-		if (likely(!no_wait_reserve))
-			ret = ttm_bo_wait_unreserved(bo, interruptible);
-
-		kref_put(&bo->list_kref, ttm_bo_release_list);
-
-		/**
-		 * We *need* to retry after releasing the lru lock.
-		 */
-
-		if (unlikely(ret != 0))
-			return ret;
-		goto retry;
-	}
-
 	put_count = ttm_bo_del_from_lru(bo);
 	spin_unlock(&glob->lru_lock);
 
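---

For anyone reading along who does not have the TTM tree in front of them,
below is a minimal, standalone sketch of the trylock-scan pattern this
patch introduces. It is not TTM code: struct buffer, evict_first() and
the pthread locks are made-up stand-ins for struct ttm_buffer_object,
ttm_mem_evict_first(), the bo reservation and glob->lru_lock. It only
shows the shape of the new loop: walk the lru under the list lock,
try-reserve each entry, and let an empty list and an all-reserved list
both fall out as -EBUSY.

/*
 * Hypothetical, standalone sketch -- NOT ttm code.
 * pthread_mutex_trylock() stands in for ttm_bo_reserve_locked(),
 * lru_lock for glob->lru_lock.
 */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

struct buffer {
	pthread_mutex_t reserve;	/* stand-in for the bo reservation */
	struct buffer *next;		/* stand-in for the lru list link */
	int id;
};

static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;
static struct buffer *lru_head;		/* least recently used entry first */

static int evict_first(void)
{
	struct buffer *bo;
	int ret = -EBUSY;

	pthread_mutex_lock(&lru_lock);
	/* Scan for the first reservable entry, as the new loop does. */
	for (bo = lru_head; bo; bo = bo->next) {
		if (pthread_mutex_trylock(&bo->reserve) == 0) {
			ret = 0;
			break;
		}
	}
	if (ret) {
		/* Empty list and all-reserved list both return -EBUSY. */
		pthread_mutex_unlock(&lru_lock);
		return ret;
	}

	/* Entry is reserved; drop the list lock and "evict" it. */
	pthread_mutex_unlock(&lru_lock);
	printf("evicting buffer %d\n", bo->id);
	pthread_mutex_unlock(&bo->reserve);	/* unreserve */
	return 0;
}

int main(void)
{
	struct buffer a = { PTHREAD_MUTEX_INITIALIZER, NULL, 0 };
	struct buffer b = { PTHREAD_MUTEX_INITIALIZER, &a, 1 };

	lru_head = &b;
	pthread_mutex_lock(&a.reserve);		/* reserve both buffers */
	pthread_mutex_lock(&b.reserve);
	printf("all reserved: %d\n", evict_first());	/* -EBUSY */
	pthread_mutex_unlock(&b.reserve);
	printf("one reservable: %d\n", evict_first());	/* evicts buffer 1 */
	return 0;
}

The property the commit message calls out is visible here: the scan must
use a trylock, because blocking on a reservation while the lru lock (or
another buffer's reservation) is held is exactly what opens the door to
deadlocks.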