From patchwork Mon Dec 10 09:16:58 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Maarten Lankhorst
X-Patchwork-Id: 1856521
From: Maarten Lankhorst
To: dri-devel@lists.freedesktop.org, thellstrom@vmware.com
Subject: [PATCH 4/7] drm/ttm: add ttm_bo_reserve_slowpath
Date: Mon, 10 Dec 2012 10:16:58 +0100
Message-Id: <1355131021-11611-4-git-send-email-maarten.lankhorst@canonical.com>
X-Mailer: git-send-email 1.8.0
In-Reply-To: <1355131021-11611-1-git-send-email-maarten.lankhorst@canonical.com>
References: <1355131021-11611-1-git-send-email-maarten.lankhorst@canonical.com>
List-Id: Direct Rendering Infrastructure - Development

Instead of dropping everything, waiting for the bo to be unreserved and
trying again, a better strategy is to do a blocking wait. This maps much
more naturally onto a mutex_lock-like call.
Signed-off-by: Maarten Lankhorst
Reviewed-by: Jerome Glisse
---
 drivers/gpu/drm/ttm/ttm_bo.c    | 47 +++++++++++++++++++++++++++++++++++++++++
 include/drm/ttm/ttm_bo_driver.h | 30 ++++++++++++++++++++++++++
 2 files changed, 77 insertions(+)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 61b5cd0..174b325 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -310,6 +310,53 @@ int ttm_bo_reserve(struct ttm_buffer_object *bo,
 	return ret;
 }
 
+int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
+				  bool interruptible, uint32_t sequence)
+{
+	bool wake_up = false;
+	int ret;
+
+	while (unlikely(atomic_xchg(&bo->reserved, 1) != 0)) {
+		WARN_ON(bo->seq_valid && sequence == bo->val_seq);
+
+		ret = ttm_bo_wait_unreserved(bo, interruptible);
+
+		if (unlikely(ret))
+			return ret;
+	}
+
+	if ((bo->val_seq - sequence < (1 << 31)) || !bo->seq_valid)
+		wake_up = true;
+
+	/**
+	 * Wake up waiters that may need to recheck for deadlock,
+	 * if we decreased the sequence number.
+	 */
+	bo->val_seq = sequence;
+	bo->seq_valid = true;
+	if (wake_up)
+		wake_up_all(&bo->event_queue);
+
+	return 0;
+}
+
+int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
+			    bool interruptible, uint32_t sequence)
+{
+	struct ttm_bo_global *glob = bo->glob;
+	int put_count, ret;
+
+	ret = ttm_bo_reserve_slowpath_nolru(bo, interruptible, sequence);
+	if (likely(!ret)) {
+		spin_lock(&glob->lru_lock);
+		put_count = ttm_bo_del_from_lru(bo);
+		spin_unlock(&glob->lru_lock);
+		ttm_bo_list_ref_sub(bo, put_count, true);
+	}
+	return ret;
+}
+EXPORT_SYMBOL(ttm_bo_reserve_slowpath);
+
 void ttm_bo_unreserve_locked(struct ttm_buffer_object *bo)
 {
 	ttm_bo_add_to_lru(bo);
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index 6fff432..5af71af 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -821,6 +821,36 @@ extern int ttm_bo_reserve(struct ttm_buffer_object *bo,
 			  bool interruptible,
 			  bool no_wait, bool use_sequence, uint32_t sequence);
 
+/**
+ * ttm_bo_reserve_slowpath_nolru:
+ * @bo: A pointer to a struct ttm_buffer_object.
+ * @interruptible: Sleep interruptible if waiting.
+ * @sequence: Set (@bo)->sequence to this value after lock
+ *
+ * This is called after ttm_bo_reserve returns -EAGAIN and we backed off
+ * from all our other reservations. Because there are no other reservations
+ * held by us, this function cannot deadlock any more.
+ *
+ * Will not remove reserved buffers from the lru lists.
+ * Otherwise identical to ttm_bo_reserve_slowpath.
+ */
+extern int ttm_bo_reserve_slowpath_nolru(struct ttm_buffer_object *bo,
+					 bool interruptible,
+					 uint32_t sequence);
+
+
+/**
+ * ttm_bo_reserve_slowpath:
+ * @bo: A pointer to a struct ttm_buffer_object.
+ * @interruptible: Sleep interruptible if waiting.
+ * @sequence: Set (@bo)->sequence to this value after lock
+ *
+ * This is called after ttm_bo_reserve returns -EAGAIN and we backed off
+ * from all our other reservations. Because there are no other reservations
+ * held by us, this function cannot deadlock any more.
+ */
+extern int ttm_bo_reserve_slowpath(struct ttm_buffer_object *bo,
+				   bool interruptible, uint32_t sequence);
 
 /**
  * ttm_bo_reserve_nolru:
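
For reference, a caller would be expected to use the new call roughly as in
the sketch below. This is only an illustrative sketch and not part of the
patch: reserve_pair() is a hypothetical helper, and only ttm_bo_reserve(),
ttm_bo_unreserve() and the new ttm_bo_reserve_slowpath() are assumed, with
the signatures visible in this series.

/*
 * Illustrative sketch only, not part of this patch: how a driver that
 * reserves two buffers with deadlock avoidance might fall back to the
 * new blocking slowpath.  reserve_pair() is a hypothetical helper.
 */
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>

static int reserve_pair(struct ttm_buffer_object *a,
			struct ttm_buffer_object *b,
			uint32_t sequence)
{
	int ret;

	ret = ttm_bo_reserve(a, true, false, true, sequence);
	if (unlikely(ret))
		return ret;

	ret = ttm_bo_reserve(b, true, false, true, sequence);
	if (likely(!ret))
		return 0;

	/* Back off completely before doing any blocking wait. */
	ttm_bo_unreserve(a);
	if (ret != -EAGAIN)
		return ret;

	/* We hold no reservations here, so blocking cannot deadlock. */
	ret = ttm_bo_reserve_slowpath(b, true, sequence);
	if (unlikely(ret))
		return ret;

	/*
	 * b stays reserved; take a again.  If a were to report -EAGAIN
	 * as well, the whole backoff would have to be repeated, which a
	 * real caller handles by looping over its validation list.
	 */
	ret = ttm_bo_reserve(a, true, false, true, sequence);
	if (unlikely(ret))
		ttm_bo_unreserve(b);
	return ret;
}

The slowpath keeps the mutex_lock-like semantics described in the commit
message: the caller simply blocks on the contended buffer instead of
spinning on wait-and-retry.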