From patchwork Mon Nov 5 13:55:44 2012
X-Patchwork-Submitter: Thomas Hellstrom <thellstrom@vmware.com>
X-Patchwork-Id: 1697711
From: Thomas Hellstrom <thellstrom@vmware.com>
To: airlied@gmail.com, airlied@redhat.com
Subject: [PATCH 4/4] drm/ttm: Optimize reservation slightly
Date: Mon, 5 Nov 2012 14:55:44 +0100
Message-Id: <1352123744-13269-5-git-send-email-thellstrom@vmware.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1352123744-13269-1-git-send-email-thellstrom@vmware.com>
References: <1352123744-13269-1-git-send-email-thellstrom@vmware.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org

Reservation locking currently always takes place under the LRU spinlock.
Hence, strictly speaking, there is no need for an atomic_cmpxchg call; we
can use atomic_read followed by atomic_set, since nobody else will ever
reserve without the LRU spinlock held. At least on Intel, this should
remove a locked bus cycle on a successful reserve.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
---
 drivers/gpu/drm/ttm/ttm_bo.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index bf6e4b5..46008ea 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -220,7 +220,7 @@ int ttm_bo_reserve_locked(struct ttm_buffer_object *bo,
 	struct ttm_bo_global *glob = bo->glob;
 	int ret;
 
-	while (unlikely(atomic_cmpxchg(&bo->reserved, 0, 1) != 0)) {
+	while (unlikely(atomic_read(&bo->reserved) != 0)) {
 		/**
 		 * Deadlock avoidance for multi-bo reserving.
 		 */
@@ -249,6 +249,7 @@ int ttm_bo_reserve_locked(struct ttm_buffer_object *bo,
 			return ret;
 	}
 
+	atomic_set(&bo->reserved, 1);
 	if (use_sequence) {
 		/**
 		 * Wake up waiters that may need to recheck for deadlock,
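
For readers who want to see the locking argument outside the TTM code, here is
a minimal user-space sketch of the same pattern. It is not kernel code: the
names reserve_slot, unreserve_slot, slot_lock and reserved are invented for the
illustration and only stand in for ttm_bo_reserve_locked(), ttm_bo_unreserve(),
glob->lru_lock and bo->reserved. The point it demonstrates is the one in the
commit message: when every writer of the flag takes the same lock, a plain load
followed by a store cannot race with another reserver, so the read-modify-write
cmpxchg (and the locked bus cycle it implies on x86) is not needed on the
successful path.

/* Simplified user-space analogue of the reserve fast path (illustration only). */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the LRU spinlock and bo->reserved. */
static pthread_mutex_t slot_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int reserved;	/* read anywhere, written only with slot_lock held */

/* Try to reserve; returns true on success, false if already reserved. */
static bool reserve_slot(void)
{
	bool ok = false;

	pthread_mutex_lock(&slot_lock);
	if (atomic_load(&reserved) == 0) {	/* plain read, no locked RMW */
		atomic_store(&reserved, 1);	/* safe: all writers hold slot_lock */
		ok = true;
	}
	pthread_mutex_unlock(&slot_lock);
	return ok;
}

static void unreserve_slot(void)
{
	pthread_mutex_lock(&slot_lock);
	atomic_store(&reserved, 0);	/* the release path takes the same lock */
	pthread_mutex_unlock(&slot_lock);
}

int main(void)
{
	printf("first reserve:  %s\n", reserve_slot() ? "ok" : "busy");
	printf("second reserve: %s\n", reserve_slot() ? "ok" : "busy");
	unreserve_slot();
	return 0;
}

In the patch itself the same guarantee comes from glob->lru_lock, which is held
around ttm_bo_reserve_locked(), and the unreserve path likewise clears
bo->reserved with that lock held; that is what makes the read/set pair safe
even though it is no longer a single atomic operation.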