From patchwork Sun Aug 27 17:54:40 2023
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
 Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
 Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen,
 Will Deacon, Peter Zijlstra, Boqun Feng, Mark Rutland
Cc: intel-gfx@lists.freedesktop.org, kernel@collabora.com,
 linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
 virtualization@lists.linux-foundation.org
Date: Sun, 27 Aug 2023 20:54:40 +0300
Message-ID: <20230827175449.1766701-15-dmitry.osipenko@collabora.com>
In-Reply-To: <20230827175449.1766701-1-dmitry.osipenko@collabora.com>
References: <20230827175449.1766701-1-dmitry.osipenko@collabora.com>
Subject: [Intel-gfx] [PATCH v15 14/23] drm/shmem-helper: Add and use lockless
 drm_gem_shmem_get_pages()

Add a lockless drm_gem_shmem_get_pages() helper that skips taking the
reservation lock when pages_use_count is non-zero, leveraging the
atomicity of the kref counter. Make drm_gem_shmem_mmap() use the new
helper.
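For illustration, a minimal sketch of the get-or-allocate pattern this
relies on, outside of the shmem helper code (hypothetical names;
example_alloc_pages() is a stand-in, and a plain mutex stands in for the
dma-resv lock):

	/*
	 * kref_get_unless_zero() atomically takes a reference only if
	 * the count is already non-zero, i.e. only if pages were fully
	 * set up by a prior holder, so the fast path never observes
	 * unallocated pages and never needs the lock.
	 */
	struct example_obj {
		struct kref pages_use_count;
		struct mutex lock;	/* stands in for the dma-resv lock */
	};

	static int example_get_pages(struct example_obj *obj)
	{
		int ret;

		/* Fast path: pages exist, take a reference locklessly. */
		if (kref_get_unless_zero(&obj->pages_use_count))
			return 0;

		/*
		 * Slow path: serialize allocation and the 0 -> 1 count
		 * transition. The locked helper must re-check the count
		 * under the lock, since another thread may have
		 * allocated the pages between the failed fast path and
		 * mutex_lock().
		 */
		mutex_lock(&obj->lock);
		ret = example_alloc_pages(obj);	/* kref_init() on success */
		mutex_unlock(&obj->lock);

		return ret;
	}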
Suggested-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 5a2e37b3e51d..f386289c24fc 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -234,6 +234,20 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
 
+static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+{
+	int ret;
+
+	if (kref_get_unless_zero(&shmem->pages_use_count))
+		return 0;
+
+	dma_resv_lock(shmem->base.resv, NULL);
+	ret = drm_gem_shmem_get_pages_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
+	return ret;
+}
+
 static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
 {
 	int ret;
@@ -616,10 +630,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
 		return ret;
 	}
 
-	dma_resv_lock(shmem->base.resv, NULL);
-	ret = drm_gem_shmem_get_pages_locked(shmem);
-	dma_resv_unlock(shmem->base.resv);
-
+	ret = drm_gem_shmem_get_pages(shmem);
 	if (ret)
 		return ret;