From patchwork Sun Oct 29 23:01:51 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 13439940
From: Dmitry Osipenko
To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu
, Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Christian König , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen
Subject: [PATCH v18 12/26] drm/shmem-helper: Make drm_gem_shmem_get_pages() public
Date: Mon, 30 Oct 2023 02:01:51 +0300
Message-ID: <20231029230205.93277-13-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
Cc: kernel@collabora.com, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, virtualization@lists.linux-foundation.org

We're going to move away from having an implicit get_pages() done by
get_pages_sgt() in order to simplify refcount handling. Drivers will
manage get/put_pages() by themselves. Expose drm_gem_shmem_get_pages()
as a public drm-shmem API.
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 10 +++++++++-
 include/drm/drm_gem_shmem_helper.h     |  1 +
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 24ff2b99e75b..ca6f422c0dfc 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -227,7 +227,14 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
 
-static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+/*
+ * drm_gem_shmem_get_pages - Increase use count on the backing pages for a shmem GEM object
+ * @shmem: shmem GEM object
+ *
+ * This function increases the use count and allocates the backing pages
+ * when the use count is zero.
+ */
+int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 {
 	int ret;
 
@@ -240,6 +247,7 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 
 	return ret;
 }
+EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages);
 
 static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
 {
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index e7b3f4c02bf5..45cd293e10a4 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -110,6 +110,7 @@ struct drm_gem_shmem_object {
 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev,
 						  size_t size);
 void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
+int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);