From patchwork Tue Mar 14 02:26:53 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 13173573
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gustavo Padovan, Daniel Stone,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Sumit Semwal,
	Christian König, Qiang Yu, Steven Price, Alyssa Rosenzweig,
	Rob Herring
Cc: intel-gfx@lists.freedesktop.org, kernel@collabora.com,
	linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org
Date: Tue, 14 Mar 2023 05:26:53 +0300
Message-Id: <20230314022659.1816246-5-dmitry.osipenko@collabora.com>
In-Reply-To: <20230314022659.1816246-1-dmitry.osipenko@collabora.com>
References: <20230314022659.1816246-1-dmitry.osipenko@collabora.com>
Subject: [Intel-gfx] [PATCH v13 04/10] drm/shmem-helper: Switch
 drm_gem_shmem_vmap/vunmap to use pin/unpin

Vmapped pages must be pinned in memory. Previously, getting/putting the
pages implicitly pinned/unpinned them. This will no longer be the case
once the memory shrinker is added: pages_use_count > 0 will no longer
determine whether pages are pinned, the new pages_pin_count will take
over that role. Switch vmap/vunmap to the pin/unpin functions in
preparation for adding memory shrinker support.
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 81d61791f874..1fcb7d850cc7 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -380,7 +380,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 		return 0;
 	}
 
-	ret = drm_gem_shmem_get_pages(shmem);
+	ret = drm_gem_shmem_pin_locked(shmem);
 	if (ret)
 		goto err_zero_use;
 
@@ -403,7 +403,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 
 err_put_pages:
 	if (!obj->import_attach)
-		drm_gem_shmem_put_pages(shmem);
+		drm_gem_shmem_unpin_locked(shmem);
 err_zero_use:
 	shmem->vmap_use_count = 0;
 
@@ -440,7 +440,7 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
 			return;
 
 		vunmap(shmem->vaddr);
-		drm_gem_shmem_put_pages(shmem);
+		drm_gem_shmem_unpin_locked(shmem);
 	}
 
 	shmem->vaddr = NULL;
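
For context, below is a minimal sketch of how pin helpers backed by a
dedicated pages_pin_count could be layered on top of get/put pages. The
drm_gem_shmem_pin_locked/unpin_locked names and the pages_pin_count
field come from this series' description, but the bodies here are
illustrative assumptions, not the actual implementation in
drivers/gpu/drm/drm_gem_shmem_helper.c:

/*
 * Hypothetical sketch: a pin reference count layered on top of
 * get/put pages. "_locked" assumes the caller already holds the GEM
 * object's reservation lock, so plain increments/decrements suffice.
 */
static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
{
	int ret;

	/* Pages already pinned: just take another pin reference. */
	if (shmem->pages_pin_count++ > 0)
		return 0;

	/* First pin: acquire the pages so they stay resident in memory. */
	ret = drm_gem_shmem_get_pages(shmem);
	if (ret)
		shmem->pages_pin_count--;

	return ret;
}

static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
{
	/* Drop a pin reference; release the pages on the last unpin. */
	if (--shmem->pages_pin_count == 0)
		drm_gem_shmem_put_pages(shmem);
}

With this split, a shrinker can treat pages_pin_count == 0 as "safe to
evict" while pages_use_count keeps tracking plain users of the pages.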