From patchwork Tue Feb 18 23:25:31 2025
From: Adrián Larumbe
To: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Boris Brezillon, Steven Price, Rob Herring, Hugh Dickins
Cc: kernel@collabora.com, Adrián Larumbe, linux-mm@kvack.org
Subject: [RFC PATCH 1/7] shmem: Introduce non-blocking allocation of shmem pages
Date: Tue, 18 Feb 2025 23:25:31 +0000
Message-ID: <20250218232552.3450939-2-adrian.larumbe@collabora.com>
In-Reply-To: <20250218232552.3450939-1-adrian.larumbe@collabora.com>
With the future goal of preventing deadlocks with the shrinker when reclaiming
GEM-allocated memory, introduce a variant of shmem_read_mapping_page_gfp() that
does not sleep when not enough memory is available, and therefore cannot end up
invoking the shrinker of the same driver that asked for the pages.

Signed-off-by: Adrián Larumbe
---
 include/linux/shmem_fs.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 0b273a7b9f01..5735728aeda2 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -167,6 +167,13 @@ static inline struct page *shmem_read_mapping_page(
 			mapping_gfp_mask(mapping));
 }
 
+static inline struct page *shmem_read_mapping_page_nonblocking(
+	struct address_space *mapping, pgoff_t index)
+{
+	return shmem_read_mapping_page_gfp(mapping, index,
+					   mapping_gfp_mask(mapping) | GFP_NOWAIT);
+}
+
 static inline bool shmem_file(struct file *file)
 {
 	if (!IS_ENABLED(CONFIG_SHMEM))
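For illustration only (not part of the patch), a driver-side caller that cannot
afford to recurse into reclaim could use the new helper as sketched below;
"mapping", "index" and the retry policy are placeholders:

	struct page *page;

	page = shmem_read_mapping_page_nonblocking(mapping, index);
	if (IS_ERR(page)) {
		/*
		 * No page could be allocated without sleeping; bail out and
		 * let the caller retry from a context where blocking (and
		 * hence direct reclaim through the shrinker) is allowed.
		 */
		return PTR_ERR(page);
	}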
From patchwork Tue Feb 18 23:25:32 2025
From: Adrián Larumbe
To: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Boris Brezillon, Steven Price, Rob Herring, Andrew Morton
Cc: kernel@collabora.com, Adrián Larumbe
Subject: [RFC PATCH 2/7] lib/scatterlist.c: Support constructing sgt from page xarray
Date: Tue, 18 Feb 2025 23:25:32 +0000
Message-ID: <20250218232552.3450939-3-adrian.larumbe@collabora.com>
In-Reply-To: <20250218232552.3450939-1-adrian.larumbe@collabora.com>

In preparation for a future commit that will introduce sparse allocation of
pages in DRM shmem, add a scatterlist function that knows how to deal with an
xarray collection of memory pages. Because the new function is otherwise
identical to the existing one that deals with a page array, also introduce a
page_array abstraction, which hides the way pages are retrieved from the
underlying collection.

Signed-off-by: Adrián Larumbe
---
 include/linux/scatterlist.h |  47 +++++++++++++
 lib/scatterlist.c           | 128 ++++++++++++++++++++++++++++++++++++
 2 files changed, 175 insertions(+)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index d836e7440ee8..0045df9c374f 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -48,6 +48,39 @@ struct sg_append_table {
 	unsigned int total_nents;	/* Total entries in the table */
 };
 
+struct page_array {
+	union {
+		struct page **array;
+		struct xarray *xarray;
+	};
+
+	struct page *(*get_page)(struct page_array, unsigned int);
+};
+
+static inline struct page *page_array_get_page(struct page_array a,
+					       unsigned int index)
+{
+	return a.array[index];
+}
+
+static inline struct page *page_xarray_get_page(struct page_array a,
+						unsigned int index)
+{
+	return xa_load(a.xarray, index);
+}
+
+#define PAGE_ARRAY(pages)				\
+	((struct page_array) {				\
+		.array = pages,				\
+		.get_page = page_array_get_page,	\
+	})
+
+#define PAGE_XARRAY(pages)				\
+	((struct page_array) {				\
+		.xarray = pages,			\
+		.get_page = page_xarray_get_page,	\
+	})
+
 /*
  * Notes on SG table design.
 *
@@ -448,6 +481,20 @@ int sg_alloc_table_from_pages_segment(struct sg_table *sgt, struct page **pages,
 				      unsigned long size, unsigned int max_segment,
 				      gfp_t gfp_mask);
 
+int sg_alloc_table_from_page_array_segment(struct sg_table *sgt, struct page_array pages,
+			unsigned int idx, unsigned int n_pages, unsigned int offset,
+			unsigned long size, unsigned int max_segment, gfp_t gfp_mask);
+
+static inline int sg_alloc_table_from_page_xarray(struct sg_table *sgt, struct xarray *pages,
+			unsigned int idx, unsigned int n_pages, unsigned int offset,
+			unsigned long size, gfp_t gfp_mask)
+{
+	struct page_array parray = PAGE_XARRAY(pages);
+
+	return sg_alloc_table_from_page_array_segment(sgt, parray, idx, n_pages, offset,
+						      size, UINT_MAX, gfp_mask);
+}
+
 /**
  * sg_alloc_table_from_pages - Allocate and initialize an sg table from
  *			       an array of pages
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 5bb6b8aff232..669ebd23e4ad 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -553,6 +553,115 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
 }
 EXPORT_SYMBOL(sg_alloc_append_table_from_pages);
 
+static inline int
+sg_alloc_append_table_from_page_array(struct sg_append_table *sgt_append,
+				      struct page_array pages,
+				      unsigned int first_page,
+				      unsigned int n_pages,
+				      unsigned int offset, unsigned long size,
+				      unsigned int max_segment,
+				      unsigned int left_pages, gfp_t gfp_mask)
+{
+	unsigned int chunks, seg_len, i, prv_len = 0;
+	unsigned int added_nents = 0;
+	struct scatterlist *s = sgt_append->prv;
+	unsigned int cur_pg_index = first_page;
+	unsigned int last_pg_index = first_page + n_pages - 1;
+	struct page *last_pg;
+
+	/*
+	 * The algorithm below requires max_segment to be aligned to PAGE_SIZE
+	 * otherwise it can overshoot.
+	 */
+	max_segment = ALIGN_DOWN(max_segment, PAGE_SIZE);
+	if (WARN_ON(max_segment < PAGE_SIZE))
+		return -EINVAL;
+
+	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && sgt_append->prv)
+		return -EOPNOTSUPP;
+
+	if (sgt_append->prv) {
+		unsigned long next_pfn;
+		struct page *page;
+
+		if (WARN_ON(offset))
+			return -EINVAL;
+
+		/* Merge contiguous pages into the last SG */
+		page = pages.get_page(pages, cur_pg_index);
+		prv_len = sgt_append->prv->length;
+		next_pfn = (sg_phys(sgt_append->prv) + prv_len) / PAGE_SIZE;
+		if (page_to_pfn(page) == next_pfn) {
+			last_pg = pfn_to_page(next_pfn - 1);
+			while (cur_pg_index <= last_pg_index &&
+			       pages_are_mergeable(page, last_pg)) {
+				if (sgt_append->prv->length + PAGE_SIZE > max_segment)
+					break;
+				sgt_append->prv->length += PAGE_SIZE;
+				last_pg = page;
+				cur_pg_index++;
+			}
+			if (cur_pg_index > last_pg_index)
+				goto out;
+		}
+	}
+
+	/* compute number of contiguous chunks */
+	chunks = 1;
+	seg_len = 0;
+	for (i = cur_pg_index + 1; i <= last_pg_index; i++) {
+		seg_len += PAGE_SIZE;
+		if (seg_len >= max_segment ||
+		    !pages_are_mergeable(pages.get_page(pages, i),
+					 pages.get_page(pages, i - 1))) {
+			chunks++;
+			seg_len = 0;
+		}
+	}
+
+	/* merging chunks and putting them into the scatterlist */
+	for (i = 0; i < chunks; i++) {
+		unsigned int j, chunk_size;
+
+		/* look for the end of the current chunk */
+		seg_len = 0;
+		for (j = cur_pg_index + 1; j <= last_pg_index; j++) {
+			seg_len += PAGE_SIZE;
+			if (seg_len >= max_segment ||
+			    !pages_are_mergeable(pages.get_page(pages, j),
+						 pages.get_page(pages, j - 1)))
+				break;
+		}
+
+		/* Pass how many chunks might be left */
+		s = get_next_sg(sgt_append, s, chunks - i + left_pages,
+				gfp_mask);
+		if (IS_ERR(s)) {
+			/*
+			 * Adjust entry length to be as before function was
+			 * called.
+			 */
+			if (sgt_append->prv)
+				sgt_append->prv->length = prv_len;
+			return PTR_ERR(s);
+		}
+		chunk_size = ((j - cur_pg_index) << PAGE_SHIFT) - offset;
+		sg_set_page(s, pages.get_page(pages, cur_pg_index),
+			    min_t(unsigned long, size, chunk_size), offset);
+		added_nents++;
+		size -= chunk_size;
+		offset = 0;
+		cur_pg_index = j;
+	}
+	sgt_append->sgt.nents += added_nents;
+	sgt_append->sgt.orig_nents = sgt_append->sgt.nents;
+	sgt_append->prv = s;
+out:
+	if (!left_pages)
+		sg_mark_end(s);
+	return 0;
+}
+
 /**
  * sg_alloc_table_from_pages_segment - Allocate and initialize an sg table from
  *					an array of pages and given maximum
@@ -596,6 +705,25 @@ int sg_alloc_table_from_pages_segment(struct sg_table *sgt, struct page **pages,
 }
 EXPORT_SYMBOL(sg_alloc_table_from_pages_segment);
 
+int sg_alloc_table_from_page_array_segment(struct sg_table *sgt, struct page_array pages,
+			unsigned int idx, unsigned int n_pages, unsigned int offset,
+			unsigned long size, unsigned int max_segment, gfp_t gfp_mask)
+{
+	struct sg_append_table append = {};
+	int err;
+
+	err = sg_alloc_append_table_from_page_array(&append, pages, idx, n_pages, offset,
+						    size, max_segment, 0, gfp_mask);
+	if (err) {
+		sg_free_append_table(&append);
+		return err;
+	}
+	memcpy(sgt, &append.sgt, sizeof(*sgt));
+	WARN_ON(append.total_nents != sgt->orig_nents);
+	return 0;
+}
+EXPORT_SYMBOL(sg_alloc_table_from_page_array_segment);
+
 #ifdef CONFIG_SGL_ALLOC
 /**
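As a usage illustration (not part of the patch), a caller that already keeps
its backing pages in an xarray could build and map an sg table as follows; the
"xa_pages", "n_pages" and "dev" identifiers are placeholders:

	struct sg_table sgt;
	int ret;

	/* entries 0..n_pages-1 of the xarray are assumed to be populated */
	ret = sg_alloc_table_from_page_xarray(&sgt, &xa_pages, 0, n_pages, 0,
					      (unsigned long)n_pages << PAGE_SHIFT,
					      GFP_KERNEL);
	if (ret)
		return ret;

	/* DMA mapping is unchanged; page_array only affects how the table is built */
	ret = dma_map_sgtable(dev, &sgt, DMA_BIDIRECTIONAL, 0);
	if (ret) {
		sg_free_table(&sgt);
		return ret;
	}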
From patchwork Tue Feb 18 23:25:33 2025
From: Adrián Larumbe
To: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Boris Brezillon, Steven Price, Rob Herring, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter
Cc: kernel@collabora.com, Adrián Larumbe
Subject: [RFC PATCH 3/7] drm/prime: Let drm_prime_pages_to_sg use the page_array interface
Date: Tue, 18 Feb 2025 23:25:33 +0000
Message-ID: <20250218232552.3450939-4-adrian.larumbe@collabora.com>
In-Reply-To: <20250218232552.3450939-1-adrian.larumbe@collabora.com>

Switch to sg_alloc_table_from_page_array_segment() when generating an sg table
from an array of pages. This is functionally equivalent, but a future commit
will also let us do the same from a memory page xarray.

Signed-off-by: Adrián Larumbe
---
 drivers/gpu/drm/drm_prime.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 32a8781cfd67..1549733d3833 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -837,6 +837,7 @@ struct sg_table *drm_prime_pages_to_sg(struct drm_device *dev,
 				       struct page **pages, unsigned int nr_pages)
 {
 	struct sg_table *sg;
+	struct page_array parray = PAGE_ARRAY(pages);
 	size_t max_segment = 0;
 	int err;
 
@@ -848,9 +849,9 @@ struct sg_table *drm_prime_pages_to_sg(struct drm_device *dev,
 		max_segment = dma_max_mapping_size(dev->dev);
 	if (max_segment == 0)
 		max_segment = UINT_MAX;
-	err = sg_alloc_table_from_pages_segment(sg, pages, nr_pages, 0,
-						(unsigned long)nr_pages << PAGE_SHIFT,
-						max_segment, GFP_KERNEL);
+	err = sg_alloc_table_from_page_array_segment(sg, parray, 0, nr_pages, 0,
+						     (unsigned long)nr_pages << PAGE_SHIFT,
+						     max_segment, GFP_KERNEL);
 	if (err) {
 		kfree(sg);
 		sg = ERR_PTR(err);
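To make the equivalence concrete, both descriptor flavours introduced in the
previous patch resolve pages through the same get_page() callback; the snippet
below is illustrative only, and "pages", "xa" and "i" are placeholders rather
than identifiers from the series:

	struct page_array from_array  = PAGE_ARRAY(pages);	/* struct page **pages */
	struct page_array from_xarray = PAGE_XARRAY(&xa);	/* struct xarray xa of pages */

	/* both answer the same question: "give me page number i" */
	struct page *a = from_array.get_page(from_array, i);
	struct page *b = from_xarray.get_page(from_xarray, i);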
From patchwork Tue Feb 18 23:25:34 2025
From: Adrián Larumbe
To: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Boris Brezillon, Steven Price, Rob Herring, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter
Cc: kernel@collabora.com, Adrián Larumbe
Subject: [RFC PATCH 4/7] drm/shmem: Introduce the notion of sparse objects
Date: Tue, 18 Feb 2025 23:25:34 +0000
Message-ID: <20250218232552.3450939-5-adrian.larumbe@collabora.com>
In-Reply-To: <20250218232552.3450939-1-adrian.larumbe@collabora.com>

Sparse DRM objects will store their backing pages in an xarray, to avoid the
overhead of preallocating a huge struct page pointer array when only a very
small range of indices might be assigned. For now, only the definition of a
sparse object as a union alternative to a 'dense' object is provided, with
functions that exploit it being part of later commits.
Signed-off-by: Adrián Larumbe
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 42 +++++++++++++++++++++++---
 include/drm/drm_gem_shmem_helper.h     | 18 ++++++++++-
 2 files changed, 54 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 5ab351409312..d63e42be2d72 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include <linux/xarray.h>
 
 #ifdef CONFIG_X86
 #include
@@ -50,7 +51,7 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs = {
 
 static struct drm_gem_shmem_object *
 __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private,
-		       struct vfsmount *gemfs)
+		       bool sparse, struct vfsmount *gemfs)
 {
 	struct drm_gem_shmem_object *shmem;
 	struct drm_gem_object *obj;
@@ -90,6 +91,11 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private,
 
 	INIT_LIST_HEAD(&shmem->madv_list);
 
+	if (unlikely(sparse))
+		xa_init_flags(&shmem->xapages, XA_FLAGS_ALLOC);
+
+	shmem->sparse = sparse;
+
 	if (!private) {
 		/*
 		 * Our buffers are kept pinned, so allocating them
@@ -124,10 +130,16 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private,
  */
 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size)
 {
-	return __drm_gem_shmem_create(dev, size, false, NULL);
+	return __drm_gem_shmem_create(dev, size, false, false, NULL);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
 
+struct drm_gem_shmem_object *drm_gem_shmem_create_sparse(struct drm_device *dev, size_t size)
+{
+	return __drm_gem_shmem_create(dev, size, false, true, NULL);
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_create_sparse);
+
 /**
  * drm_gem_shmem_create_with_mnt - Allocate an object with the given size in a
  * given mountpoint
@@ -145,7 +157,7 @@ struct drm_gem_shmem_object *drm_gem_shmem_create_with_mnt(struct drm_device *de
							     size_t size,
							     struct vfsmount *gemfs)
 {
-	return __drm_gem_shmem_create(dev, size, false, gemfs);
+	return __drm_gem_shmem_create(dev, size, false, false, gemfs);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create_with_mnt);
 
@@ -173,7 +185,9 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 			sg_free_table(shmem->sgt);
 			kfree(shmem->sgt);
 		}
-		if (shmem->pages)
+
+		if ((!shmem->sparse && shmem->pages) ||
+		    (shmem->sparse && !xa_empty(&shmem->xapages)))
 			drm_gem_shmem_put_pages(shmem);
 
 	drm_WARN_ON(obj->dev, shmem->pages_use_count);
@@ -191,11 +205,19 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 	struct drm_gem_object *obj = &shmem->base;
 	struct page **pages;
 
+	if (drm_WARN_ON(obj->dev, shmem->sparse))
+		return -EINVAL;
+
 	dma_resv_assert_held(shmem->base.resv);
 
 	if (shmem->pages_use_count++ > 0)
 		return 0;
 
+	/* We only allow increasing the user count in the case of
+	   sparse shmem objects with some backed pages for now */
+	if (shmem->sparse && xa_empty(&shmem->xapages))
+		return -EINVAL;
+
 	pages = drm_gem_get_pages(obj);
 	if (IS_ERR(pages)) {
 		drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
@@ -541,6 +563,8 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	struct page *page;
 	pgoff_t page_offset;
 
+	drm_WARN_ON(obj->dev, shmem->sparse);
+
 	/* We don't use vmf->pgoff since that has the fake offset */
 	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
@@ -567,6 +591,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
+	drm_WARN_ON(obj->dev, shmem->sparse);
 	dma_resv_lock(shmem->base.resv, NULL);
 
@@ -666,6 +691,9 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 	if (shmem->base.import_attach)
 		return;
 
+	if (drm_WARN_ON(shmem->base.dev, shmem->sparse))
+		return;
+
 	drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
 	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
@@ -691,6 +719,7 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
 	struct drm_gem_object *obj = &shmem->base;
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
+	drm_WARN_ON(obj->dev, shmem->sparse);
 
 	return drm_prime_pages_to_sg(obj->dev, shmem->pages, obj->size >> PAGE_SHIFT);
 }
@@ -702,6 +731,9 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
 	int ret;
 	struct sg_table *sgt;
 
+	if (drm_WARN_ON(obj->dev, shmem->sparse))
+		return ERR_PTR(-EINVAL);
+
 	if (shmem->sgt)
 		return shmem->sgt;
 
@@ -787,7 +819,7 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
 	size_t size = PAGE_ALIGN(attach->dmabuf->size);
 	struct drm_gem_shmem_object *shmem;
 
-	shmem = __drm_gem_shmem_create(dev, size, true, NULL);
+	shmem = __drm_gem_shmem_create(dev, size, true, false, NULL);
 	if (IS_ERR(shmem))
 		return ERR_CAST(shmem);
 
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index d22e3fb53631..902039cfc4ce 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include <linux/xarray.h>
 
 #include
 #include
@@ -29,7 +30,11 @@ struct drm_gem_shmem_object {
 	/**
 	 * @pages: Page table
 	 */
-	struct page **pages;
+	union {
+		struct page **pages;
+		struct xarray xapages;
+	};
 
 	/**
 	 * @pages_use_count:
@@ -91,6 +96,11 @@ struct drm_gem_shmem_object {
 	 * @map_wc: map object write-combined (instead of using shmem defaults).
 	 */
 	bool map_wc : 1;
+
+	/**
+	 * @sparse: the object's virtual memory space is only partially backed by pages
+	 */
+	bool sparse : 1;
 };
 
 #define to_drm_gem_shmem_obj(obj) \
@@ -229,6 +239,9 @@ static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj,
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
+	if (shmem->sparse)
+		return -EACCES;
+
 	return drm_gem_shmem_vmap(shmem, map);
 }
 
@@ -263,6 +276,9 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
+	if (shmem->sparse)
+		return -EACCES;
+
 	return drm_gem_shmem_mmap(shmem, vma);
 }
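A brief usage sketch (not taken from the series): a driver that wants the
sparse flavour opts in at creation time and can then rely on the helpers
refusing operations that assume a fully populated page array; "dev" and
"heap_size" are placeholders:

	struct drm_gem_shmem_object *shmem;

	/* backing pages will be tracked in shmem->xapages, not shmem->pages */
	shmem = drm_gem_shmem_create_sparse(dev, heap_size);
	if (IS_ERR(shmem))
		return PTR_ERR(shmem);

	/* vmap()/mmap() of a sparse object is rejected with -EACCES by the helpers */
	WARN_ON(!shmem->sparse);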
From patchwork Tue Feb 18 23:25:35 2025
From: Adrián Larumbe
To: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Boris Brezillon, Steven Price, Rob Herring, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter
Cc: kernel@collabora.com, Adrián Larumbe
Subject: [RFC PATCH 5/7] drm/shmem: Implement sparse allocation of pages for shmem objects
Date: Tue, 18 Feb 2025 23:25:35 +0000
Message-ID: <20250218232552.3450939-6-adrian.larumbe@collabora.com>
In-Reply-To: <20250218232552.3450939-1-adrian.larumbe@collabora.com>

Add a new function that lets drivers allocate pages for a subset of the shmem
object's virtual address range. Expand the shmem object's definition to include
an RSS field, since the resident size is now different from the base GEM
object's virtual size. Also add a new function for putting the pages of a
sparse page array. There is refactoring potential with drm_gem_put_pages(), but
it is yet to be decided what this should look like.

Signed-off-by: Adrián Larumbe
---
 drivers/gpu/drm/drm_gem.c              |  32 +++++++
 drivers/gpu/drm/drm_gem_shmem_helper.c | 123 ++++++++++++++++++++++++-
 include/drm/drm_gem.h                  |   3 +
 include/drm/drm_gem_shmem_helper.h     |  12 +++
 4 files changed, 165 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index ee811764c3df..930c5219e1e9 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -679,6 +679,38 @@ void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 }
 EXPORT_SYMBOL(drm_gem_put_pages);
 
+void drm_gem_put_sparse_xarray(struct xarray *pa, unsigned long idx,
+			       unsigned int npages, bool dirty, bool accessed)
+{
+	struct folio_batch fbatch;
+	struct page *page;
+
+	folio_batch_init(&fbatch);
+
+	xa_for_each(pa, idx, page) {
+		struct folio *folio = page_folio(page);
+
+		if (dirty)
+			folio_mark_dirty(folio);
+		if (accessed)
+			folio_mark_accessed(folio);
+
+		/* Undo the reference we took when populating the table */
+		if (!folio_batch_add(&fbatch, folio))
+			drm_gem_check_release_batch(&fbatch);
+
+		xa_erase(pa, idx);
+
+		idx += folio_nr_pages(folio) - 1;
+	}
+
+	if (folio_batch_count(&fbatch))
+		drm_gem_check_release_batch(&fbatch);
+
+	WARN_ON((idx + 1) != npages);
+}
+EXPORT_SYMBOL(drm_gem_put_sparse_xarray);
+
 static int objects_lookup(struct drm_file *filp, u32 *handle, int count,
 			  struct drm_gem_object **objs)
 {
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index d63e42be2d72..40f7f6812195 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -10,7 +10,6 @@
 #include
 #include
 #include
-#include <linux/xarray.h>
 
 #ifdef CONFIG_X86
 #include
@@ -161,6 +160,18 @@ struct drm_gem_shmem_object *drm_gem_shmem_create_with_mnt(struct drm_device *de
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create_with_mnt);
 
+static void drm_gem_shmem_put_pages_sparse(struct drm_gem_shmem_object *shmem)
+{
+	unsigned int n_pages = shmem->rss_size / PAGE_SIZE;
+
+	drm_WARN_ON(shmem->base.dev, (shmem->rss_size & (PAGE_SIZE - 1)) != 0);
+	drm_WARN_ON(shmem->base.dev, !shmem->sparse);
+
+	drm_gem_put_sparse_xarray(&shmem->xapages, 0, n_pages,
+				  shmem->pages_mark_dirty_on_put,
+				  shmem->pages_mark_accessed_on_put);
+}
+
 /**
  * drm_gem_shmem_free - Free resources associated with a shmem GEM object
  * @shmem: shmem GEM object to free
@@ -264,10 +275,15 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 	set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
 #endif
 
-	drm_gem_put_pages(obj, shmem->pages,
-			  shmem->pages_mark_dirty_on_put,
-			  shmem->pages_mark_accessed_on_put);
-	shmem->pages = NULL;
+	if (!shmem->sparse) {
+		drm_gem_put_pages(obj, shmem->pages,
+				  shmem->pages_mark_dirty_on_put,
+				  shmem->pages_mark_accessed_on_put);
+		shmem->pages = NULL;
+	} else {
+		drm_gem_shmem_put_pages_sparse(shmem);
+		xa_destroy(&shmem->xapages);
+	}
 }
 EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 
@@ -765,6 +781,81 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
 	return ERR_PTR(ret);
 }
 
+static struct sg_table *drm_gem_shmem_get_sparse_pages_locked(struct drm_gem_shmem_object *shmem,
+							      unsigned int n_pages,
+							      pgoff_t page_offset)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	gfp_t mask = GFP_KERNEL | GFP_NOWAIT;
+	size_t size = n_pages * PAGE_SIZE;
+	struct address_space *mapping;
+	struct sg_table *sgt;
+	struct page *page;
+	bool first_alloc;
+	int ret, i;
+
+	if (!shmem->sparse)
+		return ERR_PTR(-EINVAL);
+
+	/* If the mapping exists, then bail out immediately */
+	if (xa_load(&shmem->xapages, page_offset) != NULL)
+		return ERR_PTR(-EEXIST);
+
+	dma_resv_assert_held(shmem->base.resv);
+
+	first_alloc = xa_empty(&shmem->xapages);
+
+	mapping = shmem->base.filp->f_mapping;
+	mapping_set_unevictable(mapping);
+
+	for (i = 0; i < n_pages; i++) {
+		page = shmem_read_mapping_page_nonblocking(mapping, page_offset + i);
+		if (IS_ERR(page)) {
+			ret = PTR_ERR(page);
+			goto err_free_pages;
+		}
+
+		/* Add the page into the xarray */
+		ret = xa_err(xa_store(&shmem->xapages, page_offset + i, page, mask));
+		if (ret) {
+			put_page(page);
+			goto err_free_pages;
+		}
+	}
+
+	sgt = kzalloc(sizeof(*sgt), mask);
+	if (!sgt) {
+		ret = -ENOMEM;
+		goto err_free_pages;
+	}
+
+	ret = sg_alloc_table_from_page_xarray(sgt, &shmem->xapages, page_offset, n_pages, 0, size, mask);
+	if (ret)
+		goto err_free_sgtable;
+
+	ret = dma_map_sgtable(obj->dev->dev, sgt, DMA_BIDIRECTIONAL, 0);
+	if (ret)
+		goto err_free_sgtable;
+
+	if (first_alloc)
+		shmem->pages_use_count = 1;
+
+	shmem->rss_size += size;
+
+	return sgt;
+
+err_free_sgtable:
+	kfree(sgt);
+err_free_pages:
+	while (--i) {
+		page = xa_erase(&shmem->xapages, page_offset + i);
+		if (drm_WARN_ON(obj->dev, !page))
+			continue;
+		put_page(page);
+	}
+	return ERR_PTR(ret);
+}
+
 /**
  * drm_gem_shmem_get_pages_sgt - Pin pages, dma map them, and return a
  * scatter/gather table for a shmem GEM object.
@@ -796,6 +887,28 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt);
 
+struct sg_table *drm_gem_shmem_get_sparse_pages_sgt(struct drm_gem_shmem_object *shmem,
+						     unsigned int n_pages, pgoff_t page_offset)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct sg_table *sgt;
+	int ret;
+
+	if (drm_WARN_ON(obj->dev, !shmem->sparse))
+		return ERR_PTR(-EINVAL);
+
+	ret = dma_resv_lock(shmem->base.resv, NULL);
+	if (ret)
+		return ERR_PTR(ret);
+
+	sgt = drm_gem_shmem_get_sparse_pages_locked(shmem, n_pages, page_offset);
+
+	dma_resv_unlock(shmem->base.resv);
+
+	return sgt;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sparse_pages_sgt);
+
 /**
  * drm_gem_shmem_prime_import_sg_table - Produce a shmem GEM object from
  * another driver's scatter/gather table of pinned pages
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index fdae947682cd..4fd45169a3af 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include <linux/xarray.h>
 
 #include
 
@@ -532,6 +533,8 @@ int drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size);
 struct page **drm_gem_get_pages(struct drm_gem_object *obj);
 void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 		       bool dirty, bool accessed);
+void drm_gem_put_sparse_xarray(struct xarray *pa, unsigned long idx,
+			       unsigned int npages, bool dirty, bool accessed);
 
 void drm_gem_lock(struct drm_gem_object *obj);
 void drm_gem_unlock(struct drm_gem_object *obj);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 902039cfc4ce..fcd84c8cf8e7 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -44,6 +44,14 @@ struct drm_gem_shmem_object {
 	 */
 	unsigned int pages_use_count;
 
+	/**
+	 * @rss_size:
+	 *
+	 * Size of the object RSS, in bytes.
+	 */
+	size_t rss_size;
+
 	/**
 	 * @madv: State for madvise
 	 *
@@ -107,6 +115,7 @@ struct drm_gem_shmem_object {
 	container_of(obj, struct drm_gem_shmem_object, base)
 
 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
+struct drm_gem_shmem_object *drm_gem_shmem_create_sparse(struct drm_device *dev, size_t size);
 struct drm_gem_shmem_object *drm_gem_shmem_create_with_mnt(struct drm_device *dev,
 							    size_t size,
 							    struct vfsmount *gemfs);
@@ -138,6 +147,9 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
 
+struct sg_table *drm_gem_shmem_get_sparse_pages_sgt(struct drm_gem_shmem_object *shmem,
+						     unsigned int n_pages, pgoff_t page_offset);
+
 void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 			      struct drm_printer *p, unsigned int indent);
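The calling pattern a driver is expected to follow mirrors what the panfrost
patch later in this series does; the sketch below is illustrative, with "bo",
"page_offset" and the 2 MiB fault granule standing in for driver specifics:

	struct sg_table *sgt;

	/* page_offset: first page of the faulting range inside the sparse BO */
	sgt = drm_gem_shmem_get_sparse_pages_sgt(&bo->base, SZ_2M >> PAGE_SHIFT,
						 page_offset);
	if (IS_ERR(sgt)) {
		/* -EEXIST means another fault already populated this range */
		if (PTR_ERR(sgt) != -EEXIST)
			return PTR_ERR(sgt);
		return 0;
	}

	/*
	 * The helper allocates the pages (non-blocking), stores them in the
	 * object's xarray and DMA-maps them; the driver keeps the returned sgt
	 * so it can unmap and free it when the BO is destroyed.
	 */
	bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)] = sgt;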
From patchwork Tue Feb 18 23:25:36 2025
From: Adrián Larumbe
To: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Boris Brezillon, Steven Price, Rob Herring, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter
Cc: kernel@collabora.com, Adrián Larumbe
Subject: [RFC PATCH 6/7] drm/panfrost: Use shmem sparse allocation for heap BOs
Date: Tue, 18 Feb 2025 23:25:36 +0000
Message-ID: <20250218232552.3450939-7-adrian.larumbe@collabora.com>
In-Reply-To: <20250218232552.3450939-1-adrian.larumbe@collabora.com>

Panfrost heap BOs grow on demand when the GPU triggers a page fault after
accessing an address within the BO's virtual range. We still store the sgts we
get back from the shmem sparse allocation function, since it was decided
management of sparse memory SGTs should be done by client drivers rather than
the shmem subsystem.

Signed-off-by: Adrián Larumbe
---
 drivers/gpu/drm/panfrost/panfrost_gem.c | 12 ++--
 drivers/gpu/drm/panfrost/panfrost_gem.h |  2 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 85 +++++--------
 3 files changed, 25 insertions(+), 74 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 8e0ff3efede7..0cda2c4e524f 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -40,10 +40,10 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
 		int n_sgt = bo->base.base.size / SZ_2M;
 
 		for (i = 0; i < n_sgt; i++) {
-			if (bo->sgts[i].sgl) {
-				dma_unmap_sgtable(pfdev->dev, &bo->sgts[i],
+			if (bo->sgts[i]) {
+				dma_unmap_sgtable(pfdev->dev, bo->sgts[i],
 						  DMA_BIDIRECTIONAL, 0);
-				sg_free_table(&bo->sgts[i]);
+				sg_free_table(bo->sgts[i]);
 			}
 		}
 		kvfree(bo->sgts);
@@ -274,7 +274,11 @@ panfrost_gem_create(struct drm_device *dev, size_t size, u32 flags)
 	if (flags & PANFROST_BO_HEAP)
 		size = roundup(size, SZ_2M);
 
-	shmem = drm_gem_shmem_create(dev, size);
+	if (flags & PANFROST_BO_HEAP)
+		shmem = drm_gem_shmem_create_sparse(dev, size);
+	else
+		shmem = drm_gem_shmem_create(dev, size);
+
 	if (IS_ERR(shmem))
 		return ERR_CAST(shmem);
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
index 7516b7ecf7fe..2a8d0752011e 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
@@ -11,7 +11,7 @@ struct panfrost_mmu;
 struct panfrost_gem_object {
 	struct drm_gem_shmem_object base;
 
-	struct sg_table *sgts;
+	struct sg_table **sgts;
 
 	/*
 	 * Use a list for now. If searching a mapping ever becomes the
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index b91019cd5acb..4a78ff9ca293 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -441,14 +441,11 @@ addr_to_mapping(struct panfrost_device *pfdev, int as, u64 addr)
 static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 				       u64 addr)
 {
-	int ret, i;
 	struct panfrost_gem_mapping *bomapping;
 	struct panfrost_gem_object *bo;
-	struct address_space *mapping;
-	struct drm_gem_object *obj;
 	pgoff_t page_offset;
 	struct sg_table *sgt;
-	struct page **pages;
+	int ret = 0;
 
 	bomapping = addr_to_mapping(pfdev, as, addr);
 	if (!bomapping)
@@ -459,94 +456,44 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		dev_WARN(pfdev->dev, "matching BO is not heap type (GPU VA = %llx)",
			 bomapping->mmnode.start << PAGE_SHIFT);
 		ret = -EINVAL;
-		goto err_bo;
+		goto fault_out;
 	}
 	WARN_ON(bomapping->mmu->as != as);
 
 	/* Assume 2MB alignment and size multiple */
 	addr &= ~((u64)SZ_2M - 1);
-	page_offset = addr >> PAGE_SHIFT;
-	page_offset -= bomapping->mmnode.start;
+	page_offset = (addr >> PAGE_SHIFT) - bomapping->mmnode.start;
 
-	obj = &bo->base.base;
-
-	dma_resv_lock(obj->resv, NULL);
-
-	if (!bo->base.pages) {
+	if (!bo->sgts) {
 		bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M,
-					  sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO);
+					  sizeof(struct sg_table *), GFP_KERNEL | __GFP_ZERO);
 		if (!bo->sgts) {
 			ret = -ENOMEM;
-			goto err_unlock;
-		}
-
-		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
-				       sizeof(struct page *), GFP_KERNEL | __GFP_ZERO);
-		if (!pages) {
-			kvfree(bo->sgts);
-			bo->sgts = NULL;
-			ret = -ENOMEM;
-			goto err_unlock;
-		}
-		bo->base.pages = pages;
-		bo->base.pages_use_count = 1;
-	} else {
-		pages = bo->base.pages;
-		if (pages[page_offset]) {
-			/* Pages are already mapped, bail out. */
-			goto out;
+			goto fault_out;
 		}
 	}
 
-	mapping = bo->base.base.filp->f_mapping;
-	mapping_set_unevictable(mapping);
+	sgt = drm_gem_shmem_get_sparse_pages_sgt(&bo->base, NUM_FAULT_PAGES, page_offset);
+	if (IS_ERR(sgt)) {
+		if (WARN_ON(PTR_ERR(sgt) != -EEXIST))
+			ret = PTR_ERR(sgt);
+		else
+			ret = 0;
 
-	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
-		/* Can happen if the last fault only partially filled this
-		 * section of the pages array before failing. In that case
-		 * we skip already filled pages.
-		 */
-		if (pages[i])
-			continue;
-
-		pages[i] = shmem_read_mapping_page(mapping, i);
-		if (IS_ERR(pages[i])) {
-			ret = PTR_ERR(pages[i]);
-			pages[i] = NULL;
-			goto err_unlock;
-		}
+		goto fault_out;
 	}
 
-	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
-	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
-					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
-	if (ret)
-		goto err_unlock;
-
-	ret = dma_map_sgtable(pfdev->dev, sgt, DMA_BIDIRECTIONAL, 0);
-	if (ret)
-		goto err_map;
-
 	mmu_map_sg(pfdev, bomapping->mmu, addr,
		   IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
 
+	bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)] = sgt;
+
 	bomapping->active = true;
 	bo->heap_rss_size += SZ_2M;
 
 	dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);
 
-out:
-	dma_resv_unlock(obj->resv);
-
-	panfrost_gem_mapping_put(bomapping);
-
-	return 0;
-
-err_map:
-	sg_free_table(sgt);
-err_unlock:
-	dma_resv_unlock(obj->resv);
-err_bo:
+fault_out:
 	panfrost_gem_mapping_put(bomapping);
 
 	return ret;
 }
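To spell out the bookkeeping the fault handler relies on (an illustrative
summary, not code added by the patch):

	/* one sg_table pointer per 2 MiB heap chunk */
	unsigned int n_sgt = bo->base.base.size / SZ_2M;

	/* a fault at GPU address addr lands in this chunk of the BO */
	pgoff_t page_offset = (addr >> PAGE_SHIFT) - bomapping->mmnode.start;
	unsigned int chunk = page_offset / (SZ_2M / PAGE_SIZE);

	/*
	 * The sgt returned by drm_gem_shmem_get_sparse_pages_sgt() for that
	 * fault is remembered in bo->sgts[chunk], so panfrost_gem_free_object()
	 * can later dma_unmap_sgtable() and sg_free_table() it.
	 */
	bo->sgts[chunk] = sgt;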
From patchwork Tue Feb 18 23:25:37 2025
From: Adrián Larumbe
To: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Boris Brezillon, Steven Price, Rob Herring, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Liviu Dudau
Cc: kernel@collabora.com, Adrián Larumbe
Subject: [RFC PATCH 7/7] drm/panfrost/panthor: Take sparse objects into account for fdinfo
Date: Tue, 18 Feb 2025 23:25:37 +0000
Message-ID: <20250218232552.3450939-8-adrian.larumbe@collabora.com>
In-Reply-To: <20250218232552.3450939-1-adrian.larumbe@collabora.com>

Because the shmem 'pages' field becomes a union once sparse allocations are
supported, the logic for deciding whether a BO has resident pages must be
expanded.

Signed-off-by: Adrián Larumbe
---
 drivers/gpu/drm/panfrost/panfrost_gem.c | 4 +++-
 drivers/gpu/drm/panthor/panthor_gem.c   | 4 +++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 0cda2c4e524f..ced2fdee74ab 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -200,7 +200,9 @@ static enum drm_gem_object_status panfrost_gem_status(struct drm_gem_object *obj
 	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
 	enum drm_gem_object_status res = 0;
 
-	if (bo->base.base.import_attach || bo->base.pages)
+	if (bo->base.base.import_attach ||
+	    (!bo->base.sparse && bo->base.pages) ||
+	    (bo->base.sparse && !xa_empty(&bo->base.xapages)))
 		res |= DRM_GEM_OBJECT_RESIDENT;
 
 	if (bo->base.madv == PANFROST_MADV_DONTNEED)
diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
index 8244a4e6c2a2..8dbaf766bd79 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.c
+++ b/drivers/gpu/drm/panthor/panthor_gem.c
@@ -155,7 +155,9 @@ static enum drm_gem_object_status panthor_gem_status(struct drm_gem_object *obj)
 	struct panthor_gem_object *bo = to_panthor_bo(obj);
 	enum drm_gem_object_status res = 0;
 
-	if (bo->base.base.import_attach || bo->base.pages)
+	if (bo->base.base.import_attach ||
+	    (!bo->base.sparse && bo->base.pages) ||
+	    (bo->base.sparse && !xa_empty(&bo->base.xapages)))
 		res |= DRM_GEM_OBJECT_RESIDENT;
 
 	return res;
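Since the same residency test now appears in both drivers, a shared helper
along these lines could capture it; this is an illustrative sketch, not
something the series adds:

	/* does this shmem BO currently have any backing memory resident? */
	static bool drm_gem_shmem_is_resident(const struct drm_gem_shmem_object *shmem)
	{
		if (shmem->sparse)
			return !xa_empty(&shmem->xapages);

		return shmem->pages != NULL;
	}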