From patchwork Wed Mar 26 02:14:21 2025
X-Patchwork-Submitter: Adrián Larumbe
X-Patchwork-Id: 14029694
From: Adrián Larumbe <adrian.larumbe@collabora.com>
To: Andrew Morton, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
 David Airlie, Simona Vetter, Boris Brezillon, Rob Herring, Steven Price,
 Liviu Dudau
Cc: kernel@collabora.com, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org
Subject: [RFC PATCH v2 1/6] lib/scatterlist.c: Support constructing sgt from page xarray
Date: Wed, 26 Mar 2025 02:14:21 +0000
Message-ID: <20250326021433.772196-2-adrian.larumbe@collabora.com>
In-Reply-To: <20250326021433.772196-1-adrian.larumbe@collabora.com>
References: <20250326021433.772196-1-adrian.larumbe@collabora.com>
List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" In preparation for a future commit that will introduce sparse allocation of pages in DRM shmem, a scatterlist function that knows how to deal with an xarray collection of memory pages had to be introduced. Because the new function is identical to the existing one that deals with a page array, the page_array abstraction is also introduced, which hides the way pages are retrieved from a collection. Signed-off-by: Adrián Larumbe --- include/linux/scatterlist.h | 17 ++++ lib/scatterlist.c | 175 +++++++++++++++++++++++++----------- 2 files changed, 142 insertions(+), 50 deletions(-) diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h index d836e7440ee8..cffb0cffcda0 100644 --- a/include/linux/scatterlist.h +++ b/include/linux/scatterlist.h @@ -447,6 +447,11 @@ int sg_alloc_table_from_pages_segment(struct sg_table *sgt, struct page **pages, unsigned int n_pages, unsigned int offset, unsigned long size, unsigned int max_segment, gfp_t gfp_mask); +int sg_alloc_table_from_xarray_segment(struct sg_table *sgt, struct xarray *pages, + unsigned int idx, unsigned int n_pages, + unsigned int offset, unsigned long size, + unsigned int max_segment, gfp_t gfp_mask); + /** * sg_alloc_table_from_pages - Allocate and initialize an sg table from @@ -478,6 +483,18 @@ static inline int sg_alloc_table_from_pages(struct sg_table *sgt, size, UINT_MAX, gfp_mask); } +static inline int sg_alloc_table_from_xarray(struct sg_table *sgt, + struct xarray *pages, + unsigned int idx, + unsigned int n_pages, + unsigned int offset, + unsigned long size, gfp_t gfp_mask) +{ + return sg_alloc_table_from_xarray_segment(sgt, pages, idx, n_pages, offset, + size, UINT_MAX, gfp_mask); +} + + #ifdef CONFIG_SGL_ALLOC struct scatterlist *sgl_alloc_order(unsigned long long length, unsigned int order, bool chainable, diff --git a/lib/scatterlist.c b/lib/scatterlist.c index 5bb6b8aff232..08b9ed51324e 100644 --- a/lib/scatterlist.c +++ b/lib/scatterlist.c @@ -423,43 +423,53 @@ static bool pages_are_mergeable(struct page *a, struct page *b) return true; } -/** - * sg_alloc_append_table_from_pages - Allocate and initialize an append sg - * table from an array of pages - * @sgt_append: The sg append table to use - * @pages: Pointer to an array of page pointers - * @n_pages: Number of pages in the pages array - * @offset: Offset from start of the first page to the start of a buffer - * @size: Number of valid bytes in the buffer (after offset) - * @max_segment: Maximum size of a scatterlist element in bytes - * @left_pages: Left pages caller have to set after this call - * @gfp_mask: GFP allocation mask - * - * Description: - * In the first call it allocate and initialize an sg table from a list of - * pages, else reuse the scatterlist from sgt_append. Contiguous ranges of - * the pages are squashed into a single scatterlist entry up to the maximum - * size specified in @max_segment. A user may provide an offset at a start - * and a size of valid data in a buffer specified by the page array. The - * returned sg table is released by sg_free_append_table - * - * Returns: - * 0 on success, negative error on failure - * - * Notes: - * If this function returns non-0 (eg failure), the caller must call - * sg_free_append_table() to cleanup any leftover allocations. - * - * In the fist call, sgt_append must by initialized. 
- */ -int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append, - struct page **pages, unsigned int n_pages, unsigned int offset, - unsigned long size, unsigned int max_segment, - unsigned int left_pages, gfp_t gfp_mask) +struct page_array { + union { + struct page **array; + struct xarray *xarray; + }; + + struct page * (* const get_page)(struct page_array, unsigned int); +}; + +static inline struct page *page_array_get_page(struct page_array a, + unsigned int index) { - unsigned int chunks, cur_page, seg_len, i, prv_len = 0; + return a.array[index]; +} + +static inline struct page *page_xarray_get_page(struct page_array a, + unsigned int index) +{ + return xa_load(a.xarray, index); +} + +#define PAGE_ARRAY(pages) \ + ((struct page_array) { \ + .array = pages, \ + .get_page = page_array_get_page, \ + }) + +#define PAGE_XARRAY(pages) \ + ((struct page_array) { \ + .xarray = pages, \ + .get_page = page_xarray_get_page, \ + }) + +static inline int +sg_alloc_append_table_from_page_array(struct sg_append_table *sgt_append, + struct page_array pages, + unsigned int first_page, + unsigned int n_pages, + unsigned int offset, unsigned long size, + unsigned int max_segment, + unsigned int left_pages, gfp_t gfp_mask) +{ + unsigned int chunks, seg_len, i, prv_len = 0; unsigned int added_nents = 0; struct scatterlist *s = sgt_append->prv; + unsigned int cur_pg_index = first_page; + unsigned int last_pg_index = first_page + n_pages - 1; struct page *last_pg; /* @@ -475,24 +485,26 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append, if (sgt_append->prv) { unsigned long next_pfn; + struct page *page; if (WARN_ON(offset)) return -EINVAL; /* Merge contiguous pages into the last SG */ + page = pages.get_page(pages, cur_pg_index); prv_len = sgt_append->prv->length; next_pfn = (sg_phys(sgt_append->prv) + prv_len) / PAGE_SIZE; - if (page_to_pfn(pages[0]) == next_pfn) { + if (page_to_pfn(page) == next_pfn) { last_pg = pfn_to_page(next_pfn - 1); - while (n_pages && pages_are_mergeable(pages[0], last_pg)) { + while (cur_pg_index <= last_pg_index && + pages_are_mergeable(page, last_pg)) { if (sgt_append->prv->length + PAGE_SIZE > max_segment) break; sgt_append->prv->length += PAGE_SIZE; - last_pg = pages[0]; - pages++; - n_pages--; + last_pg = page; + cur_pg_index++; } - if (!n_pages) + if (cur_pg_index > last_pg_index) goto out; } } @@ -500,26 +512,27 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append, /* compute number of contiguous chunks */ chunks = 1; seg_len = 0; - for (i = 1; i < n_pages; i++) { + for (i = cur_pg_index + 1; i <= last_pg_index; i++) { seg_len += PAGE_SIZE; if (seg_len >= max_segment || - !pages_are_mergeable(pages[i], pages[i - 1])) { + !pages_are_mergeable(pages.get_page(pages, i), + pages.get_page(pages, i - 1))) { chunks++; seg_len = 0; } } /* merging chunks and putting them into the scatterlist */ - cur_page = 0; for (i = 0; i < chunks; i++) { unsigned int j, chunk_size; /* look for the end of the current chunk */ seg_len = 0; - for (j = cur_page + 1; j < n_pages; j++) { + for (j = cur_pg_index + 1; j <= last_pg_index; j++) { seg_len += PAGE_SIZE; if (seg_len >= max_segment || - !pages_are_mergeable(pages[j], pages[j - 1])) + !pages_are_mergeable(pages.get_page(pages, j), + pages.get_page(pages, j - 1))) break; } @@ -535,13 +548,13 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append, sgt_append->prv->length = prv_len; return PTR_ERR(s); } - chunk_size = ((j - cur_page) << PAGE_SHIFT) - offset; - 
sg_set_page(s, pages[cur_page], + chunk_size = ((j - cur_pg_index) << PAGE_SHIFT) - offset; + sg_set_page(s, pages.get_page(pages, cur_pg_index), min_t(unsigned long, size, chunk_size), offset); added_nents++; size -= chunk_size; offset = 0; - cur_page = j; + cur_pg_index = j; } sgt_append->sgt.nents += added_nents; sgt_append->sgt.orig_nents = sgt_append->sgt.nents; @@ -551,6 +564,46 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append, sg_mark_end(s); return 0; } + +/** + * sg_alloc_append_table_from_pages - Allocate and initialize an append sg + * table from an array of pages + * @sgt_append: The sg append table to use + * @pages: Pointer to an array of page pointers + * @n_pages: Number of pages in the pages array + * @offset: Offset from start of the first page to the start of a buffer + * @size: Number of valid bytes in the buffer (after offset) + * @max_segment: Maximum size of a scatterlist element in bytes + * @left_pages: Left pages caller have to set after this call + * @gfp_mask: GFP allocation mask + * + * Description: + * In the first call it allocate and initialize an sg table from a list of + * pages, else reuse the scatterlist from sgt_append. Contiguous ranges of + * the pages are squashed into a single scatterlist entry up to the maximum + * size specified in @max_segment. A user may provide an offset at a start + * and a size of valid data in a buffer specified by the page array. The + * returned sg table is released by sg_free_append_table + * + * Returns: + * 0 on success, negative error on failure + * + * Notes: + * If this function returns non-0 (eg failure), the caller must call + * sg_free_append_table() to cleanup any leftover allocations. + * + * In the fist call, sgt_append must by initialized. + */ +int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append, + struct page **pages, unsigned int n_pages, unsigned int offset, + unsigned long size, unsigned int max_segment, + unsigned int left_pages, gfp_t gfp_mask) +{ + struct page_array parray = PAGE_ARRAY(pages); + + return sg_alloc_append_table_from_page_array(sgt_append, parray, 0, n_pages, offset, + size, max_segment, left_pages, gfp_mask); +} EXPORT_SYMBOL(sg_alloc_append_table_from_pages); /** @@ -582,10 +635,11 @@ int sg_alloc_table_from_pages_segment(struct sg_table *sgt, struct page **pages, gfp_t gfp_mask) { struct sg_append_table append = {}; + struct page_array parray = PAGE_ARRAY(pages); int err; - err = sg_alloc_append_table_from_pages(&append, pages, n_pages, offset, - size, max_segment, 0, gfp_mask); + err = sg_alloc_append_table_from_page_array(&append, parray, 0, n_pages, offset, + size, max_segment, 0, gfp_mask); if (err) { sg_free_append_table(&append); return err; @@ -596,6 +650,27 @@ int sg_alloc_table_from_pages_segment(struct sg_table *sgt, struct page **pages, } EXPORT_SYMBOL(sg_alloc_table_from_pages_segment); +int sg_alloc_table_from_xarray_segment(struct sg_table *sgt, struct xarray *pages, + unsigned int idx, unsigned int n_pages, + unsigned int offset, unsigned long size, + unsigned int max_segment, gfp_t gfp_mask) +{ + struct sg_append_table append = {}; + struct page_array parray = PAGE_XARRAY(pages); + int err; + + err = sg_alloc_append_table_from_page_array(&append, parray, idx, n_pages, offset, + size, max_segment, 0, gfp_mask); + if (err) { + sg_free_append_table(&append); + return err; + } + memcpy(sgt, &append.sgt, sizeof(*sgt)); + WARN_ON(append.total_nents != sgt->orig_nents); + return 0; +} 
+EXPORT_SYMBOL(sg_alloc_table_from_xarray_segment);
+
 #ifdef CONFIG_SGL_ALLOC
 /**

From patchwork Wed Mar 26 02:14:22 2025
X-Patchwork-Submitter: Adrián Larumbe
X-Patchwork-Id: 14029696
From: Adrián Larumbe <adrian.larumbe@collabora.com>
To: Andrew Morton, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
 David Airlie, Simona Vetter, Boris Brezillon, Rob Herring, Steven Price,
 Liviu Dudau
Cc: kernel@collabora.com, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org
Subject: [RFC PATCH v2 2/6] drm/shmem: Introduce the notion of sparse objects
Date: Wed, 26 Mar 2025 02:14:22 +0000
Message-ID: <20250326021433.772196-3-adrian.larumbe@collabora.com>
In-Reply-To: <20250326021433.772196-1-adrian.larumbe@collabora.com>
References: <20250326021433.772196-1-adrian.larumbe@collabora.com>
List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Sparse DRM objects will store their backing pages in an xarray, to avoid the overhead of preallocating a huge struct page pointer array when only a very small range of indices might be assigned. For now, only the definition of a sparse object as a union alternative to a 'dense' object is provided, with functions that exploit it being part of later commits. Signed-off-by: Adrián Larumbe --- drivers/gpu/drm/drm_gem_shmem_helper.c | 68 +++++++++++++++++++++++++- include/drm/drm_gem_shmem_helper.h | 23 ++++++++- 2 files changed, 88 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index d99dee67353a..5f75eb1230f6 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -128,6 +128,31 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t } EXPORT_SYMBOL_GPL(drm_gem_shmem_create); +/** + * drm_gem_shmem_create_sparse - Allocate a sparse object with the given size + * @dev: DRM device + * @size: Size of the sparse object to allocate + * + * This function creates a sparse shmem GEM object. + * + * Returns: + * A struct drm_gem_shmem_object * on success or an ERR_PTR()-encoded negative + * error code on failure. + */ +struct drm_gem_shmem_object *drm_gem_shmem_create_sparse(struct drm_device *dev, size_t size) +{ + struct drm_gem_shmem_object *shmem = + __drm_gem_shmem_create(dev, size, false, NULL); + + if (!IS_ERR(shmem)) { + shmem->sparse = true; + xa_init_flags(&shmem->xapages, XA_FLAGS_ALLOC); + } + + return shmem; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_create_sparse); + /** * drm_gem_shmem_create_with_mnt - Allocate an object with the given size in a * given mountpoint @@ -173,8 +198,8 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) sg_free_table(shmem->sgt); kfree(shmem->sgt); } - if (shmem->pages) - drm_gem_shmem_put_pages(shmem); + + drm_gem_shmem_put_pages(shmem); drm_WARN_ON(obj->dev, shmem->pages_use_count); @@ -196,6 +221,12 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) if (shmem->pages_use_count++ > 0) return 0; + /* We only allow increasing the user count in the case of + * sparse shmem objects with some backed pages for now + */ + if (shmem->sparse && xa_empty(&shmem->xapages)) + return -EINVAL; + pages = drm_gem_get_pages(obj); if (IS_ERR(pages)) { drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n", @@ -231,6 +262,14 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem) dma_resv_assert_held(shmem->base.resv); + if (!shmem->sparse) { + if (!shmem->pages) + return; + } else { + /* Not implemented yet */ + return; + } + if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count)) return; @@ -404,8 +443,15 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, { struct drm_gem_object *obj = &shmem->base; + if (shmem->sparse) { + drm_err(obj->dev, "UM unmapping of sparse shmem objects not implemented\n"); + return; + } + if (drm_gem_is_imported(obj)) { dma_buf_vunmap(obj->dma_buf, map); + } else if (obj->import_attach) { + dma_buf_vunmap(obj->import_attach->dmabuf, map); } else { dma_resv_assert_held(shmem->base.resv); @@ -541,6 +587,12 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf) struct page *page; pgoff_t page_offset; + /* TODO: Implement UM mapping of sparse shmem 
objects */ + if (drm_WARN_ON(obj->dev, shmem->sparse)) { + drm_err(obj->dev, "UM mapping of sparse shmem objects not implemented\n"); + return VM_FAULT_SIGBUS; + } + /* We don't use vmf->pgoff since that has the fake offset */ page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT; @@ -566,8 +618,14 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma) struct drm_gem_object *obj = vma->vm_private_data; struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + /* TODO: Implement UM mapping of sparse shmem objects */ + if (drm_WARN_ON(obj->dev, shmem->sparse)) + return; + drm_WARN_ON(obj->dev, drm_gem_is_imported(obj)); + drm_WARN_ON(obj->dev, obj->import_attach); + dma_resv_lock(shmem->base.resv, NULL); /* @@ -690,6 +748,9 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; + if (drm_WARN_ON(obj->dev, shmem->sparse)) + return ERR_PTR(-EINVAL); + drm_WARN_ON(obj->dev, drm_gem_is_imported(obj)); return drm_prime_pages_to_sg(obj->dev, shmem->pages, obj->size >> PAGE_SHIFT); @@ -702,6 +763,9 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_ int ret; struct sg_table *sgt; + if (drm_WARN_ON(obj->dev, shmem->sparse)) + return ERR_PTR(-EINVAL); + if (shmem->sgt) return shmem->sgt; diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index cef5a6b5a4d6..00e47512b30f 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -6,6 +6,7 @@ #include #include #include +#include #include #include @@ -29,7 +30,10 @@ struct drm_gem_shmem_object { /** * @pages: Page table */ - struct page **pages; + union { + struct page **pages; + struct xarray xapages; + }; /** * @pages_use_count: @@ -91,12 +95,18 @@ struct drm_gem_shmem_object { * @map_wc: map object write-combined (instead of using shmem defaults). 
*/ bool map_wc : 1; + + /** + * @sparse: the object is only partially backed by pages + */ + bool sparse : 1; }; #define to_drm_gem_shmem_obj(obj) \ container_of(obj, struct drm_gem_shmem_object, base) struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size); +struct drm_gem_shmem_object *drm_gem_shmem_create_sparse(struct drm_device *dev, size_t size); struct drm_gem_shmem_object *drm_gem_shmem_create_with_mnt(struct drm_device *dev, size_t size, struct vfsmount *gemfs); @@ -210,6 +220,10 @@ static inline struct sg_table *drm_gem_shmem_object_get_sg_table(struct drm_gem_ { struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + /* Use the specific sparse shmem get_sg_table function instead */ + if (WARN_ON(shmem->sparse)) + return ERR_PTR(-EINVAL); + return drm_gem_shmem_get_sg_table(shmem); } @@ -229,6 +243,10 @@ static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj, { struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + /* TODO: Implement kernel mapping of sparse shmem objects */ + if (WARN_ON(shmem->sparse)) + return -EACCES; + return drm_gem_shmem_vmap(shmem, map); } @@ -263,6 +281,9 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v { struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + if (shmem->sparse) + return -EACCES; + return drm_gem_shmem_mmap(shmem, vma); } From patchwork Wed Mar 26 02:14:23 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adri=C3=A1n_Larumbe?= X-Patchwork-Id: 14029695 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 7683EC36005 for ; Wed, 26 Mar 2025 02:15:37 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id AC93010E080; Wed, 26 Mar 2025 02:15:36 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (1024-bit key; unprotected) header.d=collabora.com header.i=adrian.larumbe@collabora.com header.b="HBDmjPwb"; dkim-atps=neutral Received: from sender4-pp-f112.zoho.com (sender4-pp-f112.zoho.com [136.143.188.112]) by gabe.freedesktop.org (Postfix) with ESMTPS id 8FC0D10E080 for ; Wed, 26 Mar 2025 02:15:33 +0000 (UTC) ARC-Seal: i=1; a=rsa-sha256; t=1742955323; cv=none; d=zohomail.com; s=zohoarc; b=PR+2CEuiTe4RGa18gO0RGsX6+14UsNC982cgzVfTWXZK/InopzF5hNmRQ2Umf7S2F4RbfyRajYrTYdEW7HDg6unXQl4A1mtSO22xhgEq7obLrqTMUNrbs3JtxT4+oC2q8VWiqS1N3kQpiAPHpB0+K/I3k9UHWhjADKVzowO9M1M= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1742955323; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; bh=kXGLmdYL5utl+URRB2IhZ0LySRvXMl8oQiv1N+1XSco=; b=BZM1ANsRGLSW8/HLRd6izhWBCN43x/5laTSRhHpsvu8gfxQIO7v8AHxBSb6gvxgz/VG3ND3r84g265S83lMLgdnLJKSR1LGOYGpx96rGkRZzHk9j7KN9qb8k9S7zFEb5BqXzNSQNflESK+4IPmKiq9HXNW1zFR5XDkcYiG/TjEY= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass header.i=collabora.com; spf=pass smtp.mailfrom=adrian.larumbe@collabora.com; dmarc=pass header.from= DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1742955323; s=zohomail; 
From: Adrián Larumbe <adrian.larumbe@collabora.com>
To: Andrew Morton, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
 David Airlie, Simona Vetter, Boris Brezillon, Rob Herring, Steven Price,
 Liviu Dudau
Cc: kernel@collabora.com, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org
Subject: [RFC PATCH v2 3/6] drm/shmem: Implement sparse allocation of pages for shmem objects
Date: Wed, 26 Mar 2025 02:14:23 +0000
Message-ID: <20250326021433.772196-4-adrian.larumbe@collabora.com>
In-Reply-To: <20250326021433.772196-1-adrian.larumbe@collabora.com>
References: <20250326021433.772196-1-adrian.larumbe@collabora.com>

Add a new function that lets drivers allocate pages for a subset of a shmem
object's virtual address range, and another function for obtaining an SG table
from those pages, so that the memory can be mapped onto an MMU. Also add a new
function for putting the pages of a sparse page array.

The sparse allocation function takes a gfp argument so that allocations other
than GFP_KERNEL can be requested, for cases where memory allocation can race
with the shrinker's memory reclaim path.

There is factorization potential with drm_gem_put_pages and drm_get_pages_,
but it is yet to be decided what this should look like.

Signed-off-by: Adrián Larumbe
---
 drivers/gpu/drm/drm_gem.c              | 117 ++++++++++++++++
 drivers/gpu/drm/drm_gem_shmem_helper.c | 182 ++++++++++++++++++++++++-
 include/drm/drm_gem.h                  |   6 +
 include/drm/drm_gem_shmem_helper.h     |   4 +
 4 files changed, 303 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index c6240bab3fa5..fa9b3f01f9ac 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -679,6 +679,123 @@ void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 }
 EXPORT_SYMBOL(drm_gem_put_pages);
 
+/**
+ * drm_get_pages_xarray - helper to allocate backing pages for a GEM object
+ * from shmem, and store them in an xarray.
+ * @obj: obj in question
+ * @pa: xarray that holds the backing pages
+ * @page_offset: shmem index of the very first page to allocate
+ * @npages: number of consecutive shmem pages to allocate
+ * @gfp: additional allocation flags
+ *
+ * This reads the page-array of the shmem-backing storage of the given gem
+ * object. The input xarray is where the pages are stored. If a page is not
+ * allocated or swapped-out, this will allocate/swap-in the required pages.
+ * Only the requested range is covered with physical pages.
+ *
+ * Use drm_gem_put_xarray_page_range() to release the same xarray subset of pages.
+ * + * This uses the GFP-mask set on the shmem-mapping (see mapping_set_gfp_mask()), + * and any further mask bits set in the gfp input parameter. + * + * This function is only valid on objects initialized with + * drm_gem_object_init(), but not for those initialized with + * drm_gem_private_object_init() only. + */ +int drm_get_pages_xarray(struct drm_gem_object *obj, struct xarray *pa, + pgoff_t page_offset, unsigned int npages, gfp_t gfp) +{ + struct address_space *mapping; + struct page *page; + int ret = 0; + int i; + + if (WARN_ON(!obj->filp)) + return -EINVAL; + + /* This is the shared memory object that backs the GEM resource */ + mapping = obj->filp->f_mapping; + + /* We already BUG_ON() for non-page-aligned sizes in + * drm_gem_object_init(), so we should never hit this unless + * driver author is doing something really wrong: + */ + WARN_ON((obj->size & (PAGE_SIZE - 1)) != 0); + + mapping = obj->filp->f_mapping; + mapping_set_unevictable(mapping); + + for (i = 0; i < npages; i++) { + page = shmem_read_mapping_page_gfp(mapping, page_offset + i, + mapping_gfp_mask(mapping) | gfp); + if (IS_ERR(page)) { + ret = PTR_ERR(page); + goto err_free_pages; + } + + /* Add the page into the xarray */ + ret = xa_err(xa_store(pa, page_offset + i, page, gfp)); + if (ret) { + put_page(page); + goto err_free_pages; + } + } + + return ret; + +err_free_pages: + while (--i) { + page = xa_erase(pa, page_offset + i); + if (drm_WARN_ON(obj->dev, !page)) + continue; + put_page(page); + } + + return ret; +} +EXPORT_SYMBOL(drm_get_pages_xarray); + +/** + * drm_gem_put_xarray_page_range - helper to free some backing pages for a + * sparse GEM object + * @pa: xarray that holds the backing pages + * @idx: xarray index of the first page tof ree + * @npages: number of consecutive pages in the xarray to free + * @dirty: if true, pages will be marked as dirty + * @accessed: if true, the pages will be marked as accessed + */ +void drm_gem_put_xarray_page_range(struct xarray *pa, unsigned long idx, + unsigned int npages, bool dirty, bool accessed) +{ + struct folio_batch fbatch; + struct page *page; + + folio_batch_init(&fbatch); + + xa_for_each(pa, idx, page) { + struct folio *folio = page_folio(page); + + if (dirty) + folio_mark_dirty(folio); + if (accessed) + folio_mark_accessed(folio); + + /* Undo the reference we took when populating the table */ + if (!folio_batch_add(&fbatch, folio)) + drm_gem_check_release_batch(&fbatch); + + xa_erase(pa, idx); + + idx += folio_nr_pages(folio) - 1; + } + + if (folio_batch_count(&fbatch)) + drm_gem_check_release_batch(&fbatch); + + WARN_ON((idx+1) != npages); +} +EXPORT_SYMBOL(drm_gem_put_xarray_page_range); + static int objects_lookup(struct drm_file *filp, u32 *handle, int count, struct drm_gem_object **objs) { diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 5f75eb1230f6..1bf33e5a1c4c 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -174,6 +174,34 @@ struct drm_gem_shmem_object *drm_gem_shmem_create_with_mnt(struct drm_device *de } EXPORT_SYMBOL_GPL(drm_gem_shmem_create_with_mnt); +static void drm_gem_shmem_put_pages_sparse(struct drm_gem_shmem_object *shmem) +{ + struct page *page; + unsigned long idx; + + if (drm_WARN_ON(shmem->base.dev, !shmem->sparse)) + return; + + idx = 0; + xa_for_each(&shmem->xapages, idx, page) { + unsigned long consecutive = 1; + + if (!page) + continue; + + while (xa_load(&shmem->xapages, idx + consecutive)) + consecutive++; + + 
drm_gem_put_xarray_page_range(&shmem->xapages, idx, consecutive, + shmem->pages_mark_dirty_on_put, + shmem->pages_mark_accessed_on_put); + + idx += consecutive; + } + + drm_WARN_ON(shmem->base.dev, !xa_empty(&shmem->xapages)); +} + /** * drm_gem_shmem_free - Free resources associated with a shmem GEM object * @shmem: shmem GEM object to free @@ -266,8 +294,8 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem) if (!shmem->pages) return; } else { - /* Not implemented yet */ - return; + if (xa_empty(&shmem->xapages)) + return; } if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count)) @@ -281,10 +309,15 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem) set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT); #endif - drm_gem_put_pages(obj, shmem->pages, - shmem->pages_mark_dirty_on_put, - shmem->pages_mark_accessed_on_put); - shmem->pages = NULL; + if (!shmem->sparse) { + drm_gem_put_pages(obj, shmem->pages, + shmem->pages_mark_dirty_on_put, + shmem->pages_mark_accessed_on_put); + shmem->pages = NULL; + } else { + drm_gem_shmem_put_pages_sparse(shmem); + xa_destroy(&shmem->xapages); + } } EXPORT_SYMBOL(drm_gem_shmem_put_pages); @@ -797,6 +830,103 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_ return ERR_PTR(ret); } +static int +drm_gem_shmem_sparse_populate_locked(struct drm_gem_shmem_object *shmem, + unsigned int n_pages, pgoff_t page_offset, + gfp_t gfp) +{ + bool first_alloc; + int ret; + + if (!shmem->sparse) + return -EINVAL; + + dma_resv_assert_held(shmem->base.resv); + + /* If the mapping exists, then bail out immediately */ + if (xa_load(&shmem->xapages, page_offset) != NULL) + return -EEXIST; + + first_alloc = xa_empty(&shmem->xapages); + + ret = drm_get_pages_xarray(&shmem->base, &shmem->xapages, + page_offset, n_pages, gfp); + if (ret) + return ret; + + if (first_alloc) + shmem->pages_use_count = 1; + + return 0; +} + +static struct sg_table * +drm_gem_shmem_sparse_get_sgt_range(struct drm_gem_shmem_object *shmem, + unsigned int n_pages, pgoff_t page_offset, + gfp_t gfp) +{ + struct drm_gem_object *obj = &shmem->base; + struct sg_table *sgt; + int ret; + + if (drm_WARN_ON(obj->dev, !shmem->sparse)) + return ERR_PTR(-EINVAL); + + /* If the page range wasn't allocated, then bail out immediately */ + if (xa_load(&shmem->xapages, page_offset) == NULL) + return ERR_PTR(-EINVAL); + + sgt = kzalloc(sizeof(*sgt), GFP_NOWAIT); + if (!sgt) + return ERR_PTR(-ENOMEM); + + ret = sg_alloc_table_from_xarray(sgt, &shmem->xapages, page_offset, + n_pages, 0, n_pages * PAGE_SIZE, gfp); + if (ret) + goto err_free_sgtable; + + ret = dma_map_sgtable(obj->dev->dev, sgt, DMA_BIDIRECTIONAL, 0); + if (ret) + goto err_free_sgtable; + + return sgt; + +err_free_sgtable: + kfree(sgt); + return ERR_PTR(ret); +} + +static struct sg_table * +drm_gem_shmem_get_sparse_pages_locked(struct drm_gem_shmem_object *shmem, + unsigned int n_pages, pgoff_t page_offset, + gfp_t gfp) +{ + struct sg_table *sgt; + int ret; + + if (!shmem->sparse) + return ERR_PTR(-EINVAL); + + dma_resv_assert_held(shmem->base.resv); + + ret = drm_gem_shmem_sparse_populate_locked(shmem, n_pages, page_offset, gfp); + if (ret) + return ERR_PTR(ret); + + sgt = drm_gem_shmem_sparse_get_sgt_range(shmem, n_pages, page_offset, gfp); + if (IS_ERR(sgt)) { + ret = PTR_ERR(sgt); + goto err_free_pages; + } + + return sgt; + +err_free_pages: + drm_gem_put_xarray_page_range(&shmem->xapages, page_offset, + n_pages, false, false); + return ERR_PTR(ret); +} + /** * 
drm_gem_shmem_get_pages_sgt - Pin pages, dma map them, and return a * scatter/gather table for a shmem GEM object. @@ -828,6 +958,46 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem) } EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt); +/** + * drm_gem_shmem_get_sparse_pages_sgt - Pin pages, dma map them, and return a + * scatter/gather table for a sparse shmem GEM object. + * @shmem: shmem GEM object + * @n_pages: number of pages to pin and map + * @page_offset: shmem file index of the first page to allocate and map + * @gfp: Further allocation flags + * + * This function conceptually does the same thing as drm_gem_shmem_get_pages_sgt, + * but only for a contiguous subset of pages from the underlying shmem file. + * The allocation flags allows users to allocate pages with a mask other than + * GFP_KERNEL, in cases where it can race with shmem shrinkers. + * + * Returns: + * A pointer to the scatter/gather table of pinned pages or errno on failure. + */ +struct sg_table * +drm_gem_shmem_get_sparse_pages_sgt(struct drm_gem_shmem_object *shmem, + unsigned int n_pages, pgoff_t page_offset, + gfp_t gfp) +{ + struct drm_gem_object *obj = &shmem->base; + struct sg_table *sgt; + int ret; + + if (drm_WARN_ON(obj->dev, !shmem->sparse)) + return ERR_PTR(-EINVAL); + + ret = dma_resv_lock(shmem->base.resv, NULL); + if (ret) + return ERR_PTR(ret); + + sgt = drm_gem_shmem_get_sparse_pages_locked(shmem, n_pages, page_offset, gfp); + + dma_resv_unlock(shmem->base.resv); + + return sgt; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sparse_pages_sgt); + /** * drm_gem_shmem_prime_import_sg_table - Produce a shmem GEM object from * another driver's scatter/gather table of pinned pages diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index 2bf893eabb4b..d8288a119bc3 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -39,6 +39,7 @@ #include #include #include +#include #include @@ -534,6 +535,11 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj); void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages, bool dirty, bool accessed); +int drm_get_pages_xarray(struct drm_gem_object *obj, struct xarray *pa, + pgoff_t page_offset, unsigned int npages, gfp_t gfp); +void drm_gem_put_xarray_page_range(struct xarray *pa, unsigned long idx, + unsigned int npages, bool dirty, bool accessed); + void drm_gem_lock(struct drm_gem_object *obj); void drm_gem_unlock(struct drm_gem_object *obj); diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index 00e47512b30f..cbe4548e3ff6 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -138,6 +138,10 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem); struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem); struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem); +struct sg_table *drm_gem_shmem_get_sparse_pages_sgt(struct drm_gem_shmem_object *shmem, + unsigned int n_pages, pgoff_t page_offset, + gfp_t gfp); + void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem, struct drm_printer *p, unsigned int indent); From patchwork Wed Mar 26 02:14:24 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adri=C3=A1n_Larumbe?= X-Patchwork-Id: 14029697 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from 
gabe.freedesktop.org
From: Adrián Larumbe <adrian.larumbe@collabora.com>
To: Andrew Morton, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
 David Airlie, Simona Vetter, Boris Brezillon, Rob Herring, Steven Price,
 Liviu Dudau
Cc: kernel@collabora.com, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org
Subject: [RFC PATCH v2 4/6] drm/panfrost: Use shmem sparse allocation for heap BOs
Date: Wed, 26 Mar 2025 02:14:24 +0000
Message-ID: <20250326021433.772196-5-adrian.larumbe@collabora.com>
In-Reply-To: <20250326021433.772196-1-adrian.larumbe@collabora.com>
References: <20250326021433.772196-1-adrian.larumbe@collabora.com>

Panfrost heap BOs grow on demand when the GPU triggers a page fault after
accessing an address within the BO's virtual range.
We still store the sgts we get back from the shmem sparse allocation function, since it was decided management of sparse memory SGTs should be done by client drivers rather than the shmem subsystem. Signed-off-by: Adrián Larumbe --- drivers/gpu/drm/panfrost/panfrost_gem.c | 12 ++-- drivers/gpu/drm/panfrost/panfrost_gem.h | 2 +- drivers/gpu/drm/panfrost/panfrost_mmu.c | 86 +++++-------------------- 3 files changed, 26 insertions(+), 74 deletions(-) diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c index 8e0ff3efede7..0cda2c4e524f 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c @@ -40,10 +40,10 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj) int n_sgt = bo->base.base.size / SZ_2M; for (i = 0; i < n_sgt; i++) { - if (bo->sgts[i].sgl) { - dma_unmap_sgtable(pfdev->dev, &bo->sgts[i], + if (bo->sgts[i]) { + dma_unmap_sgtable(pfdev->dev, bo->sgts[i], DMA_BIDIRECTIONAL, 0); - sg_free_table(&bo->sgts[i]); + sg_free_table(bo->sgts[i]); } } kvfree(bo->sgts); @@ -274,7 +274,11 @@ panfrost_gem_create(struct drm_device *dev, size_t size, u32 flags) if (flags & PANFROST_BO_HEAP) size = roundup(size, SZ_2M); - shmem = drm_gem_shmem_create(dev, size); + if (flags & PANFROST_BO_HEAP) + shmem = drm_gem_shmem_create_sparse(dev, size); + else + shmem = drm_gem_shmem_create(dev, size); + if (IS_ERR(shmem)) return ERR_CAST(shmem); diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h index 7516b7ecf7fe..2a8d0752011e 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.h +++ b/drivers/gpu/drm/panfrost/panfrost_gem.h @@ -11,7 +11,7 @@ struct panfrost_mmu; struct panfrost_gem_object { struct drm_gem_shmem_object base; - struct sg_table *sgts; + struct sg_table **sgts; /* * Use a list for now. 
If searching a mapping ever becomes the diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c index b91019cd5acb..de343c4e399a 100644 --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c @@ -441,14 +441,11 @@ addr_to_mapping(struct panfrost_device *pfdev, int as, u64 addr) static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, u64 addr) { - int ret, i; struct panfrost_gem_mapping *bomapping; struct panfrost_gem_object *bo; - struct address_space *mapping; - struct drm_gem_object *obj; pgoff_t page_offset; struct sg_table *sgt; - struct page **pages; + int ret = 0; bomapping = addr_to_mapping(pfdev, as, addr); if (!bomapping) @@ -459,94 +456,45 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, dev_WARN(pfdev->dev, "matching BO is not heap type (GPU VA = %llx)", bomapping->mmnode.start << PAGE_SHIFT); ret = -EINVAL; - goto err_bo; + goto fault_out; } WARN_ON(bomapping->mmu->as != as); /* Assume 2MB alignment and size multiple */ addr &= ~((u64)SZ_2M - 1); - page_offset = addr >> PAGE_SHIFT; - page_offset -= bomapping->mmnode.start; + page_offset = (addr >> PAGE_SHIFT) - bomapping->mmnode.start; - obj = &bo->base.base; - - dma_resv_lock(obj->resv, NULL); - - if (!bo->base.pages) { + if (!bo->sgts) { bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M, - sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO); + sizeof(struct sg_table *), GFP_KERNEL | __GFP_ZERO); if (!bo->sgts) { ret = -ENOMEM; - goto err_unlock; - } - - pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT, - sizeof(struct page *), GFP_KERNEL | __GFP_ZERO); - if (!pages) { - kvfree(bo->sgts); - bo->sgts = NULL; - ret = -ENOMEM; - goto err_unlock; - } - bo->base.pages = pages; - bo->base.pages_use_count = 1; - } else { - pages = bo->base.pages; - if (pages[page_offset]) { - /* Pages are already mapped, bail out. */ - goto out; + goto fault_out; } } - mapping = bo->base.base.filp->f_mapping; - mapping_set_unevictable(mapping); + sgt = drm_gem_shmem_get_sparse_pages_sgt(&bo->base, NUM_FAULT_PAGES, + page_offset, GFP_NOWAIT); + if (IS_ERR(sgt)) { + if (WARN_ON(PTR_ERR(sgt) != -EEXIST)) + ret = PTR_ERR(sgt); + else + ret = 0; - for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) { - /* Can happen if the last fault only partially filled this - * section of the pages array before failing. In that case - * we skip already filled pages. 
- */ - if (pages[i]) - continue; - - pages[i] = shmem_read_mapping_page(mapping, i); - if (IS_ERR(pages[i])) { - ret = PTR_ERR(pages[i]); - pages[i] = NULL; - goto err_unlock; - } + goto fault_out; } - sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)]; - ret = sg_alloc_table_from_pages(sgt, pages + page_offset, - NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL); - if (ret) - goto err_unlock; - - ret = dma_map_sgtable(pfdev->dev, sgt, DMA_BIDIRECTIONAL, 0); - if (ret) - goto err_map; - mmu_map_sg(pfdev, bomapping->mmu, addr, IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt); + bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)] = sgt; + bomapping->active = true; bo->heap_rss_size += SZ_2M; dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr); -out: - dma_resv_unlock(obj->resv); - - panfrost_gem_mapping_put(bomapping); - - return 0; - -err_map: - sg_free_table(sgt); -err_unlock: - dma_resv_unlock(obj->resv); -err_bo: +fault_out: panfrost_gem_mapping_put(bomapping); return ret; } From patchwork Wed Mar 26 02:14:25 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adri=C3=A1n_Larumbe?= X-Patchwork-Id: 14029698 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id DAC21C36005 for ; Wed, 26 Mar 2025 02:15:43 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 4787E10E643; Wed, 26 Mar 2025 02:15:43 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (1024-bit key; unprotected) header.d=collabora.com header.i=adrian.larumbe@collabora.com header.b="Pz4wy852"; dkim-atps=neutral Received: from sender4-pp-f112.zoho.com (sender4-pp-f112.zoho.com [136.143.188.112]) by gabe.freedesktop.org (Postfix) with ESMTPS id 3CA3710E642 for ; Wed, 26 Mar 2025 02:15:39 +0000 (UTC) ARC-Seal: i=1; a=rsa-sha256; t=1742955329; cv=none; d=zohomail.com; s=zohoarc; b=AVIT6ySTRQ1Imu+ucy2Dzhyb2FJkiCe7K5I0YwNBjY9UMzzBDykyCx6iBeYN7+MLUSWLbSzQqtdBG6i5dKKxMaziyiinQD6rD4Y248131FLNKSPIeHaIN1QgspIkBoMXE1hWw80vmVRf+wpk0VYs7TUsAolZboxx7Ru4VD8pKNE= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1742955329; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; bh=N6EeRE7HqfdBtYDSzkFIhVK4YQkzAdXLdg8zrxDgL4k=; b=i7CpF8tzro9wd8czMBVwYXeuIEk/xDMTSBOuxETs8FabCwQkTjLSyvBfJvEDlcFsOuJeqM1sRTNtUG2v6L+h7SmnmGqwHAnraNhBG0q07eri7LZCLPU3eXYpu5iLEbvTHv52QBp/houtDiGEiw8g2vgY6mGSn9sfGU1PKZs+31s= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass header.i=collabora.com; spf=pass smtp.mailfrom=adrian.larumbe@collabora.com; dmarc=pass header.from= DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1742955329; s=zohomail; d=collabora.com; i=adrian.larumbe@collabora.com; h=From:From:To:To:Cc:Cc:Subject:Subject:Date:Date:Message-ID:In-Reply-To:References:MIME-Version:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To; bh=N6EeRE7HqfdBtYDSzkFIhVK4YQkzAdXLdg8zrxDgL4k=; b=Pz4wy852f6C+t8RXrYy3l5Vn6wYcFD3Ph0iCOiBCmpooyplmEaumi7VpBPRyT53T DlhtZ0C59AHLM1QQz4PSLjvCM+5Gwnpk4OL0pq3UxUWGFYSRgvQGAtM+nklik+Nuz3K TT6Y5T58EQwh6CFAN+P4PbCWY/EUibBbgSVlftaY= 
From: Adrián Larumbe <adrian.larumbe@collabora.com>
To: Andrew Morton, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
 David Airlie, Simona Vetter, Boris Brezillon, Rob Herring, Steven Price,
 Liviu Dudau
Cc: kernel@collabora.com, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org
Subject: [RFC PATCH v2 5/6] drm/shmem: Add a helper to check object's page backing status
Date: Wed, 26 Mar 2025 02:14:25 +0000
Message-ID: <20250326021433.772196-6-adrian.larumbe@collabora.com>
In-Reply-To: <20250326021433.772196-1-adrian.larumbe@collabora.com>
References: <20250326021433.772196-1-adrian.larumbe@collabora.com>

Provide a helper function that lets shmem API users know whether a given
object is backed by physical pages or, in the case of a sparse shmem object,
whether at least one of its pages is populated. The obvious user is fdinfo,
which needs to know an object's resident status.

Signed-off-by: Adrián Larumbe
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 18 ++++++++++++++++++
 include/drm/drm_gem_shmem_helper.h     |  2 ++
 2 files changed, 20 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 1bf33e5a1c4c..79ac7c7c953f 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -1033,6 +1033,24 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
 
+/**
+ * drm_gem_shmem_is_populated - Tell whether the shmem object is backed by
+ * at least one page of physical memory
+ * @shmem: shmem GEM object
+ *
+ * Returns:
+ * A boolean, where the 'true' value depends on at least one page being present
+ * in a sparse object's xarray, or all the shmem file pages for PRIME buffers
+ * and regular shmem objects.
+ */ +bool drm_gem_shmem_is_populated(struct drm_gem_shmem_object *shmem) +{ + return (shmem->base.import_attach || + (!shmem->sparse && shmem->pages) || + (shmem->sparse && !xa_empty(&shmem->xapages))); +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_is_populated); + MODULE_DESCRIPTION("DRM SHMEM memory-management helpers"); MODULE_IMPORT_NS("DMA_BUF"); MODULE_LICENSE("GPL v2"); diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index cbe4548e3ff6..60d2b8ef230b 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -302,6 +302,8 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev, int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev, struct drm_mode_create_dumb *args); +bool drm_gem_shmem_is_populated(struct drm_gem_shmem_object *shmem); + /** * DRM_GEM_SHMEM_DRIVER_OPS - Default shmem GEM operations * From patchwork Wed Mar 26 02:14:26 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Adri=C3=A1n_Larumbe?= X-Patchwork-Id: 14029699 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 55B8DC36005 for ; Wed, 26 Mar 2025 02:15:49 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id BB9AB10E646; Wed, 26 Mar 2025 02:15:48 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (1024-bit key; unprotected) header.d=collabora.com header.i=adrian.larumbe@collabora.com header.b="KkSoxwZv"; dkim-atps=neutral Received: from sender4-pp-f112.zoho.com (sender4-pp-f112.zoho.com [136.143.188.112]) by gabe.freedesktop.org (Postfix) with ESMTPS id 6010C10E642 for ; Wed, 26 Mar 2025 02:15:42 +0000 (UTC) ARC-Seal: i=1; a=rsa-sha256; t=1742955331; cv=none; d=zohomail.com; s=zohoarc; b=APqRv5QSeNXNQcwrL/Pvn8A47UhR/9Ptx1Y1pOHuWcMfZfUvn/vsQEKwz3LRJ6YhrlGQew0PB+8RoeR89ca+L8AtUiX5EpH2enM2qB3h2TJZYxGjLeHgqYCIUwsvifROCdQeyzmwI1Luc1oL1BWgURZUoAUFDgmiXySdwuAX10s= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1742955331; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; bh=GGLDXR9qBTP8e0YXcd61U+HKCSx3eRR1MnzILCHIPbo=; b=bPS50hydls3tSOJWzXXi6LKpVaSbUHDbpY/TcEHDdstZz/z6TTp7Z4eK5CUalYCdk9Csr6r7o2A4aNdmnkM7m1F0rru1spUZWnWwFSjmse84F9rWcImq1YkJHvO7HLzroem2TSFVDVbp0COt0dt6Pj5InoSIK8phDFr2+RRd9+A= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass header.i=collabora.com; spf=pass smtp.mailfrom=adrian.larumbe@collabora.com; dmarc=pass header.from= DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1742955331; s=zohomail; d=collabora.com; i=adrian.larumbe@collabora.com; h=From:From:To:To:Cc:Cc:Subject:Subject:Date:Date:Message-ID:In-Reply-To:References:MIME-Version:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To; bh=GGLDXR9qBTP8e0YXcd61U+HKCSx3eRR1MnzILCHIPbo=; b=KkSoxwZvVTXwxFllLdJ9d57fxsPJZFqq7dw9fFe10P3qF2ZHsiyL5Q8fiPlm8Cmw xdq0U7C2f/PyW2ardRTotiU3wxGCOxqmM+x+7OkxNV4tgEWGExXslq8Dhgeaghqs5vD KHGTGm4+eD+NkkHCpkGAeflDSNsyYO4OD8y909w0= Received: by mx.zohomail.com with SMTPS id 1742955330853789.2493136203924; 
Tue, 25 Mar 2025 19:15:30 -0700 (PDT)
From: Adrián Larumbe <adrian.larumbe@collabora.com>
To: Andrew Morton, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
 David Airlie, Simona Vetter, Boris Brezillon, Rob Herring, Steven Price,
 Liviu Dudau
Cc: kernel@collabora.com, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org
Subject: [RFC PATCH v2 6/6] drm/panfrost/panthor: Take sparse objects into account for fdinfo
Date: Wed, 26 Mar 2025 02:14:26 +0000
Message-ID: <20250326021433.772196-7-adrian.larumbe@collabora.com>
In-Reply-To: <20250326021433.772196-1-adrian.larumbe@collabora.com>
References: <20250326021433.772196-1-adrian.larumbe@collabora.com>

Make use of the new shmem helper for deciding whether a GEM object has
backing pages.

Signed-off-by: Adrián Larumbe
---
 drivers/gpu/drm/panfrost/panfrost_gem.c | 2 +-
 drivers/gpu/drm/panthor/panthor_gem.c   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 0cda2c4e524f..2c6d73a7b5e5 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -200,7 +200,7 @@ static enum drm_gem_object_status panfrost_gem_status(struct drm_gem_object *obj
 	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
 	enum drm_gem_object_status res = 0;
 
-	if (bo->base.base.import_attach || bo->base.pages)
+	if (drm_gem_shmem_is_populated(&bo->base))
 		res |= DRM_GEM_OBJECT_RESIDENT;
 
 	if (bo->base.madv == PANFROST_MADV_DONTNEED)

diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
index 8244a4e6c2a2..48930fe7b398 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.c
+++ b/drivers/gpu/drm/panthor/panthor_gem.c
@@ -155,7 +155,7 @@ static enum drm_gem_object_status panthor_gem_status(struct drm_gem_object *obj)
 	struct panthor_gem_object *bo = to_panthor_bo(obj);
 	enum drm_gem_object_status res = 0;
 
-	if (bo->base.base.import_attach || bo->base.pages)
+	if (drm_gem_shmem_is_populated(&bo->base))
 		res |= DRM_GEM_OBJECT_RESIDENT;
 
 	return res;
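
To recap how the pieces introduced in this series fit together, here is a
minimal, untested sketch of how a driver could grow a sparse heap BO from its
GPU page-fault handler. Only drm_gem_shmem_create_sparse(),
drm_gem_shmem_get_sparse_pages_sgt() and drm_gem_shmem_is_populated() come
from the patches above; my_heap_grow(), the mmu_map_range() callback and the
local NUM_FAULT_PAGES definition are made-up placeholders for whatever the
driver already has.

/*
 * Sketch only: assumes 2 MiB growth granularity, as in the panfrost patch.
 * my_heap_grow(), mmu_map_range() and NUM_FAULT_PAGES are hypothetical
 * driver-side names, not part of this series.
 */
#include <drm/drm_gem_shmem_helper.h>
#include <linux/err.h>
#include <linux/sizes.h>

#define NUM_FAULT_PAGES	(SZ_2M >> PAGE_SHIFT)

static int my_heap_grow(struct drm_gem_shmem_object *shmem, u64 fault_addr,
			u64 bo_gpu_va,
			int (*mmu_map_range)(u64 gpu_va, struct sg_table *sgt))
{
	struct sg_table *sgt;
	pgoff_t page_offset;

	/* Round the faulting address down to the start of its 2 MiB chunk */
	fault_addr &= ~((u64)SZ_2M - 1);
	page_offset = (fault_addr - bo_gpu_va) >> PAGE_SHIFT;

	/*
	 * Allocate shmem pages for the faulting chunk, store them in the
	 * object's xarray and get back an already dma-mapped sg table.
	 * GFP_NOWAIT keeps the allocation from sleeping in case it can race
	 * with reclaim; ERR_PTR(-EEXIST) means another fault has already
	 * populated this range, which is not an error here.
	 */
	sgt = drm_gem_shmem_get_sparse_pages_sgt(shmem, NUM_FAULT_PAGES,
						 page_offset, GFP_NOWAIT);
	if (IS_ERR(sgt))
		return PTR_ERR(sgt) == -EEXIST ? 0 : PTR_ERR(sgt);

	/* The driver keeps ownership of the sgt and must store it for unmap */
	return mmu_map_range(fault_addr, sgt);
}

The BO itself would have been created up front with
drm_gem_shmem_create_sparse(dev, size) rather than drm_gem_shmem_create(), and
fdinfo-style accounting can then rely on drm_gem_shmem_is_populated() instead
of inspecting shmem->pages directly, as patch 6/6 does for panfrost and
panthor.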