From patchwork Mon Aug 19 08:34:44 2024 X-Patchwork-Submitter: Thomas Hellstrom X-Patchwork-Id: 13768072 From: Thomas Hellström To: intel-xe@lists.freedesktop.org Cc: Thomas Hellström, Christian König, Somalapuram Amaranath, Matthew Brost, dri-devel@lists.freedesktop.org, Paulo Zanoni Subject: [PATCH v9 1/6] drm/ttm: Add a virtual base class for graphics memory backup Date: Mon, 19 Aug 2024 10:34:44 +0200 Message-ID: <20240819083449.56701-2-thomas.hellstrom@linux.intel.com> In-Reply-To: <20240819083449.56701-1-thomas.hellstrom@linux.intel.com> References: <20240819083449.56701-1-thomas.hellstrom@linux.intel.com> Since this is initially intended for experimenting with different backup solutions (shmem vs direct swap cache insertion), abstract the backup destination using a virtual base class.
Also provide a sample implementation for shmem. While one could perhaps skip the abstraction once a preferred backup solution has been settled on, this functionality may actually come in handy for configurable dedicated graphics memory backup to fast nvme files or similar, without affecting swap-space. Could indeed be useful for VRAM backup on S4 and other cases. v5: - Fix a UAF. (kernel test robot, Dan Carpenter) v6: - Rename ttm_backup_shmem_copy_page() function argument (Matthew Brost) - Add some missing documentation v8: - Use folio_file_page to get to the page we want to writeback instead of using the first page of the folio. Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström Reviewed-by: Matthew Brost #v7 --- drivers/gpu/drm/ttm/Makefile | 2 +- drivers/gpu/drm/ttm/ttm_backup_shmem.c | 139 +++++++++++++++++++++++++ include/drm/ttm/ttm_backup.h | 137 ++++++++++++++++++++++++ 3 files changed, 277 insertions(+), 1 deletion(-) create mode 100644 drivers/gpu/drm/ttm/ttm_backup_shmem.c create mode 100644 include/drm/ttm/ttm_backup.h diff --git a/drivers/gpu/drm/ttm/Makefile b/drivers/gpu/drm/ttm/Makefile index dad298127226..5e980dd90e41 100644 --- a/drivers/gpu/drm/ttm/Makefile +++ b/drivers/gpu/drm/ttm/Makefile @@ -4,7 +4,7 @@ ttm-y := ttm_tt.o ttm_bo.o ttm_bo_util.o ttm_bo_vm.o ttm_module.o \ ttm_execbuf_util.o ttm_range_manager.o ttm_resource.o ttm_pool.o \ - ttm_device.o ttm_sys_manager.o + ttm_device.o ttm_sys_manager.o ttm_backup_shmem.o ttm-$(CONFIG_AGP) += ttm_agp_backend.o obj-$(CONFIG_DRM_TTM) += ttm.o diff --git a/drivers/gpu/drm/ttm/ttm_backup_shmem.c b/drivers/gpu/drm/ttm/ttm_backup_shmem.c new file mode 100644 index 000000000000..cfe4140cc59d --- /dev/null +++ b/drivers/gpu/drm/ttm/ttm_backup_shmem.c @@ -0,0 +1,139 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2024 Intel Corporation + */ + +#include +#include + +/** + * struct ttm_backup_shmem - A shmem based ttm_backup subclass. + * @backup: The base struct ttm_backup + * @filp: The associated shmem object + */ +struct ttm_backup_shmem { + struct ttm_backup backup; + struct file *filp; +}; + +static struct ttm_backup_shmem *to_backup_shmem(struct ttm_backup *backup) +{ + return container_of(backup, struct ttm_backup_shmem, backup); +} + +static void ttm_backup_shmem_drop(struct ttm_backup *backup, unsigned long handle) +{ + handle -= 1; + shmem_truncate_range(file_inode(to_backup_shmem(backup)->filp), handle, + handle + 1); +} + +static int ttm_backup_shmem_copy_page(struct ttm_backup *backup, struct page *dst, + unsigned long handle, bool intr) +{ + struct file *filp = to_backup_shmem(backup)->filp; + struct address_space *mapping = filp->f_mapping; + struct folio *from_folio; + + handle -= 1; + from_folio = shmem_read_folio(mapping, handle); + if (IS_ERR(from_folio)) + return PTR_ERR(from_folio); + + /* Note: Use drm_memcpy_from_wc?
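+ * drm_memcpy_from_wc() uses non-temporal instructions on x86, which can speed up copies involving write-combined memory; plain copy_highpage() is used for now.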
*/ + copy_highpage(dst, folio_file_page(from_folio, handle)); + folio_put(from_folio); + + return 0; +} + +static unsigned long +ttm_backup_shmem_backup_page(struct ttm_backup *backup, struct page *page, + bool writeback, pgoff_t i, gfp_t page_gfp, + gfp_t alloc_gfp) +{ + struct file *filp = to_backup_shmem(backup)->filp; + struct address_space *mapping = filp->f_mapping; + unsigned long handle = 0; + struct folio *to_folio; + int ret; + + to_folio = shmem_read_folio_gfp(mapping, i, alloc_gfp); + if (IS_ERR(to_folio)) + return handle; + + folio_mark_accessed(to_folio); + folio_lock(to_folio); + folio_mark_dirty(to_folio); + copy_highpage(folio_file_page(to_folio, i), page); + handle = i + 1; + + if (writeback && !folio_mapped(to_folio) && folio_clear_dirty_for_io(to_folio)) { + struct writeback_control wbc = { + .sync_mode = WB_SYNC_NONE, + .nr_to_write = SWAP_CLUSTER_MAX, + .range_start = 0, + .range_end = LLONG_MAX, + .for_reclaim = 1, + }; + folio_set_reclaim(to_folio); + ret = mapping->a_ops->writepage(folio_file_page(to_folio, i), &wbc); + if (!folio_test_writeback(to_folio)) + folio_clear_reclaim(to_folio); + /* If writepage succeeds, it unlocks the folio */ + if (ret) + folio_unlock(to_folio); + } else { + folio_unlock(to_folio); + } + + folio_put(to_folio); + + return handle; +} + +static void ttm_backup_shmem_fini(struct ttm_backup *backup) +{ + struct ttm_backup_shmem *sbackup = to_backup_shmem(backup); + + fput(sbackup->filp); + kfree(sbackup); +} + +static const struct ttm_backup_ops ttm_backup_shmem_ops = { + .drop = ttm_backup_shmem_drop, + .copy_backed_up_page = ttm_backup_shmem_copy_page, + .backup_page = ttm_backup_shmem_backup_page, + .fini = ttm_backup_shmem_fini, +}; + +/** + * ttm_backup_shmem_create() - Create a shmem-based struct backup. + * @size: The maximum size (in bytes) to back up. + * + * Create a backup utilizing shmem objects. + * + * Return: A pointer to a struct ttm_backup on success, + * an error pointer on error. + */ +struct ttm_backup *ttm_backup_shmem_create(loff_t size) +{ + struct ttm_backup_shmem *sbackup = + kzalloc(sizeof(*sbackup), GFP_KERNEL | __GFP_ACCOUNT); + struct file *filp; + + if (!sbackup) + return ERR_PTR(-ENOMEM); + + filp = shmem_file_setup("ttm shmem backup", size, 0); + if (IS_ERR(filp)) { + kfree(sbackup); + return ERR_CAST(filp); + } + + sbackup->filp = filp; + sbackup->backup.ops = &ttm_backup_shmem_ops; + + return &sbackup->backup; +} +EXPORT_SYMBOL_GPL(ttm_backup_shmem_create); diff --git a/include/drm/ttm/ttm_backup.h b/include/drm/ttm/ttm_backup.h new file mode 100644 index 000000000000..5f8c7d3069ef --- /dev/null +++ b/include/drm/ttm/ttm_backup.h @@ -0,0 +1,137 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2024 Intel Corporation + */ + +#ifndef _TTM_BACKUP_H_ +#define _TTM_BACKUP_H_ + +#include +#include + +struct ttm_backup; + +/** + * ttm_backup_handle_to_page_ptr() - Convert handle to struct page pointer + * @handle: The handle to convert. + * + * Converts an opaque handle received from the + * struct ttm_backup_ops::backup_page() function to an (invalid) + * struct page pointer suitable for a struct page array. + * + * Return: An (invalid) struct page pointer. + */ +static inline struct page * +ttm_backup_handle_to_page_ptr(unsigned long handle) +{ + return (struct page *)(handle << 1 | 1); +} + +/** + * ttm_backup_page_ptr_is_handle() - Whether a struct page pointer is a handle + * @page: The struct page pointer to check.
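+ * (A handle h is encoded as the odd pointer value (h << 1) | 1; real struct page pointers are always even due to alignment, so a set bit 0 unambiguously marks a handle.)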
+ * + * Return: true if the struct page pointer is a handle returned from + * ttm_backup_handle_to_page_ptr(). False otherwise. + */ +static inline bool ttm_backup_page_ptr_is_handle(const struct page *page) +{ + return (unsigned long)page & 1; +} + +/** + * ttm_backup_page_ptr_to_handle() - Convert a struct page pointer to a handle + * @page: The struct page pointer to convert + * + * Return: The handle that was previously used in + * ttm_backup_handle_to_page_ptr() to obtain a struct page pointer, suitable + * for use as argument in the struct ttm_backup_ops drop() or + * copy_backed_up_page() functions. + */ +static inline unsigned long +ttm_backup_page_ptr_to_handle(const struct page *page) +{ + WARN_ON(!ttm_backup_page_ptr_is_handle(page)); + return (unsigned long)page >> 1; +} + +/** struct ttm_backup_ops - struct ttm_backup backend operations */ +struct ttm_backup_ops { + /** + * drop - release memory associated with a handle + * @backup: The struct backup pointer used to obtain the handle + * @handle: The handle obtained from the @backup_page function. + */ + void (*drop)(struct ttm_backup *backup, unsigned long handle); + + /** + * copy_backed_up_page - Copy the contents of a previously backed + * up page + * @backup: The struct backup pointer used to back up the page. + * @dst: The struct page to copy into. + * @handle: The handle returned when the page was backed up. + * @intr: Try to perform waits interruptible or at least killable. + * + * Return: 0 on success, negative error code on failure, notably + * -EINTR if @intr was set to true and a signal is pending. + */ + int (*copy_backed_up_page)(struct ttm_backup *backup, struct page *dst, + unsigned long handle, bool intr); + + /** + * backup_page - Backup a page + * @backup: The struct backup pointer to use. + * @page: The page to back up. + * @writeback: Whether to perform immediate writeback of the page. + * This may have performance implications. + * @i: A unique integer for each page and each struct backup. + * This is a hint allowing the backup backend to avoid managing + * its address space separately. + * @page_gfp: The gfp value used when the page was allocated. + * This is used for accounting purposes. + * @alloc_gfp: The gfp to be used when the backend needs to allocate + * memory. + * + * Return: A handle on success. 0 on failure. + * (This is following the swp_entry_t convention). + * + * Note: This function could be extended to back up a folio and + * backends would then split the folio internally if needed. + * Drawback is that the caller would then have to keep track of + * the folio size and usage. + */ + unsigned long (*backup_page)(struct ttm_backup *backup, struct page *page, + bool writeback, pgoff_t i, gfp_t page_gfp, + gfp_t alloc_gfp); + /** + * fini - Free the struct backup resources after last use. + * @backup: Pointer to the struct backup whose resources to free. + * + * After a call to @fini, it's illegal to use the @backup pointer. + */ + void (*fini)(struct ttm_backup *backup); +}; + +/** + * struct ttm_backup - Abstract a backup backend. + * @ops: The operations as described above. + * + * The struct ttm_backup is intended to be subclassed by the + * backend implementation. + */ +struct ttm_backup { + const struct ttm_backup_ops *ops; +}; + +/** + * ttm_backup_shmem_create() - Create a shmem-based struct backup. + * @size: The maximum size (in bytes) to back up. + * + * Create a backup utilizing shmem objects.
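+ * A hypothetical caller backing up a whole ttm_tt might, for example, pass (loff_t)tt->num_pages << PAGE_SHIFT as @size.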
+ * + * Return: A pointer to a struct ttm_backup on success, + * an error pointer on error. + */ +struct ttm_backup *ttm_backup_shmem_create(loff_t size); + +#endif From patchwork Mon Aug 19 08:34:45 2024 X-Patchwork-Submitter: Thomas Hellstrom X-Patchwork-Id: 13768073
From: Thomas Hellström To: intel-xe@lists.freedesktop.org Cc: Thomas Hellström, Christian König, Somalapuram Amaranath, Matthew Brost, dri-devel@lists.freedesktop.org, Paulo Zanoni Subject: [PATCH v9 2/6] drm/ttm/pool: Provide a helper to shrink pages Date: Mon, 19 Aug 2024 10:34:45 +0200 Message-ID: <20240819083449.56701-3-thomas.hellstrom@linux.intel.com> In-Reply-To: <20240819083449.56701-1-thomas.hellstrom@linux.intel.com> References: <20240819083449.56701-1-thomas.hellstrom@linux.intel.com> Provide a helper to shrink ttm_tt page-vectors on a per-page basis. A ttm_backup backend could then in theory get away with allocating a single temporary page for each struct ttm_tt. This is accomplished by splitting larger pages before trying to back them up. In the future we could allow ttm_backup to handle backing up large pages as well, but currently there's no benefit in doing that, since the shmem backup backend would have to split those anyway to avoid allocating too much temporary memory, and if the backend instead inserts pages into the swap-cache, those are split on reclaim by the core. Due to potential backup and recovery errors, allow partially swapped-out struct ttm_tt's, although mark them as swapped out to stop them from being swapped out a second time. More details in the ttm_pool.c DOC section. v2: - A couple of cleanups and error fixes in ttm_pool_back_up_tt. - s/back_up/backup/ - Add a writeback parameter to the exported interface. v8: - Use a struct for flags for readability (Matt Brost) - Address misc other review comments (Matt Brost) v9: - Update the kerneldoc for the ttm_tt::backup field. Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström --- drivers/gpu/drm/ttm/ttm_pool.c | 394 +++++++++++++++++++++++++++++++-- drivers/gpu/drm/ttm/ttm_tt.c | 37 ++++ include/drm/ttm/ttm_pool.h | 6 + include/drm/ttm/ttm_tt.h | 30 +++ 4 files changed, 454 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c index 8504dbe19c1a..0d224cd9f8eb 100644 --- a/drivers/gpu/drm/ttm/ttm_pool.c +++ b/drivers/gpu/drm/ttm/ttm_pool.c @@ -41,6 +41,7 @@ #include #endif +#include #include #include #include @@ -58,6 +59,32 @@ struct ttm_pool_dma { unsigned long vaddr; }; +/** + * struct ttm_pool_tt_restore - State representing restore from backup + * @alloced_pages: Total number of already allocated pages for the ttm_tt. + * @restored_pages: Number of (sub) pages restored from swap for this + * chunk of 1 << @order pages. + * @first_page: The ttm page ptr representing @old_pages[0]. + * @caching_divide: Page pointer where subsequent pages are cached. + * @old_pages: Backup copy of page pointers that were replaced by the new + * page allocation. + * @pool: The pool used for page allocation while restoring. + * @order: The order of the last page allocated while restoring. + * + * Recovery from backup might fail when we've recovered less than the + * full ttm_tt.
In order not to lose any data (yet), keep information + * around that allows us to restart a failed ttm backup recovery. + */ +struct ttm_pool_tt_restore { + pgoff_t alloced_pages; + pgoff_t restored_pages; + struct page **first_page; + struct page **caching_divide; + struct ttm_pool *pool; + unsigned int order; + struct page *old_pages[]; +}; + static unsigned long page_pool_size; MODULE_PARM_DESC(page_pool_size, "Number of pages in the WC/UC/DMA pool"); @@ -354,11 +381,102 @@ static unsigned int ttm_pool_page_order(struct ttm_pool *pool, struct page *p) return p->private; } +/* + * To be able to insert single pages into backup directly, + * we need to split multi-order page allocations and make them look + * like single-page allocations. + */ +static void ttm_pool_split_for_swap(struct ttm_pool *pool, struct page *p) +{ + unsigned int order = ttm_pool_page_order(pool, p); + pgoff_t nr; + + if (!order) + return; + + split_page(p, order); + nr = 1UL << order; + while (nr--) + (p++)->private = 0; } + +/** + * DOC: Partial backup and restoration of a struct ttm_tt. + * + * Swapout using ttm_backup::ops::backup_page() and swapin using + * ttm_backup::ops::copy_backed_up_page() may fail. + * The former most likely due to lack of swap-space or memory, the latter due + * to lack of memory or because of signal interruption during waits. + * + * Backup failure is easily handled by using a ttm_tt pages vector that holds + * both swap entries and page pointers. This has to be taken into account when + * restoring such a ttm_tt from backup, and when freeing it while backed up. + * When restoring, for simplicity, new pages are actually allocated from the + * pool and the contents of any old pages are copied in and then the old pages + * are released. + * + * For restoration failures, the struct ttm_pool_tt_restore holds sufficient state + * to be able to resume an interrupted restore, and that structure is freed once + * the restoration is complete. If the struct ttm_tt is destroyed while there + * is a valid struct ttm_pool_tt_restore attached, that is also properly taken + * care of. + */ + +static bool ttm_pool_restore_valid(const struct ttm_pool_tt_restore *restore) +{ + return restore && restore->restored_pages < (1 << restore->order); } + +static int ttm_pool_restore_tt(struct ttm_pool_tt_restore *restore, + struct ttm_backup *backup, + struct ttm_operation_ctx *ctx) +{ + unsigned int i, nr = 1 << restore->order; + int ret = 0; + + if (!ttm_pool_restore_valid(restore)) + return 0; + + for (i = restore->restored_pages; i < nr; ++i) { + struct page *p = restore->old_pages[i]; + + if (ttm_backup_page_ptr_is_handle(p)) { + unsigned long handle = ttm_backup_page_ptr_to_handle(p); + + if (handle == 0) + continue; + + ret = backup->ops->copy_backed_up_page + (backup, restore->first_page[i], + handle, ctx->interruptible); + if (ret) + break; + + backup->ops->drop(backup, handle); + } else if (p) { + /* + * We could probably avoid splitting the old page + * using clever logic, but ATM we don't care.
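+ * Splitting lets each subpage be copied and then freed individually with __free_pages(p, 0) below.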
+ */ + ttm_pool_split_for_swap(restore->pool, p); + copy_highpage(restore->first_page[i], p); + __free_pages(p, 0); + } + + restore->restored_pages++; + restore->old_pages[i] = NULL; + cond_resched(); + } + + return ret; +} + /* Called when we got a page, either from a pool or newly allocated */ static int ttm_pool_page_allocated(struct ttm_pool *pool, unsigned int order, struct page *p, dma_addr_t **dma_addr, unsigned long *num_pages, - struct page ***pages) + struct page ***pages, + struct ttm_pool_tt_restore *restore) { unsigned int i; int r; @@ -369,6 +487,16 @@ static int ttm_pool_page_allocated(struct ttm_pool *pool, unsigned int order, return r; } + if (restore) { + memcpy(restore->old_pages, *pages, + (1 << order) * sizeof(*restore->old_pages)); + memset(*pages, 0, (1 << order) * sizeof(**pages)); + restore->order = order; + restore->restored_pages = 0; + restore->first_page = *pages; + restore->alloced_pages += 1UL << order; + } + *num_pages -= 1 << order; for (i = 1 << order; i; --i, ++(*pages), ++p) **pages = p; @@ -394,22 +522,39 @@ static void ttm_pool_free_range(struct ttm_pool *pool, struct ttm_tt *tt, pgoff_t start_page, pgoff_t end_page) { struct page **pages = &tt->pages[start_page]; + struct ttm_backup *backup = tt->backup; unsigned int order; pgoff_t i, nr; for (i = start_page; i < end_page; i += nr, pages += nr) { struct ttm_pool_type *pt = NULL; + struct page *p = *pages; + + if (ttm_backup_page_ptr_is_handle(p)) { + unsigned long handle = ttm_backup_page_ptr_to_handle(p); + + nr = 1; + if (handle != 0) + backup->ops->drop(backup, handle); + continue; + } + + if (pool) { + order = ttm_pool_page_order(pool, p); + nr = (1UL << order); + if (tt->dma_address) + ttm_pool_unmap(pool, tt->dma_address[i], nr); - order = ttm_pool_page_order(pool, *pages); - nr = (1UL << order); - if (tt->dma_address) - ttm_pool_unmap(pool, tt->dma_address[i], nr); + pt = ttm_pool_select_type(pool, caching, order); + } else { + order = p->private; + nr = (1UL << order); + } - pt = ttm_pool_select_type(pool, caching, order); if (pt) - ttm_pool_type_give(pt, *pages); + ttm_pool_type_give(pt, p); else - ttm_pool_free_page(pool, caching, order, *pages); + ttm_pool_free_page(pool, caching, order, p); } } @@ -453,9 +598,36 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, else gfp_flags |= GFP_HIGHUSER; - for (order = min_t(unsigned int, MAX_PAGE_ORDER, __fls(num_pages)); - num_pages; - order = min_t(unsigned int, order, __fls(num_pages))) { + order = min_t(unsigned int, MAX_PAGE_ORDER, __fls(num_pages)); + + if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) { + if (!tt->restore) { + gfp_t gfp = GFP_KERNEL | __GFP_NOWARN; + + if (ctx->gfp_retry_mayfail) + gfp |= __GFP_RETRY_MAYFAIL; + + tt->restore = + kvzalloc(struct_size(tt->restore, old_pages, + (size_t)1 << order), gfp); + if (!tt->restore) + return -ENOMEM; + } else if (ttm_pool_restore_valid(tt->restore)) { + struct ttm_pool_tt_restore *restore = tt->restore; + + num_pages -= restore->alloced_pages; + order = min_t(unsigned int, order, __fls(num_pages)); + pages += restore->alloced_pages; + r = ttm_pool_restore_tt(restore, tt->backup, ctx); + if (r) + return r; + caching = restore->caching_divide; + } + + tt->restore->pool = pool; + } + + for (; num_pages; order = min_t(unsigned int, order, __fls(num_pages))) { struct ttm_pool_type *pt; page_caching = tt->caching; @@ -472,11 +644,19 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, r = ttm_pool_page_allocated(pool, order, p, &dma_addr, &num_pages, - &pages); + &pages, 
+ tt->restore); if (r) goto error_free_page; caching = pages; + if (ttm_pool_restore_valid(tt->restore)) { + r = ttm_pool_restore_tt(tt->restore, tt->backup, + ctx); + if (r) + goto error_free_all; + } + if (num_pages < (1 << order)) break; @@ -496,9 +676,17 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, caching = pages; } r = ttm_pool_page_allocated(pool, order, p, &dma_addr, - &num_pages, &pages); + &num_pages, &pages, + tt->restore); if (r) goto error_free_page; + + if (ttm_pool_restore_valid(tt->restore)) { + r = ttm_pool_restore_tt(tt->restore, tt->backup, ctx); + if (r) + goto error_free_all; + } + if (PageHighMem(p)) caching = pages; } @@ -517,12 +705,26 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, if (r) goto error_free_all; + if (tt->restore) { + kvfree(tt->restore); + tt->restore = NULL; + } + + if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) + tt->page_flags &= ~(TTM_TT_FLAG_PRIV_BACKED_UP | + TTM_TT_FLAG_SWAPPED); + return 0; error_free_page: ttm_pool_free_page(pool, page_caching, order, p); error_free_all: + if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) { + tt->restore->caching_divide = caching; + return r; + } + num_pages = tt->num_pages - num_pages; caching_divide = caching - tt->pages; ttm_pool_free_range(pool, tt, tt->caching, 0, caching_divide); @@ -549,6 +751,172 @@ void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt) } EXPORT_SYMBOL(ttm_pool_free); +/** + * ttm_pool_release_backed_up() - Release content of a swapped-out struct ttm_tt + * @tt: The struct ttm_tt. + * + * Release handles with associated content or any remaining pages of + * a backed-up struct ttm_tt. + */ +void ttm_pool_release_backed_up(struct ttm_tt *tt) +{ + struct ttm_backup *backup = tt->backup; + struct ttm_pool_tt_restore *restore; + pgoff_t i, start_page = 0; + unsigned long handle; + + if (!(tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP)) + return; + + restore = tt->restore; + + if (ttm_pool_restore_valid(restore)) { + pgoff_t nr = 1UL << restore->order; + + for (i = restore->restored_pages; i < nr; ++i) { + struct page *p = restore->old_pages[i]; + + if (ttm_backup_page_ptr_is_handle(p)) { + handle = ttm_backup_page_ptr_to_handle(p); + if (handle == 0) + continue; + + backup->ops->drop(backup, handle); + } else if (p) { + ttm_pool_split_for_swap(restore->pool, p); + __free_pages(p, 0); + } + } + } + + if (restore) { + pgoff_t mid = restore->caching_divide - tt->pages; + + start_page = restore->alloced_pages; + /* Pages that might be dma-mapped and non-cached */ + ttm_pool_free_range(restore->pool, tt, tt->caching, + 0, mid); + /* Pages that might be dma-mapped but cached */ + ttm_pool_free_range(restore->pool, tt, ttm_cached, + mid, restore->alloced_pages); + } + + /* Shrunken pages. Cached and not dma-mapped. */ + ttm_pool_free_range(NULL, tt, ttm_cached, start_page, tt->num_pages); + + if (restore) { + kvfree(restore); + tt->restore = NULL; + } + + tt->page_flags &= ~(TTM_TT_FLAG_PRIV_BACKED_UP | TTM_TT_FLAG_SWAPPED); +} + +/** + * ttm_pool_backup_tt() - Back up or purge a struct ttm_tt + * @pool: The pool used when allocating the struct ttm_tt. + * @ttm: The struct ttm_tt. + * @flags: Flags to govern the backup behaviour. + * + * Back up or purge a struct ttm_tt. If @purge is true, then + * all pages will be freed directly to the system rather than to the pool + * they were allocated from, making the function behave similarly to + * ttm_pool_free(). If @purge is false the pages will be backed up instead, + * exchanged for handles. 
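+ * Each handle is stored in the page vector in place of the corresponding page pointer, tagged via ttm_backup_handle_to_page_ptr().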
+ * A subsequent call to ttm_pool_alloc() will then read back the content and + * a subsequent call to ttm_pool_release_backed_up() will drop it. + * If backup of a page fails for whatever reason, @ttm will still be + * partially backed up, retaining those pages for which backup fails. + * + * Return: Number of pages actually backed up or freed, or negative + * error code on error. + */ +long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt *ttm, + const struct ttm_backup_flags *flags) +{ + struct ttm_backup *backup = ttm->backup; + struct page *page; + unsigned long handle; + gfp_t alloc_gfp; + gfp_t gfp; + int ret = 0; + pgoff_t shrunken = 0; + pgoff_t i, num_pages; + + if ((!get_nr_swap_pages() && !flags->purge) || + pool->use_dma_alloc || + (ttm->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP)) + return -EBUSY; + +#ifdef CONFIG_X86 + /* Anything returned to the system needs to be cached. */ + if (ttm->caching != ttm_cached) + set_pages_array_wb(ttm->pages, ttm->num_pages); +#endif + + if (ttm->dma_address || flags->purge) { + for (i = 0; i < ttm->num_pages; i += num_pages) { + unsigned int order; + + page = ttm->pages[i]; + if (unlikely(!page)) { + num_pages = 1; + continue; + } + + order = ttm_pool_page_order(pool, page); + num_pages = 1UL << order; + if (ttm->dma_address) + ttm_pool_unmap(pool, ttm->dma_address[i], + num_pages); + if (flags->purge) { + shrunken += num_pages; + page->private = 0; + __free_pages(page, order); + memset(ttm->pages + i, 0, + num_pages * sizeof(*ttm->pages)); + } + } + } + + if (flags->purge) + return shrunken; + + if (pool->use_dma32) + gfp = GFP_DMA32; + else + gfp = GFP_HIGHUSER; + + alloc_gfp = GFP_KERNEL | __GFP_HIGH | __GFP_NOWARN | __GFP_RETRY_MAYFAIL; + + for (i = 0; i < ttm->num_pages; ++i) { + page = ttm->pages[i]; + if (unlikely(!page)) + continue; + + ttm_pool_split_for_swap(pool, page); + + handle = backup->ops->backup_page(backup, page, flags->writeback, i, + gfp, alloc_gfp); + if (handle) { + ttm->pages[i] = ttm_backup_handle_to_page_ptr(handle); + put_page(page); + shrunken++; + } else { + /* We allow partially shrunken tts */ + ret = -ENOMEM; + break; + } + cond_resched(); + } + + if (shrunken) + ttm->page_flags |= (TTM_TT_FLAG_PRIV_BACKED_UP | + TTM_TT_FLAG_SWAPPED); + + return shrunken ? shrunken : ret; +} + /** * ttm_pool_init - Initialize a pool * diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c index 4b51b9023126..f520b8c93f03 100644 --- a/drivers/gpu/drm/ttm/ttm_tt.c +++ b/drivers/gpu/drm/ttm/ttm_tt.c @@ -40,6 +40,7 @@ #include #include #include +#include #include #include @@ -158,6 +159,8 @@ static void ttm_tt_init_fields(struct ttm_tt *ttm, ttm->swap_storage = NULL; ttm->sg = bo->sg; ttm->caching = caching; + ttm->restore = NULL; + ttm->backup = NULL; } int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo, @@ -182,6 +185,12 @@ void ttm_tt_fini(struct ttm_tt *ttm) fput(ttm->swap_storage); ttm->swap_storage = NULL; + ttm_pool_release_backed_up(ttm); + if (ttm->backup) { + ttm->backup->ops->fini(ttm->backup); + ttm->backup = NULL; + } + if (ttm->pages) kvfree(ttm->pages); else @@ -253,6 +262,34 @@ int ttm_tt_swapin(struct ttm_tt *ttm) } EXPORT_SYMBOL_FOR_TESTS_ONLY(ttm_tt_swapin); +/** + * ttm_tt_backup() - Helper to back up a struct ttm_tt. + * @bdev: The TTM device. + * @tt: The struct ttm_tt. + * @flags: Flags that govern the backup behaviour. + * + * Update the page accounting and call ttm_pool_backup_tt() to free pages + * or back them up.
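+ * Whether pages are freed or backed up is governed by @flags.purge.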
+ * + * Return: Number of pages freed or swapped out, or negative error code on + * error. + */ +long ttm_tt_backup(struct ttm_device *bdev, struct ttm_tt *tt, + const struct ttm_backup_flags flags) +{ + long ret; + + if (WARN_ON(IS_ERR_OR_NULL(tt->backup))) + return 0; + + ret = ttm_pool_backup_tt(&bdev->pool, tt, &flags); + + if (ret > 0) + tt->page_flags &= ~TTM_TT_FLAG_PRIV_POPULATED; + + return ret; +} + /** * ttm_tt_swapout - swap out tt object * diff --git a/include/drm/ttm/ttm_pool.h b/include/drm/ttm/ttm_pool.h index 160d954a261e..3112a4be835c 100644 --- a/include/drm/ttm/ttm_pool.h +++ b/include/drm/ttm/ttm_pool.h @@ -33,6 +33,7 @@ struct device; struct seq_file; +struct ttm_backup_flags; struct ttm_operation_ctx; struct ttm_pool; struct ttm_tt; @@ -89,6 +90,11 @@ void ttm_pool_fini(struct ttm_pool *pool); int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m); +void ttm_pool_release_backed_up(struct ttm_tt *tt); + +long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt *ttm, + const struct ttm_backup_flags *flags); + int ttm_pool_mgr_init(unsigned long num_pages); void ttm_pool_mgr_fini(void); diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h index 2b9d856ff388..1c28196c9be0 100644 --- a/include/drm/ttm/ttm_tt.h +++ b/include/drm/ttm/ttm_tt.h @@ -32,11 +32,13 @@ #include #include +struct ttm_backup; struct ttm_device; struct ttm_tt; struct ttm_resource; struct ttm_buffer_object; struct ttm_operation_ctx; +struct ttm_pool_tt_restore; /** * struct ttm_tt - This is a structure holding the pages, caching- and aperture @@ -85,6 +87,9 @@ struct ttm_tt { * fault handling abuses the DMA api a bit and dma_map_attrs can't be * used to assure pgprot always matches. * + * TTM_TT_FLAG_PRIV_BACKED_UP: TTM internal only. This is set if the + * struct ttm_tt has been (possibly partially) backed up. + * * TTM_TT_FLAG_PRIV_POPULATED: TTM internal only. DO NOT USE. This is * set by TTM after ttm_tt_populate() has successfully returned, and is * then unset when TTM calls ttm_tt_unpopulate(). @@ -96,6 +101,7 @@ struct ttm_tt { #define TTM_TT_FLAG_DECRYPTED BIT(4) #define TTM_TT_FLAG_PRIV_POPULATED BIT(5) +#define TTM_TT_FLAG_PRIV_BACKED_UP BIT(6) uint32_t page_flags; /** @num_pages: Number of pages in the page array. */ uint32_t num_pages; @@ -105,11 +111,20 @@ struct ttm_tt { dma_addr_t *dma_address; /** @swap_storage: Pointer to shmem struct file for swap storage. */ struct file *swap_storage; + /** + * @backup: Pointer to backup struct for backed up tts. + * Could be unified with @swap_storage. Meanwhile, the driver's + * ttm_tt_create() callback is responsible for assigning + * this field. + */ + struct ttm_backup *backup; /** * @caching: The current caching state of the pages, see enum * ttm_caching. */ enum ttm_caching caching; + /** @restore: Partial restoration from backup state. TTM private */ + struct ttm_pool_tt_restore *restore; }; /** @@ -230,6 +245,21 @@ void ttm_tt_mgr_init(unsigned long num_pages, unsigned long num_dma32_pages); struct ttm_kmap_iter *ttm_kmap_iter_tt_init(struct ttm_kmap_iter_tt *iter_tt, struct ttm_tt *tt); unsigned long ttm_tt_pages_limit(void); + +/** + * struct ttm_backup_flags - Flags to govern backup behaviour. + * @purge: Free pages without backing up. Bypass pools. + * @writeback: Attempt to copy contents directly to swap space, even + * if that means blocking on writes to external memory. 
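+ * With the shmem backend, pages backed up without @writeback remain dirty in the page cache and reach swap later via normal reclaim.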
+ */ +struct ttm_backup_flags { + u32 purge : 1; + u32 writeback : 1; +}; + +long ttm_tt_backup(struct ttm_device *bdev, struct ttm_tt *tt, + const struct ttm_backup_flags flags); + #if IS_ENABLED(CONFIG_AGP) #include From patchwork Mon Aug 19 08:34:46 2024 X-Patchwork-Submitter: Thomas Hellstrom X-Patchwork-Id: 13768074
From: Thomas Hellström To: intel-xe@lists.freedesktop.org Cc: Thomas Hellström, Christian König, Somalapuram Amaranath, Matthew Brost, dri-devel@lists.freedesktop.org, Paulo Zanoni Subject: [PATCH v9 3/6] drm/ttm: Use fault-injection to test error paths Date: Mon, 19 Aug 2024 10:34:46 +0200 Message-ID: <20240819083449.56701-4-thomas.hellstrom@linux.intel.com> In-Reply-To: <20240819083449.56701-1-thomas.hellstrom@linux.intel.com> References: <20240819083449.56701-1-thomas.hellstrom@linux.intel.com> Use fault-injection to test partial TTM swapout and interrupted swapin. Return -EINTR for swapin to test the caller's ability to handle and restart the swapin, and on swapout perform a partial swapout to test the swapin and release_shrunken functionality. v8: - Use the core fault-injection system. v9: - Fix compilation failure for !CONFIG_FAULT_INJECTION Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström Reviewed-by: Matthew Brost #v7 --- drivers/gpu/drm/ttm/ttm_pool.c | 27 ++++++++++++++++++++++++++- 1 file changed, 26 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c index 0d224cd9f8eb..b2718aef2edf 100644 --- a/drivers/gpu/drm/ttm/ttm_pool.c +++ b/drivers/gpu/drm/ttm/ttm_pool.c @@ -48,6 +48,13 @@ #include "ttm_module.h" +#ifdef CONFIG_FAULT_INJECTION +#include +static DECLARE_FAULT_ATTR(backup_fault_inject); +#else +#define should_fail(...) false +#endif + /** * struct ttm_pool_dma - Helper object for coherent DMA mappings * @@ -431,6 +438,7 @@ static int ttm_pool_restore_tt(struct ttm_pool_tt_restore *restore, struct ttm_backup *backup, struct ttm_operation_ctx *ctx) { + static unsigned long __maybe_unused swappedin; unsigned int i, nr = 1 << restore->order; int ret = 0; @@ -446,6 +454,12 @@ static int ttm_pool_restore_tt(struct ttm_pool_tt_restore *restore, if (handle == 0) continue; + if (IS_ENABLED(CONFIG_FAULT_INJECTION) && ctx->interruptible && + should_fail(&backup_fault_inject, 1)) { + ret = -EINTR; + break; + } + ret = backup->ops->copy_backed_up_page + (backup, restore->first_page[i], + handle, ctx->interruptible); @@ -889,7 +903,14 @@ long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt *ttm, alloc_gfp = GFP_KERNEL | __GFP_HIGH | __GFP_NOWARN | __GFP_RETRY_MAYFAIL; - for (i = 0; i < ttm->num_pages; ++i) { + num_pages = ttm->num_pages; + + /* Pretend doing fault injection by shrinking only half of the pages.
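+ * This exercises the partial-backup path: the remaining pages stay resident while the tt is still marked backed up.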
*/ + + if (IS_ENABLED(CONFIG_FAULT_INJECTION) && should_fail(&backup_fault_inject, 1)) + num_pages = DIV_ROUND_UP(num_pages, 2); + + for (i = 0; i < num_pages; ++i) { page = ttm->pages[i]; if (unlikely(!page)) continue; @@ -1178,6 +1199,10 @@ int ttm_pool_mgr_init(unsigned long num_pages) &ttm_pool_debugfs_globals_fops); debugfs_create_file("page_pool_shrink", 0400, ttm_debugfs_root, NULL, &ttm_pool_debugfs_shrink_fops); +#ifdef CONFIG_FAULT_INJECTION + fault_create_debugfs_attr("backup_fault_inject", ttm_debugfs_root, + &backup_fault_inject); +#endif #endif mm_shrinker = shrinker_alloc(0, "drm-ttm_pool"); From patchwork Mon Aug 19 08:34:47 2024 X-Patchwork-Submitter: Thomas Hellstrom X-Patchwork-Id: 13768075
From: Thomas Hellström To: intel-xe@lists.freedesktop.org Cc: Thomas Hellström, Matthew Brost, Somalapuram Amaranath, Christian König, Paulo Zanoni, dri-devel@lists.freedesktop.org Subject: [PATCH v9 4/6] drm/ttm: Add a shrinker helper and export the LRU walker for driver use Date: Mon, 19 Aug 2024 10:34:47 +0200 Message-ID: <20240819083449.56701-5-thomas.hellstrom@linux.intel.com> In-Reply-To: <20240819083449.56701-1-thomas.hellstrom@linux.intel.com> References: <20240819083449.56701-1-thomas.hellstrom@linux.intel.com> Following the design direction communicated here: https://lore.kernel.org/linux-mm/b7491378-defd-4f1c-31e2-29e4c77e2d67@amd.com/T/#ma918844aa8a6efe8768fdcda0c6590d5c93850c9 Export the LRU walker for driver shrinker use and add a bo shrinker helper for initial use by the xe driver. v8: - Split out from another patch. - Use a struct for bool arguments to increase readability (Matt Brost). - Unmap user-space cpu-mappings before shrinking pages. - Explain non-fatal error codes (Matt Brost) Signed-off-by: Thomas Hellström Reviewed-by: Matthew Brost --- drivers/gpu/drm/ttm/ttm_bo_util.c | 65 +++++++++++++++++++++++++++++++ include/drm/ttm/ttm_bo.h | 17 ++++++++ 2 files changed, 82 insertions(+) diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index 3c07f4712d5c..3490e3347de9 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -919,3 +919,68 @@ s64 ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev, return progress; } +EXPORT_SYMBOL(ttm_lru_walk_for_evict); + +/** + * ttm_bo_try_shrink - LRU walk helper to shrink a ttm buffer object. + * @walk: The struct ttm_lru_walk that describes the walk. + * @bo: The buffer object. + * @flags: Flags governing the shrinking behaviour. + * + * The function uses the ttm_tt_backup() functionality to back up or + * purge a struct ttm_tt. If the bo is not in system, it's first + * moved there, unless @flags.allow_move is false. + * + * Return: The number of pages shrunken or purged, or + * negative error code on failure. + */ +long ttm_bo_try_shrink(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo, + const struct ttm_bo_shrink_flags flags) +{ + static const struct ttm_place sys_placement_flags = { + .fpfn = 0, + .lpfn = 0, + .mem_type = TTM_PL_SYSTEM, + .flags = 0, + }; + static struct ttm_placement sys_placement = { + .num_placement = 1, + .placement = &sys_placement_flags, + }; + struct ttm_operation_ctx *ctx = walk->ctx; + struct ttm_tt *tt = bo->ttm; + long lret; + + dma_resv_assert_held(bo->base.resv); + + if (!tt || !ttm_tt_is_populated(tt)) + return 0; + + if (flags.allow_move && bo->resource->mem_type != TTM_PL_SYSTEM) { + int ret = ttm_bo_validate(bo, &sys_placement, ctx); + + /* Consider -ENOMEM and -ENOSPC non-fatal.
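+ * Translating them to -EBUSY lets the LRU walk treat this bo as busy and move on to the next one instead of failing outright.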
*/ + if (ret) { + if (ret == -ENOMEM || ret == -ENOSPC) + ret = -EBUSY; + return ret; + } + } + + ttm_bo_unmap_virtual(bo); + lret = ttm_bo_wait_ctx(bo, ctx); + if (lret < 0) { + if (lret == -ERESTARTSYS) + return lret; + return 0; + } + + lret = ttm_tt_backup(bo->bdev, tt, (struct ttm_backup_flags) + {.purge = flags.purge, + .writeback = flags.writeback}); + if (lret < 0 && lret != -EINTR) + return 0; + + return lret; +} +EXPORT_SYMBOL(ttm_bo_try_shrink); diff --git a/include/drm/ttm/ttm_bo.h b/include/drm/ttm/ttm_bo.h index d1a732d56259..479ada85cea1 100644 --- a/include/drm/ttm/ttm_bo.h +++ b/include/drm/ttm/ttm_bo.h @@ -229,6 +229,23 @@ struct ttm_lru_walk { s64 ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev, struct ttm_resource_manager *man, s64 target); +/** + * struct ttm_bo_shrink_flags - flags to govern the bo shrinking behaviour + * @purge: Purge the content rather than backing it up. + * @writeback: Attempt to immediately write content to swap space. + * @allow_move: Allow moving to system before shrinking. This is typically + * not desired for zombie- or ghost objects (with zombie object meaning + * objects with a zero gem object refcount) + */ +struct ttm_bo_shrink_flags { + u32 purge : 1; + u32 writeback : 1; + u32 allow_move : 1; +}; + +long ttm_bo_try_shrink(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo, + const struct ttm_bo_shrink_flags flags); + /** * ttm_bo_get - reference a struct ttm_buffer_object * From patchwork Mon Aug 19 08:34:48 2024 X-Patchwork-Submitter: Thomas Hellstrom X-Patchwork-Id: 13768076
From: Thomas Hellström To: intel-xe@lists.freedesktop.org Cc: Thomas Hellström, Christian König, Somalapuram Amaranath, Matthew Brost, dri-devel@lists.freedesktop.org, Paulo Zanoni Subject: [PATCH v9 5/6] drm/xe: Add a shrinker for xe bos Date: Mon, 19 Aug 2024 10:34:48 +0200 Message-ID: <20240819083449.56701-6-thomas.hellstrom@linux.intel.com> In-Reply-To: <20240819083449.56701-1-thomas.hellstrom@linux.intel.com> References: <20240819083449.56701-1-thomas.hellstrom@linux.intel.com> Rather than relying on the TTM watermark accounting, add a shrinker for xe_bos in TT or system memory. Leverage the newly added TTM per-page shrinking and shmem backup support. Although xe doesn't fully support WONTNEED (purgeable) bos yet, introduce and add shrinker support for purgeable ttm_tts. v2: - Cleanups, bugfixes and a KUNIT shrinker test. - Add writeback support, and activate it if called from kswapd. v3: - Move the try_shrink() helper to core TTM. - Minor cleanups. v4: - Add runtime pm for the shrinker. Shrinking may require an active device for CCS metadata copying. v5: - Separately purge ghost- and zombie objects in the shrinker. - Fix a format specifier - type inconsistency. (Kernel test robot). v7: - s/long/s64/ (Christian König) - s/sofar/progress/ (Matt Brost) v8: - Rebase on Xe KUNIT update. - Add content verifying to the shrinker kunit test. - Split out TTM changes to a separate patch. - Get rid of multiple bool arguments for clarity (Matt Brost) - Avoid an error pointer dereference (Matt Brost) - Avoid an integer overflow (Matt Auld) - Address misc review comments by Matt Brost. v9: - Fix a compilation error. - Rebase.
Cc: Christian König Cc: Somalapuram Amaranath Cc: Matthew Brost Cc: Signed-off-by: Thomas Hellström --- drivers/gpu/drm/xe/Makefile | 1 + drivers/gpu/drm/xe/tests/xe_bo.c | 224 +++++++++++++++++++++ drivers/gpu/drm/xe/xe_bo.c | 166 +++++++++++++-- drivers/gpu/drm/xe/xe_bo.h | 36 ++++ drivers/gpu/drm/xe/xe_device.c | 8 + drivers/gpu/drm/xe/xe_device_types.h | 2 + drivers/gpu/drm/xe/xe_shrinker.c | 289 +++++++++++++++++++++++++++ drivers/gpu/drm/xe/xe_shrinker.h | 18 ++ 8 files changed, 728 insertions(+), 16 deletions(-) create mode 100644 drivers/gpu/drm/xe/xe_shrinker.c create mode 100644 drivers/gpu/drm/xe/xe_shrinker.h diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile index b9670ae09a9e..3ca597253fd2 100644 --- a/drivers/gpu/drm/xe/Makefile +++ b/drivers/gpu/drm/xe/Makefile @@ -92,6 +92,7 @@ xe-y += xe_bb.o \ xe_ring_ops.o \ xe_sa.o \ xe_sched_job.o \ + xe_shrinker.o \ xe_step.o \ xe_sync.o \ xe_tile.o \ diff --git a/drivers/gpu/drm/xe/tests/xe_bo.c b/drivers/gpu/drm/xe/tests/xe_bo.c index 8dac069483e8..536e8eca9108 100644 --- a/drivers/gpu/drm/xe/tests/xe_bo.c +++ b/drivers/gpu/drm/xe/tests/xe_bo.c @@ -6,6 +6,11 @@ #include #include +#include +#include + +#include + #include "tests/xe_kunit_helpers.h" #include "tests/xe_pci_test.h" #include "tests/xe_test.h" @@ -358,9 +363,228 @@ static void xe_bo_evict_kunit(struct kunit *test) evict_test_run_device(xe); } +struct xe_bo_link { + struct list_head link; + struct xe_bo *bo; + u32 val; +}; + +#define XE_BO_SHRINK_SIZE ((unsigned long)SZ_64M) + +static int shrink_test_fill_random(struct xe_bo *bo, struct rnd_state *state, + struct xe_bo_link *link) +{ + struct iosys_map map; + int ret = ttm_bo_vmap(&bo->ttm, &map); + size_t __maybe_unused i; + + if (ret) + return ret; + + for (i = 0; i < bo->ttm.base.size; i += sizeof(u32)) { + u32 val = prandom_u32_state(state); + + iosys_map_wr(&map, i, u32, val); + if (i == 0) + link->val = val; + } + + ttm_bo_vunmap(&bo->ttm, &map); + return 0; +} + +static bool shrink_test_verify(struct kunit *test, struct xe_bo *bo, + unsigned int bo_nr, struct rnd_state *state, + struct xe_bo_link *link) +{ + struct iosys_map map; + int ret = ttm_bo_vmap(&bo->ttm, &map); + size_t i; + bool failed = false; + + if (ret) { + KUNIT_FAIL(test, "Error mapping bo %u for content check.\n", bo_nr); + return true; + } + + for (i = 0; i < bo->ttm.base.size; i += sizeof(u32)) { + u32 val = prandom_u32_state(state); + + if (iosys_map_rd(&map, i, u32) != val) { + KUNIT_FAIL(test, "Content not preserved, bo %u offset 0x%016llx", + bo_nr, (unsigned long long)i); + kunit_info(test, "Failed value is 0x%08x, recorded 0x%08x\n", + (unsigned int)iosys_map_rd(&map, i, u32), val); + if (i == 0 && val != link->val) + kunit_info(test, "Looks like PRNG is out of sync.\n"); + failed = true; + break; + } + } + + ttm_bo_vunmap(&bo->ttm, &map); + + return failed; +} + +/* + * Try to create system bos corresponding to twice the amount + * of available system memory to test shrinker functionality. + * If no swap space is available to accommodate the + * memory overcommit, mark bos purgeable. 
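+ * Purgeable bos can be freed by the shrinker without any backup, so the test still exercises shrinking when swap is exhausted.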
+/*
+ * Try to create system bos corresponding to twice the amount
+ * of available system memory to test shrinker functionality.
+ * If no swap space is available to accommodate the
+ * memory overcommit, mark bos purgeable.
+ */
+static int shrink_test_run_device(struct xe_device *xe)
+{
+	struct kunit *test = kunit_get_current_test();
+	LIST_HEAD(bos);
+	struct xe_bo_link *link, *next;
+	struct sysinfo si;
+	size_t total, alloced;
+	unsigned int interrupted = 0, successful = 0, count = 0;
+	struct rnd_state prng;
+	u64 rand_seed;
+	bool failed = false;
+
+	rand_seed = get_random_u64();
+	prandom_seed_state(&prng, rand_seed);
+
+	si_meminfo(&si);
+	total = si.freeram * si.mem_unit;
+
+	kunit_info(test, "Free ram is %lu bytes. Will allocate twice of that.\n",
+		   (unsigned long)total);
+
+	total <<= 1;
+	for (alloced = 0; alloced < total; alloced += XE_BO_SHRINK_SIZE) {
+		struct xe_bo *bo;
+		unsigned int mem_type;
+		struct xe_ttm_tt *xe_tt;
+
+		link = kzalloc(sizeof(*link), GFP_KERNEL);
+		if (!link) {
+			KUNIT_FAIL(test, "Unexpected link allocation failure\n");
+			failed = true;
+			break;
+		}
+
+		INIT_LIST_HEAD(&link->link);
+
+		/* We can create bos using WC caching here. But it is slower. */
+		bo = xe_bo_create_user(xe, NULL, NULL, XE_BO_SHRINK_SIZE,
+				       DRM_XE_GEM_CPU_CACHING_WB,
+				       XE_BO_FLAG_SYSTEM);
+		if (IS_ERR(bo)) {
+			if (bo != ERR_PTR(-ENOMEM) && bo != ERR_PTR(-ENOSPC) &&
+			    bo != ERR_PTR(-EINTR) && bo != ERR_PTR(-ERESTARTSYS))
+				KUNIT_FAIL(test, "Error creating bo: %pe\n", bo);
+			kfree(link);
+			failed = true;
+			break;
+		}
+		xe_bo_lock(bo, false);
+		xe_tt = container_of(bo->ttm.ttm, typeof(*xe_tt), ttm);
+
+		/*
+		 * If we're low on swap entries, we can't shrink unless the bo
+		 * is marked purgeable.
+		 */
+		if (get_nr_swap_pages() < (XE_BO_SHRINK_SIZE >> PAGE_SHIFT) * 128) {
+			long num_pages = xe_tt->ttm.num_pages;
+
+			xe_tt->purgeable = true;
+			xe_shrinker_mod_pages(xe->mem.shrinker, -num_pages,
+					      num_pages);
+		} else {
+			int ret = shrink_test_fill_random(bo, &prng, link);
+
+			if (ret) {
+				xe_bo_unlock(bo);
+				xe_bo_put(bo);
+				KUNIT_FAIL(test, "Error filling bo with random data: %pe\n",
+					   ERR_PTR(ret));
+				kfree(link);
+				failed = true;
+				break;
+			}
+		}
+
+		mem_type = bo->ttm.resource->mem_type;
+		xe_bo_unlock(bo);
+		link->bo = bo;
+		list_add_tail(&link->link, &bos);
+
+		if (mem_type != XE_PL_TT) {
+			KUNIT_FAIL(test, "Bo in incorrect memory type: %u\n",
+				   bo->ttm.resource->mem_type);
+			failed = true;
+		}
+		cond_resched();
+		if (signal_pending(current))
+			break;
+	}
+
+	/*
+	 * Read back and destroy bos. Reset the pseudo-random seed to get an
+	 * identical pseudo-random number sequence for readback.
+	 */
+	prandom_seed_state(&prng, rand_seed);
+	list_for_each_entry_safe(link, next, &bos, link) {
+		static struct ttm_operation_ctx ctx = {.interruptible = true};
+		struct xe_bo *bo = link->bo;
+		struct xe_ttm_tt *xe_tt;
+		int ret;
+
+		count++;
+		if (!signal_pending(current) && !failed) {
+			bool purgeable, intr = false;
+
+			xe_bo_lock(bo, false);
+
+			/* xe_tt->purgeable is cleared on validate. */
+			xe_tt = container_of(bo->ttm.ttm, typeof(*xe_tt), ttm);
+			purgeable = xe_tt->purgeable;
+			do {
+				ret = ttm_bo_validate(&bo->ttm, &tt_placement, &ctx);
+				if (ret == -EINTR)
+					intr = true;
+			} while (ret == -EINTR && !signal_pending(current));
+
+			if (!ret && !purgeable)
+				failed = shrink_test_verify(test, bo, count, &prng, link);
+
+			xe_bo_unlock(bo);
+			if (ret) {
+				KUNIT_FAIL(test, "Validation failed: %pe\n",
+					   ERR_PTR(ret));
+				failed = true;
+			} else if (intr) {
+				interrupted++;
+			} else {
+				successful++;
+			}
+		}
+		xe_bo_put(link->bo);
+		list_del(&link->link);
+		kfree(link);
+	}
+	kunit_info(test, "Readbacks interrupted: %u successful: %u\n",
+		   interrupted, successful);
+
+	return 0;
+}
+
+static void xe_bo_shrink_kunit(struct kunit *test)
+{
+	struct xe_device *xe = test->priv;
+
+	shrink_test_run_device(xe);
+}
+
 static struct kunit_case xe_bo_tests[] = {
 	KUNIT_CASE_PARAM(xe_ccs_migrate_kunit, xe_pci_live_device_gen_param),
 	KUNIT_CASE_PARAM(xe_bo_evict_kunit, xe_pci_live_device_gen_param),
+	KUNIT_CASE_PARAM_ATTR(xe_bo_shrink_kunit, xe_pci_live_device_gen_param,
+			      {.speed = KUNIT_SPEED_SLOW}),
 	{}
 };
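The content check above relies on a seeded PRNG rather than keeping a
full copy of each bo: fill with a known seed, then reseed to replay the
identical sequence at verification time. A condensed illustration of
that pattern (demo helper, not from the patch):

#include <linux/prandom.h>
#include <linux/types.h>

/* Fill a buffer from a seeded PRNG, then replay the seed to verify. */
static bool demo_prng_roundtrip(u32 *buf, size_t n, u64 seed)
{
	struct rnd_state prng;
	size_t i;

	prandom_seed_state(&prng, seed);
	for (i = 0; i < n; i++)
		buf[i] = prandom_u32_state(&prng);

	prandom_seed_state(&prng, seed);	/* same seed, same sequence */
	for (i = 0; i < n; i++)
		if (buf[i] != prandom_u32_state(&prng))
			return false;

	return true;
}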
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index ce8282e67e84..04d30e77ae8b 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -25,6 +26,7 @@
 #include "xe_pm.h"
 #include "xe_preempt_fence.h"
 #include "xe_res_cursor.h"
+#include "xe_shrinker.h"
 #include "xe_trace_bo.h"
 #include "xe_ttm_stolen_mgr.h"
 #include "xe_vm.h"
@@ -278,11 +280,15 @@ static void xe_evict_flags(struct ttm_buffer_object *tbo,
 	}
 }
 
+/* struct xe_ttm_tt - Subclassed ttm_tt for xe */
 struct xe_ttm_tt {
 	struct ttm_tt ttm;
-	struct device *dev;
+	/** @xe: The xe device */
+	struct xe_device *xe;
 	struct sg_table sgt;
 	struct sg_table *sg;
+	/** @purgeable: Whether the bo is purgeable (WONTNEED) */
+	bool purgeable;
 };
 
 static int xe_tt_map_sg(struct ttm_tt *tt)
@@ -291,7 +297,8 @@ static int xe_tt_map_sg(struct ttm_tt *tt)
 	unsigned long num_pages = tt->num_pages;
 	int ret;
 
-	XE_WARN_ON(tt->page_flags & TTM_TT_FLAG_EXTERNAL);
+	XE_WARN_ON((tt->page_flags & TTM_TT_FLAG_EXTERNAL) &&
+		   !(tt->page_flags & TTM_TT_FLAG_EXTERNAL_MAPPABLE));
 
 	if (xe_tt->sg)
 		return 0;
@@ -299,13 +306,13 @@ static int xe_tt_map_sg(struct ttm_tt *tt)
 	ret = sg_alloc_table_from_pages_segment(&xe_tt->sgt, tt->pages,
 						num_pages, 0,
 						(u64)num_pages << PAGE_SHIFT,
-						xe_sg_segment_size(xe_tt->dev),
+						xe_sg_segment_size(xe_tt->xe->drm.dev),
 						GFP_KERNEL);
 	if (ret)
 		return ret;
 
 	xe_tt->sg = &xe_tt->sgt;
-	ret = dma_map_sgtable(xe_tt->dev, xe_tt->sg, DMA_BIDIRECTIONAL,
+	ret = dma_map_sgtable(xe_tt->xe->drm.dev, xe_tt->sg, DMA_BIDIRECTIONAL,
 			      DMA_ATTR_SKIP_CPU_SYNC);
 	if (ret) {
 		sg_free_table(xe_tt->sg);
@@ -321,7 +328,7 @@ static void xe_tt_unmap_sg(struct ttm_tt *tt)
 	struct xe_ttm_tt *xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
 
 	if (xe_tt->sg) {
-		dma_unmap_sgtable(xe_tt->dev, xe_tt->sg,
+		dma_unmap_sgtable(xe_tt->xe->drm.dev, xe_tt->sg,
 				  DMA_BIDIRECTIONAL, 0);
 		sg_free_table(xe_tt->sg);
 		xe_tt->sg = NULL;
@@ -336,21 +343,47 @@ struct sg_table *xe_bo_sg(struct xe_bo *bo)
 	return xe_tt->sg;
 }
 
+/*
+ * Account ttm pages against the device shrinker's shrinkable and
+ * purgeable counts.
+ */
+static void xe_ttm_tt_account_add(struct ttm_tt *tt)
+{
+	struct xe_ttm_tt *xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
+
+	if (xe_tt->purgeable)
+		xe_shrinker_mod_pages(xe_tt->xe->mem.shrinker, 0, tt->num_pages);
+	else
+		xe_shrinker_mod_pages(xe_tt->xe->mem.shrinker, tt->num_pages, 0);
+}
+
+static void xe_ttm_tt_account_subtract(struct ttm_tt *tt)
+{
+	struct xe_ttm_tt *xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
+
+	if (xe_tt->purgeable)
+		xe_shrinker_mod_pages(xe_tt->xe->mem.shrinker, 0, -(long)tt->num_pages);
+	else
+		xe_shrinker_mod_pages(xe_tt->xe->mem.shrinker, -(long)tt->num_pages, 0);
+}
+
 static struct ttm_tt *xe_ttm_tt_create(struct ttm_buffer_object *ttm_bo,
 				       u32 page_flags)
 {
 	struct xe_bo *bo = ttm_to_xe_bo(ttm_bo);
 	struct xe_device *xe = xe_bo_device(bo);
-	struct xe_ttm_tt *tt;
+	struct xe_ttm_tt *xe_tt;
+	struct ttm_tt *tt;
 	unsigned long extra_pages;
 	enum ttm_caching caching = ttm_cached;
 	int err;
 
-	tt = kzalloc(sizeof(*tt), GFP_KERNEL);
-	if (!tt)
+	xe_tt = kzalloc(sizeof(*xe_tt), GFP_KERNEL);
+	if (!xe_tt)
 		return NULL;
 
-	tt->dev = xe->drm.dev;
+	tt = &xe_tt->ttm;
+	xe_tt->xe = xe;
 
 	extra_pages = 0;
 	if (xe_bo_needs_ccs_pages(bo))
@@ -396,42 +429,135 @@ static struct ttm_tt *xe_ttm_tt_create(struct ttm_buffer_object *ttm_bo,
 		caching = ttm_uncached;
 	}
 
-	err = ttm_tt_init(&tt->ttm, &bo->ttm, page_flags, caching, extra_pages);
+	if (ttm_bo->type != ttm_bo_type_sg)
+		page_flags |= TTM_TT_FLAG_EXTERNAL | TTM_TT_FLAG_EXTERNAL_MAPPABLE;
+
+	err = ttm_tt_init(tt, &bo->ttm, page_flags, caching, extra_pages);
 	if (err) {
-		kfree(tt);
+		kfree(xe_tt);
 		return NULL;
 	}
 
-	return &tt->ttm;
+	tt->backup = ttm_backup_shmem_create((loff_t)tt->num_pages << PAGE_SHIFT);
+	if (IS_ERR(tt->backup)) {
+		tt->backup = NULL;
+		ttm_tt_fini(tt);
+		kfree(xe_tt);
+		return NULL;
+	}
+
+	return tt;
 }
 
 static int xe_ttm_tt_populate(struct ttm_device *ttm_dev, struct ttm_tt *tt,
 			      struct ttm_operation_ctx *ctx)
 {
+	struct xe_ttm_tt *xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
 	int err;
 
 	/*
 	 * dma-bufs are not populated with pages, and the dma-
 	 * addresses are set up when moved to XE_PL_TT.
 	 */
-	if (tt->page_flags & TTM_TT_FLAG_EXTERNAL)
+	if ((tt->page_flags & TTM_TT_FLAG_EXTERNAL) &&
+	    !(tt->page_flags & TTM_TT_FLAG_EXTERNAL_MAPPABLE))
 		return 0;
 
 	err = ttm_pool_alloc(&ttm_dev->pool, tt, ctx);
 	if (err)
 		return err;
 
-	return err;
+	xe_tt->purgeable = false;
+	xe_ttm_tt_account_add(tt);
+
+	return 0;
 }
 
 static void xe_ttm_tt_unpopulate(struct ttm_device *ttm_dev, struct ttm_tt *tt)
 {
-	if (tt->page_flags & TTM_TT_FLAG_EXTERNAL)
+	if ((tt->page_flags & TTM_TT_FLAG_EXTERNAL) &&
+	    !(tt->page_flags & TTM_TT_FLAG_EXTERNAL_MAPPABLE))
 		return;
 
 	xe_tt_unmap_sg(tt);
 
-	return ttm_pool_free(&ttm_dev->pool, tt);
+	ttm_pool_free(&ttm_dev->pool, tt);
+	xe_ttm_tt_account_subtract(tt);
+}
+
+/**
+ * xe_bo_shrink() - Try to shrink an xe bo.
+ * @walk: The walk parameters
+ * @bo: The TTM buffer object
+ * @flags: Flags governing the shrink behaviour.
+ *
+ * Try to shrink or purge a bo, and if it succeeds, unmap its DMA
+ * mappings. Note that we must also be able to handle non-xe bos
+ * (ghost bos), but only if their struct ttm_tt is embedded in
+ * a struct xe_ttm_tt.
+ *
+ * Return: The number of pages shrunk or purged, or a negative error
+ * code on failure.
+ */
+long xe_bo_shrink(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo,
+		  const struct xe_bo_shrink_flags flags)
+{
+	struct ttm_tt *tt = bo->ttm;
+	struct xe_ttm_tt *xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
+	struct ttm_place place = {.mem_type = bo->resource->mem_type};
+	struct xe_bo *xe_bo = ttm_to_xe_bo(bo);
+	struct xe_device *xe = xe_tt->xe;
+	bool needs_rpm;
+	long lret = 0L;
+
+	if (!tt || !ttm_tt_is_populated(tt) ||
+	    !(tt->page_flags & TTM_TT_FLAG_EXTERNAL_MAPPABLE) ||
+	    (flags.purge && !xe_tt->purgeable))
+		return 0L;
+
+	if (!ttm_bo_eviction_valuable(bo, &place))
+		return 0L;
+
+	/* Beware of zombies (GEM object refcount == 0) and ghosts. */
+	if (!xe_bo_is_xe_bo(bo) || !xe_bo_get_unless_zero(xe_bo)) {
+		lret = ttm_bo_wait_ctx(bo, walk->ctx);
+		if (lret)
+			return lret;
+
+		/*
+		 * We don't allow move from TT to SYSTEM for these objects,
+		 * hence we need to unmap sg first.
+		 */
+		xe_tt_unmap_sg(tt);
+		return ttm_bo_try_shrink(walk, bo, (struct ttm_bo_shrink_flags)
+					 {.purge = true,
+					  .writeback = false,
+					  .allow_move = false});
+	}
+
+	/* System CCS needs gpu copy when moving PL_TT -> PL_SYSTEM */
+	needs_rpm = (!IS_DGFX(xe) && bo->resource->mem_type != XE_PL_SYSTEM &&
+		     xe_bo_needs_ccs_pages(xe_bo) && !xe_tt->purgeable);
+	if (needs_rpm && !xe_pm_runtime_get_if_active(xe))
+		goto out_unref;
+
+	lret = ttm_bo_try_shrink(walk, bo, (struct ttm_bo_shrink_flags)
+				 {.purge = xe_tt->purgeable,
+				  .writeback = flags.writeback,
+				  .allow_move = true});
+	if (needs_rpm)
+		xe_pm_runtime_put(xe);
+
+	if (lret > 0) {
+		xe_assert(xe, !ttm_tt_is_populated(tt));
+
+		xe_ttm_tt_account_subtract(tt);
+	}
+
+out_unref:
+	xe_bo_put(xe_bo);
+
+	return lret;
 }
 
 static void xe_ttm_tt_destroy(struct ttm_device *ttm_dev, struct ttm_tt *tt)
@@ -1698,6 +1824,8 @@ int xe_bo_pin_external(struct xe_bo *bo)
 	}
 
 	ttm_bo_pin(&bo->ttm);
+	if (bo->ttm.ttm && ttm_tt_is_populated(bo->ttm.ttm))
+		xe_ttm_tt_account_subtract(bo->ttm.ttm);
 
 	/*
 	 * FIXME: If we always use the reserve / unreserve functions for locking
@@ -1756,6 +1884,8 @@ int xe_bo_pin(struct xe_bo *bo)
 	}
 
 	ttm_bo_pin(&bo->ttm);
+	if (bo->ttm.ttm && ttm_tt_is_populated(bo->ttm.ttm))
+		xe_ttm_tt_account_subtract(bo->ttm.ttm);
 
 	/*
 	 * FIXME: If we always use the reserve / unreserve functions for locking
@@ -1790,6 +1920,8 @@ void xe_bo_unpin_external(struct xe_bo *bo)
 	spin_unlock(&xe->pinned.lock);
 
 	ttm_bo_unpin(&bo->ttm);
+	if (bo->ttm.ttm && ttm_tt_is_populated(bo->ttm.ttm))
+		xe_ttm_tt_account_add(bo->ttm.ttm);
 
 	/*
 	 * FIXME: If we always use the reserve / unreserve functions for locking
@@ -1818,6 +1950,8 @@ void xe_bo_unpin(struct xe_bo *bo)
 	}
 
 	ttm_bo_unpin(&bo->ttm);
+	if (bo->ttm.ttm && ttm_tt_is_populated(bo->ttm.ttm))
+		xe_ttm_tt_account_add(bo->ttm.ttm);
 }
 
 /**
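The xe_bo.h hunk below adds xe_bo_get_unless_zero(). The underlying
primitive is kref_get_unless_zero(), which only takes a reference if
the refcount has not already dropped to zero. A generic sketch of the
pattern, with illustrative types that are not from the patch:

#include <linux/kref.h>

struct demo_obj {
	struct kref refcount;
};

/* Take a reference only if the object is not already on its way out. */
static struct demo_obj *demo_get_unless_zero(struct demo_obj *obj)
{
	if (!obj || !kref_get_unless_zero(&obj->refcount))
		return NULL;

	return obj;
}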
diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index 935a94279026..2c70a6bb57eb 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -64,6 +64,7 @@
 #define XE_BO_PROPS_INVALID (-1)
 
 struct sg_table;
+struct xe_ttm_lru_walk;
 
 struct xe_bo *xe_bo_alloc(void);
 void xe_bo_free(struct xe_bo *bo);
@@ -126,6 +127,28 @@ static inline struct xe_bo *xe_bo_get(struct xe_bo *bo)
 	return bo;
 }
 
+/**
+ * xe_bo_get_unless_zero() - Conditionally obtain a GEM object refcount on an
+ * xe bo
+ * @bo: The bo for which we want to obtain a refcount.
+ *
+ * There is a short window between where the bo's GEM object refcount reaches
+ * zero and where we put the final ttm_bo reference. Code in the eviction and
+ * shrinking paths should therefore attempt to grab a gem object reference
+ * before trying to use members outside of the base class ttm object. This
+ * function is intended for that purpose. On successful return, this function
+ * must be paired with an xe_bo_put().
+ *
+ * Return: @bo on success, NULL on failure.
+ */
+static inline __must_check struct xe_bo *xe_bo_get_unless_zero(struct xe_bo *bo)
+{
+	if (!bo || !kref_get_unless_zero(&bo->ttm.base.refcount))
+		return NULL;
+
+	return bo;
+}
+
 static inline void xe_bo_put(struct xe_bo *bo)
 {
 	if (bo)
@@ -315,6 +338,19 @@ static inline unsigned int xe_sg_segment_size(struct device *dev)
 
 #define i915_gem_object_flush_if_display(obj)	((void)(obj))
 
+/**
+ * struct xe_bo_shrink_flags - flags governing the shrink behaviour.
+ * @purge: Only purging allowed. Don't shrink if bo not purgeable.
+ * @writeback: Attempt to immediately move content to swap.
+ */
+struct xe_bo_shrink_flags {
+	u32 purge : 1;
+	u32 writeback : 1;
+};
+
+long xe_bo_shrink(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo,
+		  const struct xe_bo_shrink_flags flags);
+
 #if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
 /**
  * xe_bo_is_mem_type - Whether the bo currently resides in the given
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index b6db7e082d88..a50f2e9e9236 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -48,6 +48,7 @@
 #include "xe_pcode.h"
 #include "xe_pm.h"
 #include "xe_query.h"
+#include "xe_shrinker.h"
 #include "xe_sriov.h"
 #include "xe_tile.h"
 #include "xe_ttm_stolen_mgr.h"
@@ -297,6 +298,9 @@ static void xe_device_destroy(struct drm_device *dev, void *dummy)
 	if (xe->unordered_wq)
 		destroy_workqueue(xe->unordered_wq);
 
+	if (!IS_ERR_OR_NULL(xe->mem.shrinker))
+		xe_shrinker_destroy(xe->mem.shrinker);
+
 	ttm_device_fini(&xe->ttm);
 }
 
@@ -326,6 +330,10 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
 	if (err)
 		goto err;
 
+	xe->mem.shrinker = xe_shrinker_create(xe);
+	if (IS_ERR(xe->mem.shrinker))
+		return ERR_CAST(xe->mem.shrinker);
+
 	xe->info.devid = pdev->device;
 	xe->info.revid = pdev->revision;
 	xe->info.force_execlist = xe_modparam.force_execlist;
diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index fc89420d0ba6..7c89cc764850 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -339,6 +339,8 @@ struct xe_device {
 		struct xe_mem_region vram;
 		/** @mem.sys_mgr: system TTM manager */
 		struct ttm_resource_manager sys_mgr;
+		/** @mem.shrinker: system memory shrinker. */
+		struct xe_shrinker *shrinker;
 	} mem;
 
 	/** @sriov: device level virtualization data */
diff --git a/drivers/gpu/drm/xe/xe_shrinker.c b/drivers/gpu/drm/xe/xe_shrinker.c
new file mode 100644
index 000000000000..4de98c1dd4a7
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_shrinker.c
@@ -0,0 +1,289 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#include
+#include
+
+#include
+#include
+
+#include "xe_bo.h"
+#include "xe_pm.h"
+#include "xe_shrinker.h"
+
+/**
+ * struct xe_shrinker - per-device shrinker
+ * @xe: Back pointer to the device.
+ * @lock: Lock protecting accounting.
+ * @shrinkable_pages: Number of pages that are currently shrinkable.
+ * @purgeable_pages: Number of pages that are currently purgeable.
+ * @shrink: Pointer to the mm shrinker.
+ * @pm_worker: Worker to wake up the device if required.
+ */
+struct xe_shrinker {
+	struct xe_device *xe;
+	rwlock_t lock;
+	long shrinkable_pages;
+	long purgeable_pages;
+	struct shrinker *shrink;
+	struct work_struct pm_worker;
+};
+
+/**
+ * struct xe_shrink_lru_walk - lru_walk subclass for shrinker
+ * @walk: The embedded base class.
+ * @xe: Pointer to the xe device.
+ * @purge: Purgeable-only request from the shrinker.
+ * @writeback: Try to write back to persistent storage.
+ */
+struct xe_shrink_lru_walk {
+	struct ttm_lru_walk walk;
+	struct xe_device *xe;
+	bool purge;
+	bool writeback;
+};
+
+static struct xe_shrinker *to_xe_shrinker(struct shrinker *shrink)
+{
+	return shrink->private_data;
+}
+
+static struct xe_shrink_lru_walk *
+to_xe_shrink_lru_walk(struct ttm_lru_walk *walk)
+{
+	return container_of(walk, struct xe_shrink_lru_walk, walk);
+}
+
+/**
+ * xe_shrinker_mod_pages() - Modify shrinker page accounting
+ * @shrinker: Pointer to the struct xe_shrinker.
+ * @shrinkable: Shrinkable pages delta. May be negative.
+ * @purgeable: Purgeable page delta. May be negative.
+ *
+ * Modifies the shrinkable and purgeable pages accounting.
+ */
+void
+xe_shrinker_mod_pages(struct xe_shrinker *shrinker, long shrinkable, long purgeable)
+{
+	write_lock(&shrinker->lock);
+	shrinker->shrinkable_pages += shrinkable;
+	shrinker->purgeable_pages += purgeable;
+	write_unlock(&shrinker->lock);
+}
+
+static s64 xe_shrinker_process_bo(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo)
+{
+	struct xe_shrink_lru_walk *shrink_walk = to_xe_shrink_lru_walk(walk);
+
+	return xe_bo_shrink(walk, bo, (struct xe_bo_shrink_flags)
+			    {.purge = shrink_walk->purge,
+			     .writeback = shrink_walk->writeback});
+}
+
+static s64 xe_shrinker_walk(struct xe_shrink_lru_walk *shrink_walk, s64 target)
+{
+	struct xe_device *xe = shrink_walk->xe;
+	struct ttm_resource_manager *man;
+	unsigned int mem_type;
+	s64 progress = 0;
+	s64 lret;
+
+	for (mem_type = XE_PL_SYSTEM; mem_type <= XE_PL_TT; ++mem_type) {
+		man = ttm_manager_type(&xe->ttm, mem_type);
+		if (!man || !man->use_tt)
+			continue;
+
+		lret = ttm_lru_walk_for_evict(&shrink_walk->walk, &xe->ttm, man, target);
+		if (lret < 0)
+			return lret;
+
+		progress += lret;
+		if (progress >= target)
+			break;
+	}
+
+	return progress;
+}
+
+static unsigned long
+xe_shrinker_count(struct shrinker *shrink, struct shrink_control *sc)
+{
+	struct xe_shrinker *shrinker = to_xe_shrinker(shrink);
+	unsigned long num_pages;
+
+	num_pages = get_nr_swap_pages();
+	read_lock(&shrinker->lock);
+	num_pages = min_t(unsigned long, num_pages, shrinker->shrinkable_pages);
+	num_pages += shrinker->purgeable_pages;
+	read_unlock(&shrinker->lock);
+
+	return num_pages ? num_pages : SHRINK_EMPTY;
+}
+
+static const struct ttm_lru_walk_ops xe_shrink_ops = {
+	.process_bo = xe_shrinker_process_bo,
+};
+
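xe_shrinker_count() above clamps the shrinkable count by the available
swap space: non-purgeable pages can only be shrunk if there is swap to
back them up, while purgeable pages can always be dropped. The
arithmetic, condensed into an illustrative helper that is not part of
the patch:

#include <linux/minmax.h>
#include <linux/shrinker.h>

/* Pages reported as reclaimable: shrinkable pages need swap to back
 * them up; purgeable pages can always be dropped. */
static unsigned long demo_count_pages(unsigned long shrinkable,
				      unsigned long purgeable,
				      unsigned long free_swap_pages)
{
	unsigned long num_pages;

	num_pages = min(shrinkable, free_swap_pages) + purgeable;

	return num_pages ? num_pages : SHRINK_EMPTY;
}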
+/*
+ * Check if we need runtime pm and, if so, try to grab a reference if
+ * the device is already active. If grabbing a reference fails, queue a
+ * worker that does it for us outside of reclaim, but don't wait for it
+ * to complete. If bo shrinking needs an rpm reference and we don't
+ * have it (yet), that bo will be skipped anyway.
+ */
+static bool xe_shrinker_runtime_pm_get(struct xe_shrinker *shrinker, bool force,
+				       unsigned long nr_to_scan)
+{
+	struct xe_device *xe = shrinker->xe;
+
+	if (IS_DGFX(xe) || !xe_device_has_flat_ccs(xe) ||
+	    !get_nr_swap_pages())
+		return false;
+
+	if (!force) {
+		read_lock(&shrinker->lock);
+		force = (nr_to_scan > shrinker->purgeable_pages);
+		read_unlock(&shrinker->lock);
+		if (!force)
+			return false;
+	}
+
+	if (!xe_pm_runtime_get_if_active(xe)) {
+		queue_work(xe->unordered_wq, &shrinker->pm_worker);
+		return false;
+	}
+
+	return true;
+}
+
+static void xe_shrinker_runtime_pm_put(struct xe_shrinker *shrinker, bool runtime_pm)
+{
+	if (runtime_pm)
+		xe_pm_runtime_put(shrinker->xe);
+}
+
+static unsigned long xe_shrinker_scan(struct shrinker *shrink, struct shrink_control *sc)
+{
+	struct xe_shrinker *shrinker = to_xe_shrinker(shrink);
+	bool is_kswapd = current_is_kswapd();
+	struct ttm_operation_ctx ctx = {
+		.interruptible = false,
+		.no_wait_gpu = !is_kswapd,
+	};
+	unsigned long nr_to_scan, freed = 0;
+	struct xe_shrink_lru_walk shrink_walk = {
+		.walk = {
+			.ops = &xe_shrink_ops,
+			.ctx = &ctx,
+			.trylock_only = true,
+		},
+		.xe = shrinker->xe,
+		.purge = true,
+		.writeback = is_kswapd,
+	};
+	bool runtime_pm;
+	bool purgeable;
+	s64 ret;
+
+	sc->nr_scanned = 0;
+	nr_to_scan = sc->nr_to_scan;
+
+	read_lock(&shrinker->lock);
+	purgeable = !!shrinker->purgeable_pages;
+	read_unlock(&shrinker->lock);
+
+	/* Might need runtime PM. Try to wake early if it looks like it. */
+	runtime_pm = xe_shrinker_runtime_pm_get(shrinker, false, nr_to_scan);
+
+	while (purgeable && freed < nr_to_scan) {
+		ret = xe_shrinker_walk(&shrink_walk, nr_to_scan);
+		if (ret <= 0)
+			break;
+
+		freed += ret;
+	}
+
+	sc->nr_scanned = freed;
+	if (freed < nr_to_scan)
+		nr_to_scan -= freed;
+	else
+		nr_to_scan = 0;
+	if (!nr_to_scan)
+		goto out;
+
+	/* If we didn't wake before, try to do it now if needed. */
+	if (!runtime_pm)
+		runtime_pm = xe_shrinker_runtime_pm_get(shrinker, true, 0);
+
+	shrink_walk.purge = false;
+	nr_to_scan = sc->nr_to_scan;
+	while (freed < nr_to_scan) {
+		ret = xe_shrinker_walk(&shrink_walk, nr_to_scan);
+		if (ret <= 0)
+			break;
+
+		freed += ret;
+	}
+
+	sc->nr_scanned = freed;
+
+out:
+	xe_shrinker_runtime_pm_put(shrinker, runtime_pm);
+	return freed ? freed : SHRINK_STOP;
+}
+
+/* Wake up the device for shrinking. */
+static void xe_shrinker_pm(struct work_struct *work)
+{
+	struct xe_shrinker *shrinker =
+		container_of(work, typeof(*shrinker), pm_worker);
+
+	xe_pm_runtime_get(shrinker->xe);
+	xe_pm_runtime_put(shrinker->xe);
+}
+
+/**
+ * xe_shrinker_create() - Create an xe per-device shrinker
+ * @xe: Pointer to the xe device.
+ *
+ * Return: A pointer to the created shrinker on success,
+ * negative error code on failure.
+ */
+struct xe_shrinker *xe_shrinker_create(struct xe_device *xe)
+{
+	struct xe_shrinker *shrinker = kzalloc(sizeof(*shrinker), GFP_KERNEL);
+
+	if (!shrinker)
+		return ERR_PTR(-ENOMEM);
+
+	shrinker->shrink = shrinker_alloc(0, "xe system shrinker");
+	if (!shrinker->shrink) {
+		kfree(shrinker);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	INIT_WORK(&shrinker->pm_worker, xe_shrinker_pm);
+	shrinker->xe = xe;
+	rwlock_init(&shrinker->lock);
+	shrinker->shrink->count_objects = xe_shrinker_count;
+	shrinker->shrink->scan_objects = xe_shrinker_scan;
+	shrinker->shrink->private_data = shrinker;
+	shrinker_register(shrinker->shrink);
+
+	return shrinker;
+}
+
+/**
+ * xe_shrinker_destroy() - Destroy an xe per-device shrinker
+ * @shrinker: Pointer to the shrinker to destroy.
+ */
+void xe_shrinker_destroy(struct xe_shrinker *shrinker)
+{
+	xe_assert(shrinker->xe, !shrinker->shrinkable_pages);
+	xe_assert(shrinker->xe, !shrinker->purgeable_pages);
+	shrinker_free(shrinker->shrink);
+	flush_work(&shrinker->pm_worker);
+	kfree(shrinker);
+}
diff --git a/drivers/gpu/drm/xe/xe_shrinker.h b/drivers/gpu/drm/xe/xe_shrinker.h
new file mode 100644
index 000000000000..28a038f4fcbf
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_shrinker.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#ifndef _XE_SHRINKER_H_
+#define _XE_SHRINKER_H_
+
+struct xe_shrinker;
+struct xe_device;
+
+void xe_shrinker_mod_pages(struct xe_shrinker *shrinker, long shrinkable, long purgeable);
+
+struct xe_shrinker *xe_shrinker_create(struct xe_device *xe);
+
+void xe_shrinker_destroy(struct xe_shrinker *shrinker);
+
+#endif
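Taken together, xe_shrinker.h exposes a small three-call API. A hedged
sketch of the per-device lifecycle it implies follows; demo_init and
the page counts are illustrative and not taken from the patch:

#include <linux/err.h>

#include "xe_shrinker.h"

/* Hypothetical init path: create, account, tear down. */
static int demo_init(struct xe_device *xe)
{
	struct xe_shrinker *shrinker = xe_shrinker_create(xe);

	if (IS_ERR(shrinker))
		return PTR_ERR(shrinker);

	/* Pages are added to the accounting when they become shrinkable... */
	xe_shrinker_mod_pages(shrinker, 512, 0);
	/* ...and subtracted again when freed, pinned or purged. */
	xe_shrinker_mod_pages(shrinker, -512, 0);

	xe_shrinker_destroy(shrinker);
	return 0;
}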
From patchwork Mon Aug 19 08:34:49 2024
X-Patchwork-Submitter: Thomas Hellstrom
X-Patchwork-Id: 13768077
From: Thomas Hellström
To: intel-xe@lists.freedesktop.org
Cc: Thomas Hellström, Matthew Brost, Somalapuram Amaranath,
 Christian König, Paulo Zanoni, dri-devel@lists.freedesktop.org
Subject: [PATCH v9 6/6] drm/xe: Increase the XE_PL_TT watermark
Date: Mon, 19 Aug 2024 10:34:49 +0200
Message-ID: <20240819083449.56701-7-thomas.hellstrom@linux.intel.com>
In-Reply-To: <20240819083449.56701-1-thomas.hellstrom@linux.intel.com>
References: <20240819083449.56701-1-thomas.hellstrom@linux.intel.com>

The XE_PL_TT watermark was set to 50% of system memory. The idea
behind that was unclear, since the net effect is that TT memory will
be evicted to TTM_PL_SYSTEM memory if that watermark is exceeded,
requiring PPGTT rebinds and dma remapping. But there is no similar
watermark for TTM_PL_SYSTEM memory.

The TTM functionality that tries to swap out system memory to shmem
objects if a 50% limit of total system memory is reached is orthogonal
to this, and with the shrinker added, it's no longer in effect.

Replace the 50% TTM_PL_TT limit with a 100% limit, in effect allowing
all graphics memory to be bound to the device unless it has been
swapped out by the shrinker.

Signed-off-by: Thomas Hellström
Reviewed-by: Matthew Brost
---
 drivers/gpu/drm/xe/xe_ttm_sys_mgr.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_ttm_sys_mgr.c b/drivers/gpu/drm/xe/xe_ttm_sys_mgr.c
index 9844a8edbfe1..d38b91872da3 100644
--- a/drivers/gpu/drm/xe/xe_ttm_sys_mgr.c
+++ b/drivers/gpu/drm/xe/xe_ttm_sys_mgr.c
@@ -108,9 +108,8 @@ int xe_ttm_sys_mgr_init(struct xe_device *xe)
 	u64 gtt_size;
 
 	si_meminfo(&si);
+	/* Potentially restrict amount of TT memory here. */
 	gtt_size = (u64)si.totalram * si.mem_unit;
-	/* TTM limits allocation of all TTM devices by 50% of system memory */
-	gtt_size /= 2;
 
 	man->use_tt = true;
 	man->func = &xe_ttm_sys_mgr_func;
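For scale, a worked example of the change above, wrapped in an
illustrative helper that is not part of the patch: on a machine with
32 GiB of RAM, the old computation capped TT memory at 16 GiB, while
the new one allows the full 32 GiB.

#include <linux/mm.h>	/* si_meminfo() */

/* On a 32 GiB machine: old limit = 16 GiB, new limit = 32 GiB. */
static u64 demo_tt_limit(bool old_behaviour)
{
	struct sysinfo si;
	u64 gtt_size;

	si_meminfo(&si);
	gtt_size = (u64)si.totalram * si.mem_unit;

	return old_behaviour ? gtt_size / 2 : gtt_size;
}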