From patchwork Tue Mar 5 10:34:43 2024
X-Patchwork-Submitter: Thomas Hellström
X-Patchwork-Id: 13582033
From: Thomas Hellström
To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, Christian König, Somalapuram Amaranath,
    dri-devel@lists.freedesktop.org
Subject: [PATCH v2 2/4] drm/ttm: Use LRU hitches
Date: Tue, 5 Mar 2024 11:34:43 +0100
Message-ID: <20240305103445.75919-3-thomas.hellstrom@linux.intel.com>
In-Reply-To: <20240305103445.75919-1-thomas.hellstrom@linux.intel.com>
References: <20240305103445.75919-1-thomas.hellstrom@linux.intel.com>

Have iterators insert themselves into the list they are iterating over
using hitch list nodes. Since only the iterator owner can remove these
list nodes from the list, it's safe to unlock the list and when
continuing, use them as a starting point.
Due to the way LRU bumping works in TTM, newly added items will not be
missed, and bumped items will be iterated over a second time before
reaching the end of the list.

The exception is lists with bulk move sublists. When bumping a sublist,
a hitch that is part of that sublist will also be moved and we might
miss items if restarting from it. This will be addressed in a later
patch.

v2:
- Updated ttm_resource_cursor_fini() documentation.

Cc: Christian König
Cc: Somalapuram Amaranath
Cc:
Signed-off-by: Thomas Hellström
---
 drivers/gpu/drm/ttm/ttm_bo.c       |  1 +
 drivers/gpu/drm/ttm/ttm_device.c   |  9 ++-
 drivers/gpu/drm/ttm/ttm_resource.c | 94 ++++++++++++++++++++----------
 include/drm/ttm/ttm_resource.h     | 16 +++--
 4 files changed, 82 insertions(+), 38 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index e059b1e1b13b..b6f75a0ff2e5 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -622,6 +622,7 @@ int ttm_mem_evict_first(struct ttm_device *bdev,
                 if (locked)
                         dma_resv_unlock(res->bo->base.resv);
         }
+        ttm_resource_cursor_fini_locked(&cursor);

         if (!bo) {
                 if (busy_bo && !ttm_bo_get_unless_zero(busy_bo))
diff --git a/drivers/gpu/drm/ttm/ttm_device.c b/drivers/gpu/drm/ttm/ttm_device.c
index f27406e851e5..e8a6a1dab669 100644
--- a/drivers/gpu/drm/ttm/ttm_device.c
+++ b/drivers/gpu/drm/ttm/ttm_device.c
@@ -169,12 +169,17 @@ int ttm_device_swapout(struct ttm_device *bdev, struct ttm_operation_ctx *ctx,
                         num_pages = PFN_UP(bo->base.size);
                         ret = ttm_bo_swapout(bo, ctx, gfp_flags);
                         /* ttm_bo_swapout has dropped the lru_lock */
-                        if (!ret)
+                        if (!ret) {
+                                ttm_resource_cursor_fini(&cursor);
                                 return num_pages;
-                        if (ret != -EBUSY)
+                        }
+                        if (ret != -EBUSY) {
+                                ttm_resource_cursor_fini(&cursor);
                                 return ret;
+                        }
                 }
         }
+        ttm_resource_cursor_fini_locked(&cursor);
         spin_unlock(&bdev->lru_lock);
         return 0;
 }
diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
index ee1865f82cb4..971014fca10a 100644
--- a/drivers/gpu/drm/ttm/ttm_resource.c
+++ b/drivers/gpu/drm/ttm/ttm_resource.c
@@ -32,6 +32,37 @@

 #include

+/**
+ * ttm_resource_cursor_fini_locked() - Finalize the LRU list cursor usage
+ * @cursor: The struct ttm_resource_cursor to finalize.
+ *
+ * The function pulls the LRU list cursor off any lists it was previously
+ * attached to. Needs to be called with the LRU lock held. The function
+ * can be called multiple times after each other.
+ */
+void ttm_resource_cursor_fini_locked(struct ttm_resource_cursor *cursor)
+{
+        lockdep_assert_held(&cursor->man->bdev->lru_lock);
+        list_del_init(&cursor->hitch.link);
+}
+
+/**
+ * ttm_resource_cursor_fini() - Finalize the LRU list cursor usage
+ * @cursor: The struct ttm_resource_cursor to finalize.
+ *
+ * The function pulls the LRU list cursor off any lists it was previously
+ * attached to. Needs to be called without the LRU list lock held. The
+ * function can be called multiple times after each other.
+ */
+void ttm_resource_cursor_fini(struct ttm_resource_cursor *cursor)
+{
+        spinlock_t *lru_lock = &cursor->man->bdev->lru_lock;
+
+        spin_lock(lru_lock);
+        ttm_resource_cursor_fini_locked(cursor);
+        spin_unlock(lru_lock);
+}
+
 /**
  * ttm_lru_bulk_move_init - initialize a bulk move structure
  * @bulk: the structure to init
@@ -483,62 +514,63 @@ void ttm_resource_manager_debug(struct ttm_resource_manager *man,
 EXPORT_SYMBOL(ttm_resource_manager_debug);

 /**
- * ttm_resource_manager_first
- *
- * @man: resource manager to iterate over
+ * ttm_resource_manager_next() - Continue iterating over the resource manager
+ * resources
  * @cursor: cursor to record the position
  *
- * Returns the first resource from the resource manager.
+ * Return: The next resource from the resource manager.
  */
 struct ttm_resource *
-ttm_resource_manager_first(struct ttm_resource_manager *man,
-                           struct ttm_resource_cursor *cursor)
+ttm_resource_manager_next(struct ttm_resource_cursor *cursor)
 {
+        struct ttm_resource_manager *man = cursor->man;
         struct ttm_lru_item *lru;

         lockdep_assert_held(&man->bdev->lru_lock);

-        for (cursor->priority = 0; cursor->priority < TTM_MAX_BO_PRIORITY;
-             ++cursor->priority)
-                list_for_each_entry(lru, &man->lru[cursor->priority], link) {
-                        if (ttm_lru_item_is_res(lru))
+        do {
+                lru = &cursor->hitch;
+                list_for_each_entry_continue(lru, &man->lru[cursor->priority], link) {
+                        if (ttm_lru_item_is_res(lru)) {
+                                list_move(&cursor->hitch.link, &lru->link);
                                 return ttm_lru_item_to_res(lru);
+                        }
                 }

+                if (++cursor->priority >= TTM_MAX_BO_PRIORITY)
+                        break;
+
+                list_move(&cursor->hitch.link, &man->lru[cursor->priority]);
+        } while (true);
+
+        list_del_init(&cursor->hitch.link);
+
         return NULL;
 }

 /**
- * ttm_resource_manager_next
- *
+ * ttm_resource_manager_first() - Start iterating over the resources
+ * of a resource manager
  * @man: resource manager to iterate over
  * @cursor: cursor to record the position
- * @res: the current resource pointer
  *
- * Returns the next resource from the resource manager.
+ * Initializes the cursor and starts iterating. When done iterating,
+ * the caller must explicitly call ttm_resource_cursor_fini().
+ *
+ * Return: The first resource from the resource manager.
  */
 struct ttm_resource *
-ttm_resource_manager_next(struct ttm_resource_manager *man,
-                          struct ttm_resource_cursor *cursor,
-                          struct ttm_resource *res)
+ttm_resource_manager_first(struct ttm_resource_manager *man,
+                           struct ttm_resource_cursor *cursor)
 {
-        struct ttm_lru_item *lru = &res->lru;
-
         lockdep_assert_held(&man->bdev->lru_lock);

-        list_for_each_entry_continue(lru, &man->lru[cursor->priority], link) {
-                if (ttm_lru_item_is_res(lru))
-                        return ttm_lru_item_to_res(lru);
-        }
+        cursor->priority = 0;
+        cursor->man = man;
+        ttm_lru_item_init(&cursor->hitch, TTM_LRU_HITCH);
+        list_move(&cursor->hitch.link, &man->lru[cursor->priority]);

-        for (++cursor->priority; cursor->priority < TTM_MAX_BO_PRIORITY;
-             ++cursor->priority)
-                list_for_each_entry(lru, &man->lru[cursor->priority], link) {
-                        if (ttm_lru_item_is_res(lru))
-                                ttm_lru_item_to_res(lru);
-                }
-
-        return NULL;
+        return ttm_resource_manager_next(cursor);
 }

 static void ttm_kmap_iter_iomap_map_local(struct ttm_kmap_iter *iter,
diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h
index cad8c5476198..b9043c183205 100644
--- a/include/drm/ttm/ttm_resource.h
+++ b/include/drm/ttm/ttm_resource.h
@@ -271,15 +271,23 @@ ttm_lru_item_to_res(struct ttm_lru_item *item)

 /**
  * struct ttm_resource_cursor
- *
+ * @man: The resource manager currently being iterated over
+ * @hitch: A hitch list node inserted before the next resource
+ * to iterate over.
  * @priority: the current priority
  *
  * Cursor to iterate over the resources in a manager.
  */
 struct ttm_resource_cursor {
+        struct ttm_resource_manager *man;
+        struct ttm_lru_item hitch;
         unsigned int priority;
 };

+void ttm_resource_cursor_fini_locked(struct ttm_resource_cursor *cursor);
+
+void ttm_resource_cursor_fini(struct ttm_resource_cursor *cursor);
+
 /**
  * struct ttm_lru_bulk_move_pos
  *
@@ -435,9 +443,7 @@ struct ttm_resource *
 ttm_resource_manager_first(struct ttm_resource_manager *man,
                            struct ttm_resource_cursor *cursor);
 struct ttm_resource *
-ttm_resource_manager_next(struct ttm_resource_manager *man,
-                          struct ttm_resource_cursor *cursor,
-                          struct ttm_resource *res);
+ttm_resource_manager_next(struct ttm_resource_cursor *cursor);

 /**
  * ttm_resource_manager_for_each_res - iterate over all resources
@@ -449,7 +455,7 @@ ttm_resource_manager_next(struct ttm_resource_manager *man,
  */
 #define ttm_resource_manager_for_each_res(man, cursor, res)        \
         for (res = ttm_resource_manager_first(man, cursor); res;        \
-             res = ttm_resource_manager_next(man, cursor, res))
+             res = ttm_resource_manager_next(cursor))

 struct ttm_kmap_iter *
 ttm_kmap_iter_iomap_init(struct ttm_kmap_iter_iomap *iter_io,
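
For context, here is a minimal, illustrative sketch (not part of the patch) of
how a caller is expected to drive the hitch-based iterator after this change,
modeled on the ttm_device_swapout() hunk above. The walk_lru_count_bos()
helper and the counting it performs are hypothetical; only the TTM calls
shown (ttm_resource_manager_for_each_res() and
ttm_resource_cursor_fini_locked()) come from TTM and this series.

#include <drm/ttm/ttm_device.h>
#include <drm/ttm/ttm_resource.h>

/* Hypothetical example: count the resources on one manager's LRU lists. */
static unsigned long walk_lru_count_bos(struct ttm_device *bdev,
                                        struct ttm_resource_manager *man)
{
        struct ttm_resource_cursor cursor;
        struct ttm_resource *res;
        unsigned long count = 0;

        spin_lock(&bdev->lru_lock);
        ttm_resource_manager_for_each_res(man, &cursor, res) {
                /*
                 * The cursor's hitch now sits on the LRU just after @res,
                 * which is what makes it possible to drop and re-take the
                 * lru_lock here and still resume from the right position.
                 */
                count++;
        }
        /*
         * The hitch may still be linked into an LRU list, so the cursor
         * must be finalized. The lru_lock is held here, hence the
         * _locked variant.
         */
        ttm_resource_cursor_fini_locked(&cursor);
        spin_unlock(&bdev->lru_lock);

        return count;
}

The ttm_device_swapout() hunk shows the unlocked variant: once
ttm_bo_swapout() has dropped the lru_lock, the cursor is finalized with
ttm_resource_cursor_fini(), which takes and releases the lock itself.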