From patchwork Wed Feb 15 16:13:56 2023
X-Patchwork-Submitter: Thomas Hellstrom
X-Patchwork-Id: 13141849
From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
To: dri-devel@lists.freedesktop.org
Cc: Thomas Hellström, Andrew Morton, "Matthew Wilcox (Oracle)", Miaohe Lin, David Hildenbrand, Johannes Weiner, Peter Xu, NeilBrown, Daniel Vetter, Christian Koenig, Dave Airlie, Dave Hansen, Matthew Auld, linux-graphics-maintainer@vmware.com, linux-mm@kvack.org, intel-gfx@lists.freedesktop.org
Subject: [RFC PATCH 07/16] drm/ttm: Reduce the number of used allocation orders for TTM pages
Date: Wed, 15 Feb 2023 17:13:56 +0100
Message-Id: <20230215161405.187368-8-thomas.hellstrom@linux.intel.com>
In-Reply-To: <20230215161405.187368-1-thomas.hellstrom@linux.intel.com>
References: <20230215161405.187368-1-thomas.hellstrom@linux.intel.com>
X-Mailer: git-send-email 2.38.1
When swapping out, we will split multi-order pages both in order to move them to the swap-cache and to be able to return memory to the system as soon as possible on a page-by-page basis. By reducing the maximum page order to the system PMD size, we can be nicer to the system and avoid splitting gigantic pages.

On top of this we also include the 64K page size in the page sizes tried, since that appears to be a common size for GPU applications.

Looking forward to when we might be able to swap out PMD-size folios without splitting, this will also be a benefit.
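As a sanity check, the order-selection scheme described above can be modeled in plain user-space C. This is an illustrative sketch, not kernel code: the model_* names are invented, PAGE_SHIFT and PMD_SHIFT are hard-coded to their x86-64 values (4K pages, 2M PMDs) whereas the patch derives them from the architecture headers, and the 64K entry is hard-coded here whereas the patch fills it in at ttm_pool_mgr_init() time.

```c
#include <assert.h>

/* Hard-coded x86-64 values; the kernel gets these from arch headers. */
#define MODEL_PAGE_SHIFT 12
#define MODEL_PMD_SHIFT  21

#define MODEL_MAX_ORDER (MODEL_PMD_SHIFT - MODEL_PAGE_SHIFT) /* 9: one 2M PMD */
#define MODEL_64K_ORDER (16 - MODEL_PAGE_SHIFT)              /* 4: 64K */

/* Orders tried, in descending order, always terminated by order 0. */
static const unsigned int model_orders[] = { MODEL_MAX_ORDER, MODEL_64K_ORDER, 0 };

/* Index of the highest set bit, like the kernel's __fls() for nonzero input. */
unsigned int model_fls(unsigned long v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

/*
 * Mirror of the patch's ttm_pool_select_order(): clamp the order so that a
 * 1 << order chunk still fits in num_pages, then fall down the list to the
 * next order that is actually used.
 */
unsigned int model_select_order(unsigned int order, unsigned long num_pages)
{
	const unsigned int *cur_order = model_orders;
	unsigned int fit = model_fls(num_pages);

	if (fit < order)
		order = fit;
	while (order < *cur_order)
		++cur_order;
	return *cur_order;
}
```

With these values the order list is {9, 4, 0}: a large allocation is carved into 2M chunks first, the remainder into 64K chunks, and the tail into single pages, instead of starting at MAX_ORDER - 1 as before.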
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/ttm/ttm_pool.c | 58 ++++++++++++++++++++++++++--------
 1 file changed, 45 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index 1cc7591a9542..8787fb6a218b 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -31,6 +31,8 @@
  * cause they are rather slow compared to alloc_pages+map.
  */
 
+#define pr_fmt(fmt) "[TTM POOL] " fmt
+
 #include
 #include
 #include
@@ -47,6 +49,18 @@
 
 #include "ttm_module.h"
 
+#define TTM_MAX_ORDER (PMD_SHIFT - PAGE_SHIFT)
+#define TTM_64K_ORDER (16 - PAGE_SHIFT)
+#if (TTM_MAX_ORDER < TTM_64K_ORDER)
+#undef TTM_MAX_ORDER
+#define TTM_MAX_ORDER TTM_64K_ORDER
+#endif
+#if ((MAX_ORDER - 1) < TTM_MAX_ORDER)
+#undef TTM_MAX_ORDER
+#define TTM_MAX_ORDER (MAX_ORDER - 1)
+#endif
+#define TTM_DIM_ORDER (TTM_MAX_ORDER + 1)
+
 /**
  * struct ttm_pool_dma - Helper object for coherent DMA mappings
  *
@@ -65,16 +79,18 @@ module_param(page_pool_size, ulong, 0644);
 
 static atomic_long_t allocated_pages;
 
-static struct ttm_pool_type global_write_combined[MAX_ORDER];
-static struct ttm_pool_type global_uncached[MAX_ORDER];
+static struct ttm_pool_type global_write_combined[TTM_DIM_ORDER];
+static struct ttm_pool_type global_uncached[TTM_DIM_ORDER];
 
-static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER];
-static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
+static struct ttm_pool_type global_dma32_write_combined[TTM_DIM_ORDER];
+static struct ttm_pool_type global_dma32_uncached[TTM_DIM_ORDER];
 
 static spinlock_t shrinker_lock;
 static struct list_head shrinker_list;
 static struct shrinker mm_shrinker;
 
+static unsigned int ttm_pool_orders[] = {TTM_MAX_ORDER, 0, 0};
+
 /* Allocate pages of size 1 << order with the given gfp_flags */
 static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
 					unsigned int order)
@@ -400,6 +416,17 @@ static void __ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt,
 	}
 }
 
+static unsigned int ttm_pool_select_order(unsigned int order, pgoff_t num_pages)
+{
+	unsigned int *cur_order = ttm_pool_orders;
+
+	order = min_t(unsigned int, __fls(num_pages), order);
+	while (order < *cur_order)
+		++cur_order;
+
+	return *cur_order;
+}
+
 /**
  * ttm_pool_alloc - Fill a ttm_tt object
  *
@@ -439,9 +466,8 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 	else
 		gfp_flags |= GFP_HIGHUSER;
 
-	for (order = min_t(unsigned int, MAX_ORDER - 1, __fls(num_pages));
-	     num_pages;
-	     order = min_t(unsigned int, order, __fls(num_pages))) {
+	order = ttm_pool_select_order(ttm_pool_orders[0], num_pages);
+	for (; num_pages; order = ttm_pool_select_order(order, num_pages)) {
 		struct ttm_pool_type *pt;
 
 		page_caching = tt->caching;
@@ -558,7 +584,7 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
 
 	if (use_dma_alloc) {
 		for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
-			for (j = 0; j < MAX_ORDER; ++j)
+			for (j = 0; j < TTM_DIM_ORDER; ++j)
 				ttm_pool_type_init(&pool->caching[i].orders[j],
 						   pool, i, j);
 	}
@@ -578,7 +604,7 @@ void ttm_pool_fini(struct ttm_pool *pool)
 
 	if (pool->use_dma_alloc) {
 		for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
-			for (j = 0; j < MAX_ORDER; ++j)
+			for (j = 0; j < TTM_DIM_ORDER; ++j)
 				ttm_pool_type_fini(&pool->caching[i].orders[j]);
 	}
 
@@ -632,7 +658,7 @@ static void ttm_pool_debugfs_header(struct seq_file *m)
 	unsigned int i;
 
 	seq_puts(m, "\t ");
-	for (i = 0; i < MAX_ORDER; ++i)
+	for (i = 0; i < TTM_DIM_ORDER; ++i)
 		seq_printf(m, " ---%2u---", i);
 	seq_puts(m, "\n");
 }
@@ -643,7 +669,7 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
 {
 	unsigned int i;
 
-	for (i = 0; i < MAX_ORDER; ++i)
+	for (i = 0; i < TTM_DIM_ORDER; ++i)
 		seq_printf(m, " %8u", ttm_pool_type_count(&pt[i]));
 	seq_puts(m, "\n");
 }
@@ -749,10 +775,16 @@ int ttm_pool_mgr_init(unsigned long num_pages)
 	if (!page_pool_size)
 		page_pool_size = num_pages;
 
+	if (TTM_64K_ORDER < TTM_MAX_ORDER)
+		ttm_pool_orders[1] = TTM_64K_ORDER;
+
+	pr_debug("Used orders are %u %u %u\n", ttm_pool_orders[0],
+		 ttm_pool_orders[1], ttm_pool_orders[2]);
+
 	spin_lock_init(&shrinker_lock);
 	INIT_LIST_HEAD(&shrinker_list);
 
-	for (i = 0; i < MAX_ORDER; ++i) {
+	for (i = 0; i < TTM_DIM_ORDER; ++i) {
 		ttm_pool_type_init(&global_write_combined[i], NULL,
 				   ttm_write_combined, i);
 		ttm_pool_type_init(&global_uncached[i], NULL, ttm_uncached, i);
@@ -785,7 +817,7 @@ void ttm_pool_mgr_fini(void)
 {
 	unsigned int i;
 
-	for (i = 0; i < MAX_ORDER; ++i) {
+	for (i = 0; i < TTM_DIM_ORDER; ++i) {
 		ttm_pool_type_fini(&global_write_combined[i]);
 		ttm_pool_type_fini(&global_uncached[i]);