From patchwork Wed Feb 15 16:13:55 2023
X-Patchwork-Submitter: Thomas Hellstrom
X-Patchwork-Id: 13141848
From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
To: dri-devel@lists.freedesktop.org
Cc: Thomas Hellström, Andrew Morton, "Matthew Wilcox (Oracle)", Miaohe Lin, David Hildenbrand, Johannes Weiner, Peter Xu, NeilBrown, Daniel Vetter, Christian Koenig, Dave Airlie, Dave Hansen, Matthew Auld, linux-graphics-maintainer@vmware.com, linux-mm@kvack.org, intel-gfx@lists.freedesktop.org
Subject: [RFC PATCH 06/16] drm/ttm: Don't use watermark accounting on shrinkable pools
Date: Wed, 15 Feb 2023 17:13:55 +0100
Message-Id: <20230215161405.187368-7-thomas.hellstrom@linux.intel.com>
In-Reply-To: <20230215161405.187368-1-thomas.hellstrom@linux.intel.com>
References: <20230215161405.187368-1-thomas.hellstrom@linux.intel.com>
MIME-Version: 1.0
Clarify the meaning of the ttm_tt pages_limit watermarks as the max number of pages not accessible by shrinkers, and update accordingly so that memory allocated by TTM devices that support shrinking is not accounted against those limits. In particular this means that devices using the dma_alloc pool will still be using the watermark method.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/ttm/ttm_device.c |  3 ++-
 drivers/gpu/drm/ttm/ttm_tt.c     | 43 +++++++++++++++++++-------------
 include/drm/ttm/ttm_pool.h       | 15 +++++++++++
 3 files changed, 42 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_device.c b/drivers/gpu/drm/ttm/ttm_device.c
index a3cac42bb456..e0a2be3ed13d 100644
--- a/drivers/gpu/drm/ttm/ttm_device.c
+++ b/drivers/gpu/drm/ttm/ttm_device.c
@@ -168,7 +168,8 @@ long ttm_device_swapout(struct ttm_device *bdev, struct ttm_operation_ctx *ctx,
 	unsigned i;
 	long ret;
 
-	if (reason != TTM_SHRINK_WATERMARK && !bdev->funcs->bo_shrink)
+	if (reason != TTM_SHRINK_WATERMARK &&
+	    (!bdev->funcs->bo_shrink || !ttm_pool_can_shrink(&bdev->pool)))
 		return 0;
 
 	spin_lock(&bdev->lru_lock);
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index a68c14de0161..771e5f3c2fee 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -54,6 +54,21 @@ module_param_named(dma32_pages_limit, ttm_dma32_pages_limit, ulong, 0644);
 static atomic_long_t ttm_pages_allocated;
 static atomic_long_t ttm_dma32_pages_allocated;
 
+static bool ttm_tt_shrinkable(const struct ttm_device *bdev,
+			      const struct ttm_tt *tt)
+{
+	return !!bdev->funcs->bo_shrink &&
+		ttm_pool_can_shrink(&bdev->pool) &&
+		!(tt->page_flags & TTM_TT_FLAG_EXTERNAL);
+}
+
+static void ttm_tt_mod_allocated(bool dma32, long value)
+{
+	atomic_long_add(value, &ttm_pages_allocated);
+	if (dma32)
+		atomic_long_add(value, &ttm_dma32_pages_allocated);
+}
+
 /*
  * Allocates a ttm structure for the given BO.
  */
@@ -304,12 +319,9 @@ int ttm_tt_populate(struct ttm_device *bdev,
 	if (ttm_tt_is_populated(ttm))
 		return 0;
 
-	if (!(ttm->page_flags & TTM_TT_FLAG_EXTERNAL)) {
-		atomic_long_add(ttm->num_pages, &ttm_pages_allocated);
-		if (bdev->pool.use_dma32)
-			atomic_long_add(ttm->num_pages,
-					&ttm_dma32_pages_allocated);
-	}
+	if (!(ttm->page_flags & TTM_TT_FLAG_EXTERNAL) &&
+	    !ttm_tt_shrinkable(bdev, ttm))
+		ttm_tt_mod_allocated(bdev->pool.use_dma32, ttm->num_pages);
 
 	while (atomic_long_read(&ttm_pages_allocated) > ttm_pages_limit ||
 	       atomic_long_read(&ttm_dma32_pages_allocated) >
@@ -343,12 +355,10 @@ int ttm_tt_populate(struct ttm_device *bdev,
 	return 0;
 
 error:
-	if (!(ttm->page_flags & TTM_TT_FLAG_EXTERNAL)) {
-		atomic_long_sub(ttm->num_pages, &ttm_pages_allocated);
-		if (bdev->pool.use_dma32)
-			atomic_long_sub(ttm->num_pages,
-					&ttm_dma32_pages_allocated);
-	}
+	if (!(ttm->page_flags & TTM_TT_FLAG_EXTERNAL) &&
+	    !ttm_tt_shrinkable(bdev, ttm))
+		ttm_tt_mod_allocated(bdev->pool.use_dma32, -(long)ttm->num_pages);
+
 	return ret;
 }
 EXPORT_SYMBOL(ttm_tt_populate);
@@ -363,12 +373,9 @@ void ttm_tt_unpopulate(struct ttm_device *bdev, struct ttm_tt *ttm)
 	else
 		ttm_pool_free(&bdev->pool, ttm);
 
-	if (!(ttm->page_flags & TTM_TT_FLAG_EXTERNAL)) {
-		atomic_long_sub(ttm->num_pages, &ttm_pages_allocated);
-		if (bdev->pool.use_dma32)
-			atomic_long_sub(ttm->num_pages,
-					&ttm_dma32_pages_allocated);
-	}
+	if (!(ttm->page_flags & TTM_TT_FLAG_EXTERNAL) &&
+	    !ttm_tt_shrinkable(bdev, ttm))
+		ttm_tt_mod_allocated(bdev->pool.use_dma32, -(long)ttm->num_pages);
 
 	ttm->page_flags &= ~TTM_TT_FLAG_PRIV_POPULATED;
 }
diff --git a/include/drm/ttm/ttm_pool.h b/include/drm/ttm/ttm_pool.h
index ef09b23d29e3..c1200552892e 100644
--- a/include/drm/ttm/ttm_pool.h
+++ b/include/drm/ttm/ttm_pool.h
@@ -89,4 +89,19 @@ int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m);
 int ttm_pool_mgr_init(unsigned long num_pages);
 void ttm_pool_mgr_fini(void);
 
+/**
+ * ttm_pool_can_shrink - Whether page allocations from this pool are shrinkable
+ * @pool: The pool.
+ *
+ * Return: true if shrinkable, false if not.
+ */
+static inline bool ttm_pool_can_shrink(const struct ttm_pool *pool)
+{
+	/*
+	 * The dma_alloc pool pages can't be inserted into the
+	 * swap cache. Nor can they be split.
+	 */
+	return !pool->use_dma_alloc;
+}
+
 #endif