From patchwork Thu Feb 15 17:44:32 2024
X-Patchwork-Id: 13558924
From: Matthew Auld <matthew.auld@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org, Geert Uytterhoeven,
 Arunpravin Paneer Selvam, Christian König, Maxime Ripard
Subject: [PATCH 1/6] drm/tests/drm_buddy: fix 32b build
Date: Thu, 15 Feb 2024 17:44:32 +0000
Message-ID: <20240215174431.285069-7-matthew.auld@intel.com>

This doesn't seem to compile on 32-bit, presumably due to the u64
modulo/division. Simplest is to just switch over to u32 here, since the
test values all fit. Also make the print format specifiers consistent
with that.
Fixes: a64056bb5a32 ("drm/tests/drm_buddy: add alloc_contiguous test")
Reported-by: Geert Uytterhoeven
Signed-off-by: Matthew Auld
Cc: Arunpravin Paneer Selvam
Cc: Christian König
Cc: Maxime Ripard
Reviewed-by: Arunpravin Paneer Selvam
---
 drivers/gpu/drm/tests/drm_buddy_test.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/tests/drm_buddy_test.c b/drivers/gpu/drm/tests/drm_buddy_test.c
index fee6bec757d1..edacc1adb28f 100644
--- a/drivers/gpu/drm/tests/drm_buddy_test.c
+++ b/drivers/gpu/drm/tests/drm_buddy_test.c
@@ -21,7 +21,7 @@ static inline u64 get_size(int order, u64 chunk_size)
 
 static void drm_test_buddy_alloc_contiguous(struct kunit *test)
 {
-	u64 mm_size, ps = SZ_4K, i, n_pages, total;
+	u32 mm_size, ps = SZ_4K, i, n_pages, total;
 	struct drm_buddy_block *block;
 	struct drm_buddy mm;
 	LIST_HEAD(left);
@@ -56,30 +56,30 @@ static void drm_test_buddy_alloc_contiguous(struct kunit *test)
 		KUNIT_ASSERT_FALSE_MSG(test,
 				       drm_buddy_alloc_blocks(&mm, 0, mm_size,
 							      ps, ps, list, 0),
-				       "buddy_alloc hit an error size=%d\n",
+				       "buddy_alloc hit an error size=%u\n",
 				       ps);
 	} while (++i < n_pages);
 
 	KUNIT_ASSERT_TRUE_MSG(test,
 			      drm_buddy_alloc_blocks(&mm, 0, mm_size,
 						     3 * ps, ps, &allocated,
 						     DRM_BUDDY_CONTIGUOUS_ALLOCATION),
-			      "buddy_alloc didn't error size=%d\n", 3 * ps);
+			      "buddy_alloc didn't error size=%u\n", 3 * ps);
 
 	drm_buddy_free_list(&mm, &middle);
 	KUNIT_ASSERT_TRUE_MSG(test,
 			      drm_buddy_alloc_blocks(&mm, 0, mm_size,
 						     3 * ps, ps, &allocated,
 						     DRM_BUDDY_CONTIGUOUS_ALLOCATION),
-			      "buddy_alloc didn't error size=%llu\n", 3 * ps);
+			      "buddy_alloc didn't error size=%u\n", 3 * ps);
 
 	KUNIT_ASSERT_TRUE_MSG(test,
 			      drm_buddy_alloc_blocks(&mm, 0, mm_size,
 						     2 * ps, ps, &allocated,
 						     DRM_BUDDY_CONTIGUOUS_ALLOCATION),
-			      "buddy_alloc didn't error size=%llu\n", 2 * ps);
+			      "buddy_alloc didn't error size=%u\n", 2 * ps);
 
 	drm_buddy_free_list(&mm, &right);
 	KUNIT_ASSERT_TRUE_MSG(test,
 			      drm_buddy_alloc_blocks(&mm, 0, mm_size,
 						     3 * ps, ps, &allocated,
 						     DRM_BUDDY_CONTIGUOUS_ALLOCATION),
-			      "buddy_alloc didn't error size=%llu\n", 3 * ps);
+			      "buddy_alloc didn't error size=%u\n", 3 * ps);
 	/*
 	 * At this point we should have enough contiguous space for 2 blocks,
 	 * however they are never buddies (since we freed middle and right) so
@@ -88,13 +88,13 @@ static void drm_test_buddy_alloc_contiguous(struct kunit *test)
 	KUNIT_ASSERT_FALSE_MSG(test,
 			       drm_buddy_alloc_blocks(&mm, 0, mm_size,
 						      2 * ps, ps, &allocated,
 						      DRM_BUDDY_CONTIGUOUS_ALLOCATION),
-			       "buddy_alloc hit an error size=%d\n", 2 * ps);
+			       "buddy_alloc hit an error size=%u\n", 2 * ps);
 
 	drm_buddy_free_list(&mm, &left);
 	KUNIT_ASSERT_FALSE_MSG(test,
 			       drm_buddy_alloc_blocks(&mm, 0, mm_size,
 						      3 * ps, ps, &allocated,
 						      DRM_BUDDY_CONTIGUOUS_ALLOCATION),
-			       "buddy_alloc hit an error size=%d\n", 3 * ps);
+			       "buddy_alloc hit an error size=%u\n", 3 * ps);
 
 	total = 0;
 	list_for_each_entry(block, &allocated, link)
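An aside on the failure mode (illustration only, not part of the patch):
on 32-bit targets the compiler lowers a plain '/' or '%' on u64 operands
to calls into libgcc helpers such as __udivdi3/__umoddi3, which the
kernel does not provide, so the build fails at link time. The usual
alternative to narrowing the types is the explicit 64-bit division
helpers (function names here are made up for the sketch):

	#include <linux/math64.h>

	/* Breaks 32-bit kernel builds: open-coded 64-bit division. */
	static u64 pages_needed(u64 total, u64 ps)
	{
		return total / ps;
	}

	/* Links everywhere: explicit 64-bit division helper. */
	static u64 pages_needed_div64(u64 total, u64 ps)
	{
		return div64_u64(total, ps);
	}

Here the test values all fit in 32 bits, so narrowing to u32 is the
simpler fix, and the format strings move to %u to match (printing a u32
via %llu makes vsnprintf read eight bytes of varargs for a four-byte
argument, which is undefined behaviour).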
From patchwork Thu Feb 15 17:44:33 2024
X-Patchwork-Id: 13558923
From: Matthew Auld <matthew.auld@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org, Arunpravin Paneer Selvam,
 Christian König, stable@vger.kernel.org
Subject: [PATCH 2/6] drm/buddy: fix range bias
Date: Thu, 15 Feb 2024 17:44:33 +0000
Message-ID: <20240215174431.285069-8-matthew.auld@intel.com>
In-Reply-To: <20240215174431.285069-7-matthew.auld@intel.com>

There is a corner case here where the requested start/end lands
after/before the block range we are currently checking. If so, we need
to be sure that splitting the block will eventually give us the block
size we need. To do that, adjust the block range to account for the
start/end, and only continue with the split if the resulting
size/alignment can still fit the requested size. Not doing so can leave
split blocks unmerged when the allocation eventually fails.
Fixes: afea229fe102 ("drm: improve drm_buddy_alloc function")
Signed-off-by: Matthew Auld
Cc: Arunpravin Paneer Selvam
Cc: Christian König
Cc: stable@vger.kernel.org # v5.18+
Reviewed-by: Arunpravin Paneer Selvam
---
 drivers/gpu/drm/drm_buddy.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
index c1a99bf4dffd..d09540d4065b 100644
--- a/drivers/gpu/drm/drm_buddy.c
+++ b/drivers/gpu/drm/drm_buddy.c
@@ -332,6 +332,7 @@ alloc_range_bias(struct drm_buddy *mm,
 		 u64 start, u64 end,
 		 unsigned int order)
 {
+	u64 req_size = mm->chunk_size << order;
 	struct drm_buddy_block *block;
 	struct drm_buddy_block *buddy;
 	LIST_HEAD(dfs);
@@ -367,6 +368,15 @@ alloc_range_bias(struct drm_buddy *mm,
 		if (drm_buddy_block_is_allocated(block))
 			continue;
 
+		if (block_start < start || block_end > end) {
+			u64 adjusted_start = max(block_start, start);
+			u64 adjusted_end = min(block_end, end);
+
+			if (round_down(adjusted_end + 1, req_size) <=
+			    round_up(adjusted_start, req_size))
+				continue;
+		}
+
 		if (contains(start, end, block_start, block_end) &&
 		    order == drm_buddy_block_order(block)) {
 			/*
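A worked illustration of the new check (numbers chosen for the example,
not taken from the patch): take chunk_size = 4K and order = 4, giving
req_size = 64K. Suppose the DFS visits a block spanning [0, 128K) while
the caller asked for the bias range [96K, 160K). The block straddles
the range, so:

	adjusted_start = max(block_start, start) = 96K
	adjusted_end   = min(block_end, end)     = 128K - 1

	round_up(adjusted_start, req_size)       = 128K
	round_down(adjusted_end + 1, req_size)   = 128K

Since round_down(...) <= round_up(...), the block is now skipped up
front: the 32K overlap can never hold a 64K-aligned 64K allocation, so
splitting this block down would be pointless and, before this fix,
would leave the split-off pieces behind unmerged once the allocation
failed.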
From patchwork Thu Feb 15 17:44:34 2024
X-Patchwork-Id: 13558925
From: Matthew Auld <matthew.auld@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org, Arunpravin Paneer Selvam,
 Christian König
Subject: [PATCH 3/6] drm/buddy: check range allocation matches alignment
Date: Thu, 15 Feb 2024 17:44:34 +0000
Message-ID: <20240215174431.285069-9-matthew.auld@intel.com>
In-Reply-To: <20240215174431.285069-7-matthew.auld@intel.com>

Likely not a big deal for real users, but for consistency we should
respect the min_block_size here. The main issue is that a bias
allocation turns into a normal range allocation if the range and size
match exactly, and in the next patch we want to add some unit tests for
this part of the API.

Signed-off-by: Matthew Auld
Cc: Arunpravin Paneer Selvam
Cc: Christian König
Reviewed-by: Arunpravin Paneer Selvam
---
 drivers/gpu/drm/drm_buddy.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
index d09540d4065b..ee9913016626 100644
--- a/drivers/gpu/drm/drm_buddy.c
+++ b/drivers/gpu/drm/drm_buddy.c
@@ -771,8 +771,12 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
 		return -EINVAL;
 
 	/* Actual range allocation */
-	if (start + size == end)
+	if (start + size == end) {
+		if (!IS_ALIGNED(start | end, min_block_size))
+			return -EINVAL;
+
 		return __drm_buddy_alloc_range(mm, start, size, NULL, blocks);
+	}
 
 	original_size = size;
 	original_min_size = min_block_size;
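The IS_ALIGNED(start | end, min_block_size) form is the usual
power-of-two trick: OR-ing the two values and testing once checks that
both are multiples of min_block_size. A hypothetical call showing the
newly rejected case (values illustrative only):

	/* Actual range allocation (start + size == end), but start is
	 * only 4K-aligned while min_block_size asks for 64K, so this
	 * now fails with -EINVAL rather than silently ignoring
	 * min_block_size. */
	err = drm_buddy_alloc_blocks(&mm, SZ_4K, SZ_4K + SZ_64K,
				     SZ_64K, SZ_64K, &allocated, 0);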
From patchwork Thu Feb 15 17:44:35 2024
X-Patchwork-Id: 13558927
From: Matthew Auld <matthew.auld@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org, Arunpravin Paneer Selvam,
 Christian König
Subject: [PATCH 4/6] drm/tests/drm_buddy: add alloc_range_bias test
Date: Thu, 15 Feb 2024 17:44:35 +0000
Message-ID: <20240215174431.285069-10-matthew.auld@intel.com>
In-Reply-To: <20240215174431.285069-7-matthew.auld@intel.com>

Sanity check range bias with DRM_BUDDY_RANGE_ALLOCATION.

Signed-off-by: Matthew Auld
Cc: Arunpravin Paneer Selvam
Cc: Christian König
Reviewed-by: Arunpravin Paneer Selvam
---
 drivers/gpu/drm/tests/drm_buddy_test.c | 218 +++++++++++++++++++++++++
 1 file changed, 218 insertions(+)

diff --git a/drivers/gpu/drm/tests/drm_buddy_test.c b/drivers/gpu/drm/tests/drm_buddy_test.c
index edacc1adb28f..3d4b29686132 100644
--- a/drivers/gpu/drm/tests/drm_buddy_test.c
+++ b/drivers/gpu/drm/tests/drm_buddy_test.c
@@ -14,11 +14,216 @@
 
 #include "../lib/drm_random.h"
 
+static unsigned int random_seed;
+
 static inline u64 get_size(int order, u64 chunk_size)
 {
 	return (1 << order) * chunk_size;
 }
 
+static void drm_test_buddy_alloc_range_bias(struct kunit *test)
+{
+	u32 mm_size, ps, bias_size, bias_start, bias_end, bias_rem;
+	DRM_RND_STATE(prng, random_seed);
+	unsigned int i, count, *order;
+	struct drm_buddy mm;
+	LIST_HEAD(allocated);
+
+	bias_size = SZ_1M;
+	ps = roundup_pow_of_two(prandom_u32_state(&prng) % bias_size);
+	ps = max(SZ_4K, ps);
+	mm_size = (SZ_8M - 1) & ~(ps - 1); /* Multiple roots */
+
+	kunit_info(test, "mm_size=%u, ps=%u\n", mm_size, ps);
+
+	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, ps),
+			       "buddy_init failed\n");
+
+	count = mm_size / bias_size;
+	order = drm_random_order(count, &prng);
+	KUNIT_EXPECT_TRUE(test, order);
+
+	/*
+	 * Idea is to split the address space into uniform bias ranges, and
+	 * then in some random order allocate within each bias, using various
+	 * patterns within. This should detect if allocations leak out from a
+	 * given bias, for example.
+	 */
+
+	for (i = 0; i < count; i++) {
+		LIST_HEAD(tmp);
+		u64 size;
+
+		bias_start = order[i] * bias_size;
+		bias_end = bias_start + bias_size;
+		bias_rem = bias_size;
+
+		/* internal round_up too big */
+		KUNIT_ASSERT_TRUE_MSG(test,
+				      drm_buddy_alloc_blocks(&mm, bias_start,
+							     bias_end, bias_size + ps,
+							     bias_size,
+							     &allocated,
+							     DRM_BUDDY_RANGE_ALLOCATION),
+				      "buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n",
+				      bias_start, bias_end, bias_size, bias_size);
+
+		/* size too big */
+		KUNIT_ASSERT_TRUE_MSG(test,
+				      drm_buddy_alloc_blocks(&mm, bias_start,
+							     bias_end, bias_size + ps,
+							     ps, &allocated,
+							     DRM_BUDDY_RANGE_ALLOCATION),
+				      "buddy_alloc didn't fail with bias(%x-%x), size=%u, ps=%u\n",
+				      bias_start, bias_end, bias_size + ps, ps);
+
+		/* bias range too small for size */
+		KUNIT_ASSERT_TRUE_MSG(test,
+				      drm_buddy_alloc_blocks(&mm, bias_start + ps,
+							     bias_end, bias_size,
+							     ps, &allocated,
+							     DRM_BUDDY_RANGE_ALLOCATION),
+				      "buddy_alloc didn't fail with bias(%x-%x), size=%u, ps=%u\n",
+				      bias_start + ps, bias_end, bias_size, ps);
+
+		/* bias misaligned */
+		KUNIT_ASSERT_TRUE_MSG(test,
+				      drm_buddy_alloc_blocks(&mm, bias_start + ps,
+							     bias_end - ps,
+							     bias_size >> 1, bias_size >> 1,
+							     &allocated,
+							     DRM_BUDDY_RANGE_ALLOCATION),
+				      "buddy_alloc h didn't fail with bias(%x-%x), size=%u, ps=%u\n",
+				      bias_start + ps, bias_end - ps,
+				      bias_size >> 1, bias_size >> 1);
+
+		/* single big page */
+		KUNIT_ASSERT_FALSE_MSG(test,
+				       drm_buddy_alloc_blocks(&mm, bias_start,
+							      bias_end, bias_size,
+							      bias_size, &tmp,
+							      DRM_BUDDY_RANGE_ALLOCATION),
+				       "buddy_alloc i failed with bias(%x-%x), size=%u, ps=%u\n",
+				       bias_start, bias_end, bias_size, bias_size);
+		drm_buddy_free_list(&mm, &tmp);
+
+		/* single page with internal round_up */
+		KUNIT_ASSERT_FALSE_MSG(test,
+				       drm_buddy_alloc_blocks(&mm, bias_start,
+							      bias_end, ps, bias_size,
+							      &tmp,
+							      DRM_BUDDY_RANGE_ALLOCATION),
+				       "buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n",
+				       bias_start, bias_end, ps, bias_size);
+		drm_buddy_free_list(&mm, &tmp);
+
+		/* random size within */
+		size = max(round_up(prandom_u32_state(&prng) % bias_rem, ps), ps);
+		if (size)
+			KUNIT_ASSERT_FALSE_MSG(test,
+					       drm_buddy_alloc_blocks(&mm, bias_start,
+								      bias_end, size, ps,
+								      &tmp,
+								      DRM_BUDDY_RANGE_ALLOCATION),
+					       "buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n",
+					       bias_start, bias_end, size, ps);
+
+		bias_rem -= size;
+		/* too big for current avail */
+		KUNIT_ASSERT_TRUE_MSG(test,
+				      drm_buddy_alloc_blocks(&mm, bias_start,
+							     bias_end, bias_rem + ps,
+							     ps, &allocated,
+							     DRM_BUDDY_RANGE_ALLOCATION),
+				      "buddy_alloc didn't fail with bias(%x-%x), size=%u, ps=%u\n",
+				      bias_start, bias_end, bias_rem + ps, ps);
+
+		if (bias_rem) {
+			/* random fill of the remainder */
+			size = max(round_up(prandom_u32_state(&prng) % bias_rem, ps), ps);
+			size = max(size, ps);
+
+			KUNIT_ASSERT_FALSE_MSG(test,
+					       drm_buddy_alloc_blocks(&mm, bias_start,
+								      bias_end, size, ps,
+								      &allocated,
+								      DRM_BUDDY_RANGE_ALLOCATION),
+					       "buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n",
+					       bias_start, bias_end, size, ps);
+			/*
+			 * Intentionally allow some space to be left
+			 * unallocated, and ideally not always on the bias
+			 * boundaries.
+			 */
+			drm_buddy_free_list(&mm, &tmp);
+		} else {
+			list_splice_tail(&tmp, &allocated);
+		}
+	}
+
+	kfree(order);
+	drm_buddy_free_list(&mm, &allocated);
+	drm_buddy_fini(&mm);
+
+	/*
+	 * Something more free-form. Idea is to pick a random starting bias
+	 * range within the address space and then start filling it up. Also
+	 * randomly grow the bias range in both directions as we go along. This
+	 * should give us bias start/end which is not always uniform like
+	 * above, and in some cases will require the allocator to jump over
+	 * already allocated nodes in the middle of the address space.
+	 */
+
+	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, ps),
+			       "buddy_init failed\n");
+
+	bias_start = round_up(prandom_u32_state(&prng) % (mm_size - ps), ps);
+	bias_end = round_up(bias_start + prandom_u32_state(&prng) % (mm_size - bias_start), ps);
+	bias_end = max(bias_end, bias_start + ps);
+	bias_rem = bias_end - bias_start;
+
+	do {
+		u64 size = max(round_up(prandom_u32_state(&prng) % bias_rem, ps), ps);
+
+		KUNIT_ASSERT_FALSE_MSG(test,
+				       drm_buddy_alloc_blocks(&mm, bias_start,
+							      bias_end, size, ps,
+							      &allocated,
+							      DRM_BUDDY_RANGE_ALLOCATION),
+				       "buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n",
+				       bias_start, bias_end, size, ps);
+		bias_rem -= size;
+
+		/*
+		 * Try to randomly grow the bias range in both directions, or
+		 * only one, or perhaps don't grow at all.
+		 */
+		do {
+			u64 old_bias_start = bias_start;
+			u64 old_bias_end = bias_end;
+
+			if (bias_start)
+				bias_start -= round_up(prandom_u32_state(&prng) % bias_start, ps);
+			if (bias_end != mm_size)
+				bias_end += round_up(prandom_u32_state(&prng) % (mm_size - bias_end), ps);
+
+			bias_rem += old_bias_start - bias_start;
+			bias_rem += bias_end - old_bias_end;
+		} while (!bias_rem && (bias_start || bias_end != mm_size));
+	} while (bias_rem);
+
+	KUNIT_ASSERT_EQ(test, bias_start, 0);
+	KUNIT_ASSERT_EQ(test, bias_end, mm_size);
+	KUNIT_ASSERT_TRUE_MSG(test,
+			      drm_buddy_alloc_blocks(&mm, bias_start, bias_end,
+						     ps, ps,
+						     &allocated,
+						     DRM_BUDDY_RANGE_ALLOCATION),
+			      "buddy_alloc passed with bias(%x-%x), size=%u\n",
+			      bias_start, bias_end, ps);
+
+	drm_buddy_free_list(&mm, &allocated);
+	drm_buddy_fini(&mm);
+}
+
 static void drm_test_buddy_alloc_contiguous(struct kunit *test)
 {
 	u32 mm_size, ps = SZ_4K, i, n_pages, total;
@@ -363,17 +568,30 @@ static void drm_test_buddy_alloc_limit(struct kunit *test)
 	drm_buddy_fini(&mm);
 }
 
+static int drm_buddy_suite_init(struct kunit_suite *suite)
+{
+	while (!random_seed)
+		random_seed = get_random_u32();
+
+	kunit_info(suite, "Testing DRM buddy manager, with random_seed=0x%x\n",
+		   random_seed);
+
+	return 0;
+}
+
 static struct kunit_case drm_buddy_tests[] = {
 	KUNIT_CASE(drm_test_buddy_alloc_limit),
 	KUNIT_CASE(drm_test_buddy_alloc_optimistic),
 	KUNIT_CASE(drm_test_buddy_alloc_pessimistic),
 	KUNIT_CASE(drm_test_buddy_alloc_pathological),
 	KUNIT_CASE(drm_test_buddy_alloc_contiguous),
+	KUNIT_CASE(drm_test_buddy_alloc_range_bias),
 	{}
 };
 
 static struct kunit_suite drm_buddy_test_suite = {
 	.name = "drm_buddy",
+	.suite_init = drm_buddy_suite_init,
 	.test_cases = drm_buddy_tests,
 };
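For anyone wanting to exercise the new test locally, something along
these lines should work with the KUnit wrapper (assuming the in-tree
DRM tests .kunitconfig; adjust paths to taste):

	$ ./tools/testing/kunit/kunit.py run \
		--kunitconfig=drivers/gpu/drm/tests \
		'drm_buddy*'

The suite_init hook prints random_seed at the start of each run, so the
configuration behind any failure can at least be recovered from the log
when reporting it.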
From patchwork Thu Feb 15 17:44:36 2024
X-Patchwork-Id: 13558926
From: Matthew Auld <matthew.auld@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org, Matt Roper
Subject: [PATCH 5/6] drm/xe/stolen: lower the default alignment
Date: Thu, 15 Feb 2024 17:44:36 +0000
Message-ID: <20240215174431.285069-11-matthew.auld@intel.com>
In-Reply-To: <20240215174431.285069-7-matthew.auld@intel.com>

No need to be so aggressive here. The upper layers will already apply
the needed alignment, plus some allocations might wish to skip it. The
main issue is that we might want a start/end bias range which doesn't
match the default alignment, and such a range is rejected by the
allocator.

Signed-off-by: Matthew Auld
Cc: Matt Roper
---
 drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c b/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c
index 662f1e9bfc65..2e94f90e1018 100644
--- a/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c
+++ b/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c
@@ -203,7 +203,7 @@ void xe_ttm_stolen_mgr_init(struct xe_device *xe)
 {
 	struct xe_ttm_stolen_mgr *mgr = drmm_kzalloc(&xe->drm, sizeof(*mgr), GFP_KERNEL);
 	struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
-	u64 stolen_size, io_size, pgsize;
+	u64 stolen_size, io_size;
 	int err;
 
 	if (IS_SRIOV_VF(xe))
@@ -220,10 +220,6 @@ void xe_ttm_stolen_mgr_init(struct xe_device *xe)
 		return;
 	}
 
-	pgsize = xe->info.vram_flags & XE_VRAM_FLAGS_NEED64K ? SZ_64K : SZ_4K;
-	if (pgsize < PAGE_SIZE)
-		pgsize = PAGE_SIZE;
-
 	/*
 	 * We don't try to attempt partial visible support for stolen vram,
 	 * since stolen is always at the end of vram, and the BAR size is pretty
@@ -234,7 +230,7 @@
 	io_size = stolen_size;
 
 	err = __xe_ttm_vram_mgr_init(xe, &mgr->base, XE_PL_STOLEN, stolen_size,
-				     io_size, pgsize);
+				     io_size, SZ_4K);
 	if (err) {
 		drm_dbg_kms(&xe->drm, "Stolen mgr init failed: %i\n", err);
 		return;
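Connecting this to the rest of the series (illustration only, values
hypothetical): the buddy allocator rejects up front any request whose
start/end/size is not a multiple of the manager's minimum chunk size,
roughly an IS_ALIGNED(start | end | size, chunk_size) style check. With
the stolen manager initialised at a 64K minimum on NEED64K platforms, a
biased range starting at SZ_4K, as the next patch wants for FBC, could
never be expressed:

	/* chunk_size = SZ_64K, requested bias = [SZ_4K, stolen_size):
	 * SZ_4K is not a multiple of SZ_64K, so the request is rejected
	 * before the allocator even searches for space. Initialising
	 * the manager at SZ_4K avoids this; callers that need 64K
	 * placement request it per-allocation via min_block_size. */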
From patchwork Thu Feb 15 17:44:37 2024
X-Patchwork-Id: 13558928
From: Matthew Auld <matthew.auld@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org, Matt Roper
Subject: [PATCH 6/6] drm/xe/stolen: ignore first page for FBC
Date: Thu, 15 Feb 2024 17:44:37 +0000
Message-ID: <20240215174431.285069-12-matthew.auld@intel.com>
In-Reply-To: <20240215174431.285069-7-matthew.auld@intel.com>

Seems like we can potentially hit display underruns if the CFB offset
is within the first page of stolen. Just like i915, skip the first
page.

BSpec: 50214
Reported-by: Matt Roper
Signed-off-by: Matthew Auld
Reviewed-by: Maarten Lankhorst
---
 drivers/gpu/drm/xe/compat-i915-headers/i915_gem_stolen.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/xe/compat-i915-headers/i915_gem_stolen.h b/drivers/gpu/drm/xe/compat-i915-headers/i915_gem_stolen.h
index bd233007c1b7..003474cfdf31 100644
--- a/drivers/gpu/drm/xe/compat-i915-headers/i915_gem_stolen.h
+++ b/drivers/gpu/drm/xe/compat-i915-headers/i915_gem_stolen.h
@@ -19,6 +19,9 @@ static inline int i915_gem_stolen_insert_node_in_range(struct xe_device *xe,
 	int err;
 	u32 flags = XE_BO_CREATE_PINNED_BIT | XE_BO_CREATE_STOLEN_BIT;
 
+	if (start < SZ_4K)
+		start = SZ_4K;
+
 	if (align)
 		size = ALIGN(size, align);
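A sketch of the net effect (hypothetical request, not from the patch):
any stolen-memory search range handed to this helper is clamped away
from offset zero, so even a caller asking for the whole of stolen can
only be placed at SZ_4K or above, keeping the CFB clear of the first
page:

	/* requested range:  [0, stolen_size)
	 * effective range:  [SZ_4K, stolen_size)
	 * => a successful allocation always starts at >= SZ_4K */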