From patchwork Fri Jun 17 19:05:07 2022
X-Patchwork-Submitter: Bob Beckett
X-Patchwork-Id: 12885903
From: Robert Beckett
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin,
	David Airlie, Daniel Vetter
Cc: Robert Beckett, Thomas Hellström, kernel@collabora.com,
	Matthew Auld, linux-kernel@vger.kernel.org
Subject: [PATCH v6 01/10] drm/i915/ttm: don't trample cache_level overrides during ttm move
Date: Fri, 17 Jun 2022 19:05:07 +0000
Message-Id: <20220617190516.2805572-2-bob.beckett@collabora.com>
In-Reply-To: <20220617190516.2805572-1-bob.beckett@collabora.com>
References: <20220617190516.2805572-1-bob.beckett@collabora.com>

Various places within the driver override the default chosen cache_level.
Before ttm, these overrides were permanent: they lasted until explicitly
changed again, or for the lifetime of the buffer.

The ttm move code then began re-deciding the cache level at move time,
which is usually well after object creation, trampling any earlier
cache_level override and reverting the object to the default decision.

Add logic to indicate whether the caching mode has been set by anything
other than the move logic. If it has, assume that the code that overrode
the defaults knows best and keep it.
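The intended semantics can be modelled in stand-alone C; the types below
are simplified, hypothetical stand-ins for the i915 structures, not the
driver code itself:

#include <stdbool.h>
#include <stdio.h>

enum cache_level { CACHE_NONE, CACHE_LLC };

struct object {
	enum cache_level cache_level;
	bool cache_level_override;	/* set by any caller outside the move path */
};

/* Every explicit caller marks the cache level as overridden. */
static void set_cache_coherency(struct object *obj, enum cache_level lvl)
{
	obj->cache_level = lvl;
	obj->cache_level_override = true;
}

/* The move path re-derives the level only when nobody overrode it. */
static void adjust_after_move(struct object *obj, enum cache_level def)
{
	if (!obj->cache_level_override) {
		set_cache_coherency(obj, def);
		obj->cache_level_override = false;	/* this call was not an override */
	}
}

int main(void)
{
	struct object obj = { CACHE_LLC, false };

	set_cache_coherency(&obj, CACHE_NONE);	/* e.g. an override at creation */
	adjust_after_move(&obj, CACHE_LLC);	/* the move must not trample it */
	printf("%s\n", obj.cache_level == CACHE_NONE ? "override kept" : "trampled");
	return 0;
}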
Signed-off-by: Robert Beckett
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c       | 1 +
 drivers/gpu/drm/i915/gem/i915_gem_object_types.h | 1 +
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c          | 1 +
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c     | 9 ++++++---
 4 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 06b1b188ce5a..519887769c08 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -125,6 +125,7 @@ void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj,
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 
 	obj->cache_level = cache_level;
+	obj->ttm.cache_level_override = true;
 
 	if (cache_level != I915_CACHE_NONE)
 		obj->cache_coherent = (I915_BO_CACHE_COHERENT_FOR_READ |
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 2c88bdb8ff7c..6632ed52e919 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -605,6 +605,7 @@ struct drm_i915_gem_object {
 		struct i915_gem_object_page_iter get_io_page;
 		struct drm_i915_gem_object *backup;
 		bool created:1;
+		bool cache_level_override:1;
 	} ttm;
 
 	/*
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 4c25d9b2f138..27d59639177f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -1241,6 +1241,7 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 	i915_gem_object_init_memory_region(obj, mem);
 	i915_ttm_adjust_domains_after_move(obj);
 	i915_ttm_adjust_gem_after_move(obj);
+	obj->ttm.cache_level_override = false;
 	i915_gem_object_unlock(obj);
 
 	return 0;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
index a10716f4e717..4c1de0b4a10f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
@@ -123,9 +123,12 @@ void i915_ttm_adjust_gem_after_move(struct drm_i915_gem_object *obj)
 	obj->mem_flags |= i915_ttm_cpu_maps_iomem(bo->resource) ?
		I915_BO_FLAG_IOMEM : I915_BO_FLAG_STRUCT_PAGE;
 
-	cache_level = i915_ttm_cache_level(to_i915(bo->base.dev), bo->resource,
-					   bo->ttm);
-	i915_gem_object_set_cache_coherency(obj, cache_level);
+	if (!obj->ttm.cache_level_override) {
+		cache_level = i915_ttm_cache_level(to_i915(bo->base.dev),
+						   bo->resource, bo->ttm);
+		i915_gem_object_set_cache_coherency(obj, cache_level);
+		obj->ttm.cache_level_override = false;
+	}
 }
 
 /**

From patchwork Fri Jun 17 19:05:08 2022
X-Patchwork-Submitter: Bob Beckett
X-Patchwork-Id: 12885906
From: Robert Beckett
Subject: [PATCH v6 02/10] drm/i915: limit ttm to dma32 for i965G[M]
Date: Fri, 17 Jun 2022 19:05:08 +0000
Message-Id: <20220617190516.2805572-3-bob.beckett@collabora.com>
In-Reply-To: <20220617190516.2805572-1-bob.beckett@collabora.com>

i965G[M] cannot relocate objects above 4GiB. Ensure ttm uses dma32 on
these systems.
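The whole fix is one policy decision fed into ttm_device_init()'s
use_dma32 parameter; a hypothetical stand-alone model of that decision:

#include <stdbool.h>
#include <stdio.h>

/* i965G and i965GM relocations are limited to 32-bit addresses, so TTM
 * system pages must come from below 4 GiB (the DMA32 zone). */
static bool use_dma32(bool is_i965g, bool is_i965gm)
{
	return is_i965g || is_i965gm;
}

int main(void)
{
	printf("i965GM: use_dma32=%d\n", use_dma32(false, true));
	printf("other:  use_dma32=%d\n", use_dma32(false, false));
	return 0;
}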
Signed-off-by: Robert Beckett
---
 drivers/gpu/drm/i915/intel_region_ttm.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/intel_region_ttm.c b/drivers/gpu/drm/i915/intel_region_ttm.c
index 62ff77445b01..fd2ecfdd8fa1 100644
--- a/drivers/gpu/drm/i915/intel_region_ttm.c
+++ b/drivers/gpu/drm/i915/intel_region_ttm.c
@@ -32,10 +32,15 @@ int intel_region_ttm_device_init(struct drm_i915_private *dev_priv)
 {
 	struct drm_device *drm = &dev_priv->drm;
+	bool use_dma32 = false;
+
+	/* i965g[m] cannot relocate objects above 4GiB. */
+	if (IS_I965GM(dev_priv) || IS_I965G(dev_priv))
+		use_dma32 = true;
 
 	return ttm_device_init(&dev_priv->bdev, i915_ttm_driver(),
 			       drm->dev, drm->anon_inode->i_mapping,
-			       drm->vma_offset_manager, false, false);
+			       drm->vma_offset_manager, false, use_dma32);
 }
 
 /**

From patchwork Fri Jun 17 19:05:09 2022
X-Patchwork-Submitter: Bob Beckett
X-Patchwork-Id: 12885904
From: Robert Beckett
Subject: [PATCH v6 03/10] drm/i915/ttm: only trust snooping for dgfx when deciding default cache_level
Date: Fri, 17 Jun 2022 19:05:09 +0000
Message-Id: <20220617190516.2805572-4-bob.beckett@collabora.com>
In-Reply-To: <20220617190516.2805572-1-bob.beckett@collabora.com>
By default, i915_ttm_cache_level() decides I915_CACHE_LLC if HAS_SNOOP.
This diverges from the existing backend code, which only considers
HAS_LLC. Testing shows that trusting snooping is unreliable on gen5- and
on bsw via ggtt mappings, so limit it to DGFX for now and maintain the
previous behaviour.

Signed-off-by: Robert Beckett
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
index 4c1de0b4a10f..40249fa28a7a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
@@ -46,7 +46,9 @@ static enum i915_cache_level
 i915_ttm_cache_level(struct drm_i915_private *i915, struct ttm_resource *res,
 		     struct ttm_tt *ttm)
 {
-	return ((HAS_LLC(i915) || HAS_SNOOP(i915)) &&
+	bool can_snoop = HAS_SNOOP(i915) && IS_DGFX(i915);
+
+	return ((HAS_LLC(i915) || can_snoop) &&
 		!i915_ttm_gtt_binds_lmem(res) &&
 		ttm->caching == ttm_cached) ? I915_CACHE_LLC :
 		I915_CACHE_NONE;
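The resulting decision table can be exercised stand-alone (plain
booleans standing in for the i915 feature macros):

#include <stdbool.h>
#include <stdio.h>

enum cache_level { CACHE_NONE, CACHE_LLC };

/* After the patch, snooping is only trusted on discrete (DGFX) parts;
 * integrated parts must have an LLC, matching the other backends. */
static enum cache_level cache_level(bool has_llc, bool has_snoop,
				    bool is_dgfx, bool binds_lmem,
				    bool ttm_cached)
{
	bool can_snoop = has_snoop && is_dgfx;

	return ((has_llc || can_snoop) && !binds_lmem && ttm_cached) ?
		CACHE_LLC : CACHE_NONE;
}

int main(void)
{
	/* integrated part with snoop but no LLC: now defaults to NONE */
	printf("igfx+snoop: %d\n", cache_level(false, true, false, false, true));
	/* discrete part with snoop: still allowed to default to LLC */
	printf("dgfx+snoop: %d\n", cache_level(false, true, true, false, true));
	return 0;
}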
From patchwork Fri Jun 17 19:05:10 2022
X-Patchwork-Submitter: Bob Beckett
X-Patchwork-Id: 12885905
From: Robert Beckett
Subject: [PATCH v6 04/10] drm/i915/gem: selftest should not attempt mmap of private regions
Date: Fri, 17 Jun 2022 19:05:10 +0000
Message-Id: <20220617190516.2805572-5-bob.beckett@collabora.com>
In-Reply-To: <20220617190516.2805572-1-bob.beckett@collabora.com>

During testing, make can_mmap() consider whether the region is private.

Signed-off-by: Robert Beckett
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 5bc93a1ce3e3..76181e28c75e 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -869,6 +869,9 @@ static bool can_mmap(struct drm_i915_gem_object *obj, enum i915_mmap_type type)
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	bool no_map;
 
+	if (obj->mm.region && obj->mm.region->private)
+		return false;
+
 	if (obj->ops->mmap_offset)
 		return type == I915_MMAP_TYPE_FIXED;
 	else if (type == I915_MMAP_TYPE_FIXED)
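A minimal model of the added early-out (hypothetical types; only the new
check is shown):

#include <stdbool.h>
#include <stdio.h>

struct memory_region { bool private; };
struct object { struct memory_region *region; };

/* Private (kernel-internal) regions such as ttm-backed stolen have no
 * CPU mmap path, so the selftest must skip mmap attempts for them. */
static bool can_mmap(const struct object *obj)
{
	if (obj->region && obj->region->private)
		return false;
	return true;	/* ...the existing mmap-type checks follow here */
}

int main(void)
{
	struct memory_region stolen = { .private = true };
	struct object obj = { .region = &stolen };

	printf("can_mmap=%d\n", can_mmap(&obj));
	return 0;
}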
From patchwork Fri Jun 17 19:05:11 2022
X-Patchwork-Submitter: Bob Beckett
X-Patchwork-Id: 12885907
From: Robert Beckett
Subject: [PATCH v6 05/10] drm/i915: instantiate ttm range manager for stolen memory
Date: Fri, 17 Jun 2022 19:05:11 +0000
Message-Id: <20220617190516.2805572-6-bob.beckett@collabora.com>
In-Reply-To: <20220617190516.2805572-1-bob.beckett@collabora.com>

Prepare for a ttm-based stolen region by using the ttm range manager as
the resource manager for the stolen region.

Signed-off-by: Robert Beckett
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c |  6 ++--
 drivers/gpu/drm/i915/intel_region_ttm.c      | 31 +++++++++++++++-----
 2 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
index 40249fa28a7a..675e9ab30396 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
@@ -60,11 +60,13 @@ i915_ttm_region(struct ttm_device *bdev, int ttm_mem_type)
 	struct drm_i915_private *i915 = container_of(bdev, typeof(*i915), bdev);
 
 	/* There's some room for optimization here... */
-	GEM_BUG_ON(ttm_mem_type != I915_PL_SYSTEM &&
-		   ttm_mem_type < I915_PL_LMEM0);
+	GEM_BUG_ON(ttm_mem_type == I915_PL_GGTT);
+
 	if (ttm_mem_type == I915_PL_SYSTEM)
 		return intel_memory_region_lookup(i915, INTEL_MEMORY_SYSTEM,
 						  0);
+	if (ttm_mem_type == I915_PL_STOLEN)
+		return i915->mm.stolen_region;
 
 	return intel_memory_region_lookup(i915, INTEL_MEMORY_LOCAL,
 					  ttm_mem_type - I915_PL_LMEM0);
diff --git a/drivers/gpu/drm/i915/intel_region_ttm.c b/drivers/gpu/drm/i915/intel_region_ttm.c
index fd2ecfdd8fa1..694e9acb69e2 100644
--- a/drivers/gpu/drm/i915/intel_region_ttm.c
+++ b/drivers/gpu/drm/i915/intel_region_ttm.c
@@ -54,7 +54,7 @@ void intel_region_ttm_device_fini(struct drm_i915_private *dev_priv)
 
 /*
  * Map the i915 memory regions to TTM memory types. We use the
- * driver-private types for now, reserving TTM_PL_VRAM for stolen
+ * driver-private types for now, reserving I915_PL_STOLEN for stolen
  * memory and TTM_PL_TT for GGTT use if decided to implement this.
  */
 int intel_region_to_ttm_type(const struct intel_memory_region *mem)
@@ -63,11 +63,17 @@ int intel_region_to_ttm_type(const struct intel_memory_region *mem)
 	GEM_BUG_ON(mem->type != INTEL_MEMORY_LOCAL &&
 		   mem->type != INTEL_MEMORY_MOCK &&
-		   mem->type != INTEL_MEMORY_SYSTEM);
+		   mem->type != INTEL_MEMORY_SYSTEM &&
+		   mem->type != INTEL_MEMORY_STOLEN_SYSTEM &&
+		   mem->type != INTEL_MEMORY_STOLEN_LOCAL);
 
 	if (mem->type == INTEL_MEMORY_SYSTEM)
 		return TTM_PL_SYSTEM;
 
+	if (mem->type == INTEL_MEMORY_STOLEN_SYSTEM ||
+	    mem->type == INTEL_MEMORY_STOLEN_LOCAL)
+		return I915_PL_STOLEN;
+
 	type = mem->instance + TTM_PL_PRIV;
 	GEM_BUG_ON(type >= TTM_NUM_MEM_TYPES);
 
@@ -91,10 +97,16 @@ int intel_region_ttm_init(struct intel_memory_region *mem)
 	int mem_type = intel_region_to_ttm_type(mem);
 	int ret;
 
-	ret = i915_ttm_buddy_man_init(bdev, mem_type, false,
-				      resource_size(&mem->region),
-				      mem->io_size,
-				      mem->min_page_size, PAGE_SIZE);
+	if (mem_type == I915_PL_STOLEN) {
+		ret = ttm_range_man_init(bdev, mem_type, false,
+					 resource_size(&mem->region) >> PAGE_SHIFT);
+		mem->is_range_manager = true;
+	} else {
+		ret = i915_ttm_buddy_man_init(bdev, mem_type, false,
+					      resource_size(&mem->region),
+					      mem->io_size,
+					      mem->min_page_size, PAGE_SIZE);
+	}
 	if (ret)
 		return ret;
 
@@ -114,6 +126,7 @@ int intel_region_ttm_init(struct intel_memory_region *mem)
 int intel_region_ttm_fini(struct intel_memory_region *mem)
 {
 	struct ttm_resource_manager *man = mem->region_private;
+	int mem_type = intel_region_to_ttm_type(mem);
 	int ret = -EBUSY;
 	int count;
 
@@ -144,8 +157,10 @@ int intel_region_ttm_fini(struct intel_memory_region *mem)
 	if (ret || !man)
 		return ret;
 
-	ret = i915_ttm_buddy_man_fini(&mem->i915->bdev,
-				      intel_region_to_ttm_type(mem));
+	if (mem_type == I915_PL_STOLEN)
+		ret = ttm_range_man_fini(&mem->i915->bdev, mem_type);
+	else
+		ret = i915_ttm_buddy_man_fini(&mem->i915->bdev, mem_type);
 	GEM_WARN_ON(ret);
 	mem->region_private = NULL;
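The shape of the mapping and the manager choice can be sketched
stand-alone; the numeric slot used for I915_PL_STOLEN below is a
placeholder, not the real value:

#include <stdio.h>

enum region_type { MEM_SYSTEM, MEM_LOCAL, MEM_STOLEN_SYSTEM, MEM_STOLEN_LOCAL };

#define TTM_PL_SYSTEM	0
#define TTM_PL_PRIV	4
#define I915_PL_STOLEN	(TTM_PL_PRIV + 8)	/* placeholder slot id */

/* Both stolen flavours collapse onto one driver-private placement,
 * backed by the simple range manager rather than the buddy allocator,
 * since stolen is a small contiguous carve-out. */
static int to_ttm_type(enum region_type type, int instance)
{
	if (type == MEM_SYSTEM)
		return TTM_PL_SYSTEM;
	if (type == MEM_STOLEN_SYSTEM || type == MEM_STOLEN_LOCAL)
		return I915_PL_STOLEN;
	return TTM_PL_PRIV + instance;
}

int main(void)
{
	int t = to_ttm_type(MEM_STOLEN_LOCAL, 0);

	printf("stolen -> type %d (%s manager)\n", t,
	       t == I915_PL_STOLEN ? "range" : "buddy");
	return 0;
}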
From patchwork Fri Jun 17 19:05:12 2022
X-Patchwork-Submitter: Bob Beckett
X-Patchwork-Id: 12885911
From: Robert Beckett
Subject: [PATCH v6 06/10] drm/i915: sanitize mem_flags for stolen buffers
Date: Fri, 17 Jun 2022 19:05:12 +0000
Message-Id: <20220617190516.2805572-7-bob.beckett@collabora.com>
In-Reply-To: <20220617190516.2805572-1-bob.beckett@collabora.com>

Stolen regions are not page backed or considered iomem, so prevent the
flags indicating such. This correctly prevents stolen buffers from
attempting to be mapped directly. See i915_gem_object_has_struct_page()
and i915_gem_object_has_iomem() usage for where it would break
otherwise.

Signed-off-by: Robert Beckett
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
index 675e9ab30396..81c67ca9edda 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
@@ -14,6 +14,7 @@
 #include "gem/i915_gem_region.h"
 #include "gem/i915_gem_ttm.h"
 #include "gem/i915_gem_ttm_move.h"
+#include "gem/i915_gem_stolen.h"
 
 #include "gt/intel_engine_pm.h"
 #include "gt/intel_gt.h"
@@ -124,8 +125,9 @@ void i915_ttm_adjust_gem_after_move(struct drm_i915_gem_object *obj)
 
 	obj->mem_flags &= ~(I915_BO_FLAG_STRUCT_PAGE | I915_BO_FLAG_IOMEM);
 
-	obj->mem_flags |= i915_ttm_cpu_maps_iomem(bo->resource) ? I915_BO_FLAG_IOMEM :
-		I915_BO_FLAG_STRUCT_PAGE;
+	if (!i915_gem_object_is_stolen(obj))
+		obj->mem_flags |= i915_ttm_cpu_maps_iomem(bo->resource) ?
+			I915_BO_FLAG_IOMEM : I915_BO_FLAG_STRUCT_PAGE;
 
 	if (!obj->ttm.cache_level_override) {
 		cache_level = i915_ttm_cache_level(to_i915(bo->base.dev),
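The sanitized flag computation, modelled stand-alone with hypothetical
flag values:

#include <stdbool.h>
#include <stdio.h>

#define BO_FLAG_STRUCT_PAGE	(1u << 0)
#define BO_FLAG_IOMEM		(1u << 1)

/* Stolen buffers are neither page-backed nor CPU-mappable iomem, so
 * neither mapping flag may be set; everything else keeps exactly one. */
static unsigned int mem_flags(bool is_stolen, bool cpu_maps_iomem)
{
	unsigned int flags = 0;

	if (!is_stolen)
		flags |= cpu_maps_iomem ? BO_FLAG_IOMEM : BO_FLAG_STRUCT_PAGE;
	return flags;
}

int main(void)
{
	printf("stolen: %#x, lmem: %#x, smem: %#x\n",
	       mem_flags(true, true), mem_flags(false, true),
	       mem_flags(false, false));
	return 0;
}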
From patchwork Fri Jun 17 19:05:13 2022
X-Patchwork-Submitter: Bob Beckett
X-Patchwork-Id: 12885910
From: Robert Beckett
Subject: [PATCH v6 07/10] drm/i915: ttm move/clear logic fix
Date: Fri, 17 Jun 2022 19:05:13 +0000
Message-Id: <20220617190516.2805572-8-bob.beckett@collabora.com>
In-Reply-To: <20220617190516.2805572-1-bob.beckett@collabora.com>

ttm-managed buffers start off with system resource definitions and
ttm_tt tracking structures allocated (though unpopulated). Currently
this prevents clearing of buffers on first move to the desired
placements.

The desired behaviour is to clear user-allocated buffers, and only
those kernel buffers that specifically request it. Make the logic match
the desired behaviour.
Signed-off-by: Robert Beckett
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c | 22 +++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
index 81c67ca9edda..a3f8fc056dbc 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
@@ -3,6 +3,7 @@
  * Copyright © 2021 Intel Corporation
  */
 
+#include "drm/ttm/ttm_tt.h"
 #include <drm/ttm/ttm_bo_driver.h>
 
 #include "i915_deps.h"
@@ -476,6 +477,25 @@ __i915_ttm_move(struct ttm_buffer_object *bo,
 	return fence;
 }
 
+static bool
+allow_clear(struct drm_i915_gem_object *obj, struct ttm_tt *ttm,
+	    struct ttm_resource *dst_mem)
+{
+	/* never clear stolen */
+	if (dst_mem->mem_type == I915_PL_STOLEN)
+		return false;
+
+	/*
+	 * we want to clear user buffers and any kernel buffers
+	 * that specifically request clearing.
+	 */
+	if (obj->flags & I915_BO_ALLOC_USER)
+		return true;
+
+	if (ttm && ttm->page_flags & TTM_TT_FLAG_ZERO_ALLOC)
+		return true;
+
+	return false;
+}
+
 /**
  * i915_ttm_move - The TTM move callback used by i915.
  * @bo: The buffer object.
@@ -526,7 +546,7 @@ int i915_ttm_move(struct ttm_buffer_object *bo, bool evict,
 		return PTR_ERR(dst_rsgt);
 
 	clear = !i915_ttm_cpu_maps_iomem(bo->resource) && (!ttm || !ttm_tt_is_populated(ttm));
-	if (!(clear && ttm && !(ttm->page_flags & TTM_TT_FLAG_ZERO_ALLOC))) {
+	if (!clear || allow_clear(obj, ttm, dst_mem)) {
		struct i915_deps deps;

		i915_deps_init(&deps, GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
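The clearing policy introduced by allow_clear() can be exercised
stand-alone (simplified flags and types, not the driver code):

#include <stdbool.h>
#include <stdio.h>
#include <stddef.h>

#define BO_ALLOC_USER		(1u << 0)
#define TT_FLAG_ZERO_ALLOC	(1u << 1)

struct ttm_tt_model { unsigned int page_flags; };

/* Policy from the patch: never clear stolen (it can hold pre-OS data
 * such as the BIOS framebuffer), always clear user buffers, and clear
 * kernel buffers only when they asked for zeroed pages. */
static bool allow_clear(unsigned int obj_flags,
			const struct ttm_tt_model *ttm, bool dst_is_stolen)
{
	if (dst_is_stolen)
		return false;
	if (obj_flags & BO_ALLOC_USER)
		return true;
	if (ttm && (ttm->page_flags & TT_FLAG_ZERO_ALLOC))
		return true;
	return false;
}

int main(void)
{
	struct ttm_tt_model zeroed = { .page_flags = TT_FLAG_ZERO_ALLOC };

	printf("user -> stolen: %d\n", allow_clear(BO_ALLOC_USER, NULL, true));
	printf("kernel + zero:  %d\n", allow_clear(0, &zeroed, false));
	printf("plain kernel:   %d\n", allow_clear(0, NULL, false));
	return 0;
}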
From patchwork Fri Jun 17 19:05:14 2022
X-Patchwork-Submitter: Bob Beckett
X-Patchwork-Id: 12885912
From: Robert Beckett
Subject: [PATCH v6 08/10] drm/i915: allow memory region creators to alloc and free the region
Date: Fri, 17 Jun 2022 19:05:14 +0000
Message-Id: <20220617190516.2805572-9-bob.beckett@collabora.com>
In-Reply-To: <20220617190516.2805572-1-bob.beckett@collabora.com>

Add callbacks for alloc and free. This allows region creators to
allocate any extra storage they may require.

Signed-off-by: Robert Beckett
---
 drivers/gpu/drm/i915/intel_memory_region.c | 16 +++++++++++++---
 drivers/gpu/drm/i915/intel_memory_region.h |  2 ++
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c
index e38d2db1c3e3..3da07a712f90 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/intel_memory_region.c
@@ -231,7 +231,10 @@ intel_memory_region_create(struct drm_i915_private *i915,
 	struct intel_memory_region *mem;
 	int err;
 
-	mem = kzalloc(sizeof(*mem), GFP_KERNEL);
+	if (ops->alloc)
+		mem = ops->alloc();
+	else
+		mem = kzalloc(sizeof(*mem), GFP_KERNEL);
 	if (!mem)
 		return ERR_PTR(-ENOMEM);
 
@@ -265,7 +268,10 @@ intel_memory_region_create(struct drm_i915_private *i915,
 	if (mem->ops->release)
 		mem->ops->release(mem);
 err_free:
-	kfree(mem);
+	if (mem->ops->free)
+		mem->ops->free(mem);
+	else
+		kfree(mem);
 	return ERR_PTR(err);
 }
 
@@ -288,7 +294,11 @@ void intel_memory_region_destroy(struct intel_memory_region *mem)
 	GEM_WARN_ON(!list_empty_careful(&mem->objects.list));
 
 	mutex_destroy(&mem->objects.lock);
-	if (!ret)
+	if (ret)
+		return;
+	if (mem->ops->free)
+		mem->ops->free(mem);
+	else
 		kfree(mem);
 }
 
diff --git a/drivers/gpu/drm/i915/intel_memory_region.h b/drivers/gpu/drm/i915/intel_memory_region.h
index 3d8378c1b447..048955b5429f 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.h
+++ b/drivers/gpu/drm/i915/intel_memory_region.h
@@ -61,6 +61,8 @@ struct intel_memory_region_ops {
 			   resource_size_t size,
 			   resource_size_t page_size,
 			   unsigned int flags);
+	struct intel_memory_region *(*alloc)(void);
+	void (*free)(struct intel_memory_region *mem);
 };
 
 struct intel_memory_region {
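The create/destroy flow with the new optional hooks, modelled
stand-alone; struct names here are illustrative only:

#include <stdio.h>
#include <stdlib.h>

struct region;

struct region_ops {
	struct region *(*alloc)(void);	/* optional: embed extra storage */
	void (*free)(struct region *mem);
};

struct region { const struct region_ops *ops; };

static struct region *region_create(const struct region_ops *ops)
{
	struct region *mem = ops->alloc ? ops->alloc() :
			     calloc(1, sizeof(*mem));

	if (mem)
		mem->ops = ops;
	return mem;
}

static void region_destroy(struct region *mem)
{
	if (mem->ops->free)
		mem->ops->free(mem);
	else
		free(mem);
}

/* A creator embedding the region in a larger struct, as the ttm stolen
 * backend does later in this series with struct stolen_region. */
struct stolen { struct region region; void *wa_resvd; };

static struct region *stolen_alloc(void)
{
	struct stolen *s = calloc(1, sizeof(*s));

	return s ? &s->region : NULL;
}

static void stolen_free(struct region *mem)
{
	free((struct stolen *)mem);	/* region is the first member */
}

int main(void)
{
	const struct region_ops ops = { stolen_alloc, stolen_free };
	struct region *mem = region_create(&ops);

	if (mem)
		region_destroy(mem);
	printf("ok\n");
	return 0;
}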
From patchwork Fri Jun 17 19:05:15 2022
X-Patchwork-Submitter: Bob Beckett
X-Patchwork-Id: 12885909
From: Robert Beckett
Subject: [PATCH v6 09/10] drm/i915/ttm: add buffer pin on alloc flag
Date: Fri, 17 Jun 2022 19:05:15 +0000
Message-Id: <20220617190516.2805572-10-bob.beckett@collabora.com>
In-Reply-To: <20220617190516.2805572-1-bob.beckett@collabora.com>

For situations where allocations need to fail on alloc instead of
delayed get_pages, add a new alloc flag to pin the ttm bo.

This makes sure that the resource has been allocated during buffer
creation, allowing it to fail with an error if the placement is
exhausted. This allows existing fallback options for stolen backend
allocation, like create_ring_vma, to work as expected.

Signed-off-by: Robert Beckett
---
 .../gpu/drm/i915/gem/i915_gem_object_types.h | 13 ++++++----
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c      | 25 ++++++++++++++++++-
 2 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 6632ed52e919..07bc11247a3e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -325,17 +325,20 @@ struct drm_i915_gem_object {
  * dealing with userspace objects the CPU fault handler is free to ignore this.
  */
 #define I915_BO_ALLOC_GPU_ONLY	  BIT(6)
+/* object should be pinned in destination region from allocation */
+#define I915_BO_ALLOC_PINNED	  BIT(7)
 #define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | \
 			     I915_BO_ALLOC_VOLATILE | \
 			     I915_BO_ALLOC_CPU_CLEAR | \
 			     I915_BO_ALLOC_USER | \
 			     I915_BO_ALLOC_PM_VOLATILE | \
 			     I915_BO_ALLOC_PM_EARLY | \
-			     I915_BO_ALLOC_GPU_ONLY)
-#define I915_BO_READONLY          BIT(7)
-#define I915_TILING_QUIRK_BIT     8 /* unknown swizzling; do not release! */
-#define I915_BO_PROTECTED         BIT(9)
-#define I915_BO_WAS_BOUND_BIT     10
+			     I915_BO_ALLOC_GPU_ONLY | \
+			     I915_BO_ALLOC_PINNED)
+#define I915_BO_READONLY          BIT(8)
+#define I915_TILING_QUIRK_BIT     9 /* unknown swizzling; do not release! */
+#define I915_BO_PROTECTED         BIT(10)
+#define I915_BO_WAS_BOUND_BIT     11
 
 /**
  * @mem_flags - Mutable placement-related flags
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 27d59639177f..bb988608296d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -998,6 +998,13 @@ static void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
 {
 	GEM_BUG_ON(!obj->ttm.created);
 
+	/* stolen objects are pinned for lifetime. Unpin before putting */
+	if (obj->flags & I915_BO_ALLOC_PINNED) {
+		ttm_bo_reserve(i915_gem_to_ttm(obj), true, false, NULL);
+		ttm_bo_unpin(i915_gem_to_ttm(obj));
+		ttm_bo_unreserve(i915_gem_to_ttm(obj));
+	}
+
 	ttm_bo_put(i915_gem_to_ttm(obj));
 }
 
@@ -1193,6 +1200,9 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 		.no_wait_gpu = false,
 	};
 	enum ttm_bo_type bo_type;
+	struct ttm_place _place;
+	struct ttm_placement _placement;
+	struct ttm_placement *placement;
 	int ret;
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
@@ -1222,6 +1232,17 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 	 */
 	i915_gem_object_make_unshrinkable(obj);
 
+	if (obj->flags & I915_BO_ALLOC_PINNED) {
+		i915_ttm_place_from_region(mem, &_place, obj->bo_offset,
+					   obj->base.size, obj->flags);
+		_placement.num_placement = 1;
+		_placement.placement = &_place;
+		_placement.num_busy_placement = 0;
+		_placement.busy_placement = NULL;
+		placement = &_placement;
+	} else {
+		placement = &i915_sys_placement;
+	}
+
 	/*
 	 * If this function fails, it will call the destructor, but
 	 * our caller still owns the object. So no freeing in the
 	 * destructor until obj->ttm.created is true.
 	 * Similarly, in delayed_destroy, we can't call ttm_bo_put()
 	 * until successful initialization.
 	 */
 	ret = ttm_bo_init_reserved(&i915->bdev, i915_gem_to_ttm(obj), size,
-				   bo_type, &i915_sys_placement,
+				   bo_type, placement,
 				   page_size >> PAGE_SHIFT,
 				   &ctx, NULL, NULL, i915_ttm_bo_destroy);
 	if (ret)
@@ -1242,6 +1263,8 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 	i915_ttm_adjust_domains_after_move(obj);
 	i915_ttm_adjust_gem_after_move(obj);
 	obj->ttm.cache_level_override = false;
+	if (obj->flags & I915_BO_ALLOC_PINNED)
+		ttm_bo_pin(i915_gem_to_ttm(obj));
 	i915_gem_object_unlock(obj);
 
 	return 0;
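The effect of the flag, failing at create time rather than at a later
get_pages(), in a stand-alone model with a toy allocator:

#include <stdio.h>

/* Toy placement with a fixed capacity, standing in for a TTM manager. */
struct placement { unsigned int free_pages; };

/* With ALLOC_PINNED the resource is taken (and held) at creation, so an
 * exhausted placement fails immediately, letting callers such as ring
 * creation fall back to another region. */
static int create_pinned(struct placement *pl, unsigned int pages)
{
	if (pl->free_pages < pages)
		return -1;		/* -ENOSPC in spirit */
	pl->free_pages -= pages;	/* pinned: held until final free */
	return 0;
}

int main(void)
{
	struct placement stolen = { .free_pages = 8 };

	printf("first 6 pages:  %d\n", create_pinned(&stolen, 6));
	printf("second 6 pages: %d (caller falls back)\n",
	       create_pinned(&stolen, 6));
	return 0;
}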
From patchwork Fri Jun 17 19:05:16 2022
X-Patchwork-Submitter: Bob Beckett
X-Patchwork-Id: 12885908
From: Robert Beckett
Subject: [PATCH v6 10/10] drm/i915: stolen memory use ttm backend
Date: Fri, 17 Jun 2022 19:05:16 +0000
Message-Id: <20220617190516.2805572-11-bob.beckett@collabora.com>
In-Reply-To: <20220617190516.2805572-1-bob.beckett@collabora.com>

Refactor the stolen memory region to use ttm. This necessitates using
ttm resources to track reserved stolen regions instead of drm_mm_nodes.

Signed-off-by: Robert Beckett
---
 drivers/gpu/drm/i915/display/intel_fbc.c      |  78 ++--
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |   2 -
 drivers/gpu/drm/i915/gem/i915_gem_stolen.c    | 440 +++++++-----------
 drivers/gpu/drm/i915/gem/i915_gem_stolen.h    |  21 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c       |   3 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.h       |   7 +
 drivers/gpu/drm/i915/gt/intel_rc6.c           |   4 +-
 drivers/gpu/drm/i915/gt/selftest_reset.c      |  16 +-
 drivers/gpu/drm/i915/i915_debugfs.c           |   7 +-
 drivers/gpu/drm/i915/i915_drv.h               |   5 -
 drivers/gpu/drm/i915/intel_region_ttm.c       |  42 +-
 drivers/gpu/drm/i915/intel_region_ttm.h       |   8 +-
 drivers/gpu/drm/i915/selftests/mock_region.c  |   3 +-
 13 files changed, 294 insertions(+), 342 deletions(-)
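The drm_mm_node-to-ttm_resource switch boils down to an opaque
reservation handle queried for offset and size; a stand-alone sketch
with simplified stand-in types (the real helpers are the
i915_gem_stolen_reserve_offset()/_size() functions added below):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT		12
#define BO_INVALID_OFFSET	((uint64_t)-1)

/* Stand-in for struct ttm_resource: page-granular start and length. */
struct resource_model { uint64_t start, num_pages; };

static uint64_t reserve_offset(const struct resource_model *res)
{
	return res ? res->start << PAGE_SHIFT : BO_INVALID_OFFSET;
}

static uint64_t reserve_size(const struct resource_model *res)
{
	return res ? res->num_pages << PAGE_SHIFT : BO_INVALID_OFFSET;
}

int main(void)
{
	struct resource_model llb = { .start = 1, .num_pages = 1 };

	printf("llb: offset %#llx, size %llu\n",
	       (unsigned long long)reserve_offset(&llb),
	       (unsigned long long)reserve_size(&llb));
	return 0;
}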
diff --git a/drivers/gpu/drm/i915/display/intel_fbc.c b/drivers/gpu/drm/i915/display/intel_fbc.c
index 8b807284cde1..6f3afac5e8c9 100644
--- a/drivers/gpu/drm/i915/display/intel_fbc.c
+++ b/drivers/gpu/drm/i915/display/intel_fbc.c
@@ -38,6 +38,7 @@
  * forcibly disable it to allow proper screen updates.
  */
 
+#include "gem/i915_gem_stolen.h"
 #include <drm/drm_fourcc.h>
 
@@ -51,6 +52,7 @@
 #include "intel_display_types.h"
 #include "intel_fbc.h"
 #include "intel_frontbuffer.h"
+#include "gem/i915_gem_region.h"
 
 #define for_each_fbc_id(__dev_priv, __fbc_id) \
 	for ((__fbc_id) = INTEL_FBC_A; (__fbc_id) < I915_MAX_FBCS; (__fbc_id)++) \
@@ -92,8 +94,8 @@ struct intel_fbc {
 	struct mutex lock;
 	unsigned int busy_bits;
 
-	struct drm_mm_node compressed_fb;
-	struct drm_mm_node compressed_llb;
+	struct ttm_resource *compressed_fb;
+	struct ttm_resource *compressed_llb;
 
 	enum intel_fbc_id id;
 
@@ -331,16 +333,20 @@ static void i8xx_fbc_nuke(struct intel_fbc *fbc)
 static void i8xx_fbc_program_cfb(struct intel_fbc *fbc)
 {
 	struct drm_i915_private *i915 = fbc->i915;
+	u64 fb_offset = i915_gem_stolen_reserve_offset(fbc->compressed_fb);
+	u64 llb_offset = i915_gem_stolen_reserve_offset(fbc->compressed_llb);
 
+	GEM_BUG_ON(fb_offset == I915_BO_INVALID_OFFSET);
+	GEM_BUG_ON(llb_offset == I915_BO_INVALID_OFFSET);
 	GEM_BUG_ON(range_overflows_end_t(u64, i915->dsm.start,
-					 fbc->compressed_fb.start, U32_MAX));
+					 fb_offset, U32_MAX));
 	GEM_BUG_ON(range_overflows_end_t(u64, i915->dsm.start,
-					 fbc->compressed_llb.start, U32_MAX));
+					 llb_offset, U32_MAX));
 
 	intel_de_write(i915, FBC_CFB_BASE,
-		       i915->dsm.start + fbc->compressed_fb.start);
+		       i915->dsm.start + fb_offset);
 	intel_de_write(i915, FBC_LL_BASE,
-		       i915->dsm.start + fbc->compressed_llb.start);
+		       i915->dsm.start + llb_offset);
 }
 
 static const struct intel_fbc_funcs i8xx_fbc_funcs = {
@@ -448,8 +454,10 @@ static bool g4x_fbc_is_compressing(struct intel_fbc *fbc)
 static void g4x_fbc_program_cfb(struct intel_fbc *fbc)
 {
 	struct drm_i915_private *i915 = fbc->i915;
+	u64 fb_offset = i915_gem_stolen_reserve_offset(fbc->compressed_fb);
 
-	intel_de_write(i915, DPFC_CB_BASE, fbc->compressed_fb.start);
+	GEM_BUG_ON(fb_offset == I915_BO_INVALID_OFFSET);
+	intel_de_write(i915, DPFC_CB_BASE, fb_offset);
 }
 
 static const struct intel_fbc_funcs g4x_fbc_funcs = {
@@ -499,8 +507,10 @@ static bool ilk_fbc_is_compressing(struct intel_fbc *fbc)
 static void ilk_fbc_program_cfb(struct intel_fbc *fbc)
 {
 	struct drm_i915_private *i915 = fbc->i915;
+	u64 fb_offset = i915_gem_stolen_reserve_offset(fbc->compressed_fb);
 
-	intel_de_write(i915, ILK_DPFC_CB_BASE(fbc->id), fbc->compressed_fb.start);
+	GEM_BUG_ON(fb_offset == I915_BO_INVALID_OFFSET);
+	intel_de_write(i915, ILK_DPFC_CB_BASE(fbc->id), fb_offset);
 }
 
 static const struct intel_fbc_funcs ilk_fbc_funcs = {
@@ -744,21 +754,24 @@ static int find_compression_limit(struct intel_fbc *fbc,
 {
 	struct drm_i915_private *i915 = fbc->i915;
 	u64 end = intel_fbc_stolen_end(i915);
-	int ret, limit = min_limit;
+	int limit = min_limit;
+	struct ttm_resource *res;
 
 	size /= limit;
 
 	/* Try to over-allocate to reduce reallocations and fragmentation. */
-	ret = i915_gem_stolen_insert_node_in_range(i915, &fbc->compressed_fb,
-						   size <<= 1, 4096, 0, end);
-	if (ret == 0)
+	res = i915_gem_stolen_reserve_range(i915, size <<= 1, 0, end);
+	if (!IS_ERR(res)) {
+		fbc->compressed_fb = res;
 		return limit;
+	}
 
 	for (; limit <= intel_fbc_max_limit(i915); limit <<= 1) {
-		ret = i915_gem_stolen_insert_node_in_range(i915, &fbc->compressed_fb,
-							   size >>= 1, 4096, 0, end);
-		if (ret == 0)
+		res = i915_gem_stolen_reserve_range(i915, size >>= 1, 0, end);
+		if (!IS_ERR(res)) {
+			fbc->compressed_fb = res;
 			return limit;
+		}
 	}
 
 	return 0;
@@ -769,17 +782,18 @@ static int intel_fbc_alloc_cfb(struct intel_fbc *fbc,
 {
 	struct drm_i915_private *i915 = fbc->i915;
 	int ret;
+	struct ttm_resource *res;
 
-	drm_WARN_ON(&i915->drm,
-		    drm_mm_node_allocated(&fbc->compressed_fb));
-	drm_WARN_ON(&i915->drm,
-		    drm_mm_node_allocated(&fbc->compressed_llb));
+	drm_WARN_ON(&i915->drm, fbc->compressed_fb);
+	drm_WARN_ON(&i915->drm, fbc->compressed_llb);
 
 	if (DISPLAY_VER(i915) < 5 && !IS_G4X(i915)) {
-		ret = i915_gem_stolen_insert_node(i915, &fbc->compressed_llb,
-						  4096, 4096);
-		if (ret)
+		res = i915_gem_stolen_reserve_range(i915, 4096, I915_GEM_STOLEN_BIAS, 0);
+		if (IS_ERR(res)) {
+			ret = PTR_ERR(res);
 			goto err;
+		}
+		fbc->compressed_llb = res;
 	}
 
 	ret = find_compression_limit(fbc, size, min_limit);
@@ -793,15 +807,14 @@ static int intel_fbc_alloc_cfb(struct intel_fbc *fbc,
 
 	drm_dbg_kms(&i915->drm,
 		    "reserved %llu bytes of contiguous stolen space for FBC, limit: %d\n",
-		    fbc->compressed_fb.size, fbc->limit);
+		    i915_gem_stolen_reserve_size(fbc->compressed_fb), fbc->limit);
 
 	return 0;
 
 err_llb:
-	if (drm_mm_node_allocated(&fbc->compressed_llb))
-		i915_gem_stolen_remove_node(i915, &fbc->compressed_llb);
+	i915_gem_stolen_release_range(i915, fetch_and_zero(&fbc->compressed_llb));
 err:
-	if (drm_mm_initialized(&i915->mm.stolen))
+	if (IS_ERR(res) && (PTR_ERR(res) == -ENOMEM || PTR_ERR(res) == -ENXIO))
		drm_info_once(&i915->drm,
			      "not enough stolen space for compressed buffer (need %d more bytes), disabling. Hint: you may be able to increase stolen memory size in the BIOS to avoid this.\n",
			      size);
	return -ENOSPC;
 }
@@ -826,10 +839,10 @@ static void __intel_fbc_cleanup_cfb(struct intel_fbc *fbc)
 	if (WARN_ON(intel_fbc_hw_is_active(fbc)))
 		return;
 
-	if (drm_mm_node_allocated(&fbc->compressed_llb))
-		i915_gem_stolen_remove_node(i915, &fbc->compressed_llb);
-	if (drm_mm_node_allocated(&fbc->compressed_fb))
-		i915_gem_stolen_remove_node(i915, &fbc->compressed_fb);
+	if (fbc->compressed_llb)
+		i915_gem_stolen_release_range(i915, fetch_and_zero(&fbc->compressed_llb));
+	if (fbc->compressed_fb)
+		i915_gem_stolen_release_range(i915, fetch_and_zero(&fbc->compressed_fb));
 }
 
 void intel_fbc_cleanup(struct drm_i915_private *i915)
@@ -1034,9 +1047,10 @@ static bool intel_fbc_is_cfb_ok(const struct intel_plane_state *plane_state)
 {
 	struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);
 	struct intel_fbc *fbc = plane->fbc;
+	u64 fb_size = i915_gem_stolen_reserve_size(fbc->compressed_fb);
 
 	return intel_fbc_min_limit(plane_state) <= fbc->limit &&
-		intel_fbc_cfb_size(plane_state) <= fbc->compressed_fb.size * fbc->limit;
+		intel_fbc_cfb_size(plane_state) <= fb_size * fbc->limit;
 }
 
 static bool intel_fbc_is_ok(const struct intel_plane_state *plane_state)
@@ -1702,7 +1716,7 @@ void intel_fbc_init(struct drm_i915_private *i915)
 {
 	enum intel_fbc_id fbc_id;
 
-	if (!drm_mm_initialized(&i915->mm.stolen))
+	if (!i915->mm.stolen_region)
 		mkwrite_device_info(i915)->display.fbc_mask = 0;
 
 	if (need_fbc_vtd_wa(i915))
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 07bc11247a3e..13466cca7af8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -633,8 +633,6 @@ struct drm_i915_gem_object {
 	} userptr;
 #endif
 
-	struct drm_mm_node *stolen;
-
 	resource_size_t bo_offset;
 
 	unsigned long scratch;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
index 47b5e0e342ab..e7fdccf628d8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
@@ -4,75 +4,111 @@
  * Copyright © 2008-2012 Intel Corporation
  */
 
+#include "drm/ttm/ttm_placement.h"
+#include "gem/i915_gem_object_types.h"
 #include <linux/errno.h>
 #include <linux/mutex.h>
 
-#include <drm/drm_mm.h>
 #include <drm/i915_drm.h>
 
 #include "gem/i915_gem_lmem.h"
 #include "gem/i915_gem_region.h"
 #include "gt/intel_gt.h"
 #include "gt/intel_region_lmem.h"
+#include "gem/i915_gem_ttm.h"
 #include "i915_drv.h"
 #include "i915_gem_stolen.h"
 #include "i915_reg.h"
 #include "i915_utils.h"
 #include "i915_vgpu.h"
 #include "intel_mchbar_regs.h"
+#include "intel_region_ttm.h"
 
-/*
- * The BIOS typically reserves some of the system's memory for the exclusive
- * use of the integrated graphics. This memory is no longer available for
- * use by the OS and so the user finds that his system has less memory
- * available than he put in. We refer to this memory as stolen.
- *
- * The BIOS will allocate its framebuffer from the stolen memory. Our
- * goal is try to reuse that object for our own fbcon which must always
- * be available for panics. Anything else we can reuse the stolen memory
- * for is a boon.
- */
+struct stolen_region {
+	struct intel_memory_region region;
+	struct ttm_resource *wa_resvd;
+};
 
-int i915_gem_stolen_insert_node_in_range(struct drm_i915_private *i915,
-					 struct drm_mm_node *node, u64 size,
-					 unsigned alignment, u64 start, u64 end)
+inline struct stolen_region *to_stolen_region(struct intel_memory_region *mem)
 {
-	int ret;
-
-	if (!drm_mm_initialized(&i915->mm.stolen))
-		return -ENODEV;
+	return container_of(mem, struct stolen_region, region);
+}
 
-	/* WaSkipStolenMemoryFirstPage:bdw+ */
-	if (GRAPHICS_VER(i915) >= 8 && start < 4096)
-		start = 4096;
+/**
+ * i915_gem_stolen_reserve_range - reserve a region of space in a given range
+ * @i915: i915 device instance
+ * @size: size of region to reserve
+ * @start: start of search area
+ * @end: end of search area
+ *
+ * Search for @size amount of free space within the region delimited by
+ * @start and @end. If found, reserve it from future use until a later
+ * release with @i915_gem_stolen_release_range.
+ *
+ * Return: pointer to resource tracking structure on success, ERR_PTR otherwise
+ */
+struct ttm_resource *
+i915_gem_stolen_reserve_range(struct drm_i915_private *i915,
+			      resource_size_t size,
+			      u64 start, u64 end)
+{
+	struct intel_memory_region *mem = i915->mm.stolen_region;
 
-	mutex_lock(&i915->mm.stolen_lock);
-	ret = drm_mm_insert_node_in_range(&i915->mm.stolen, node,
-					  size, alignment, 0,
-					  start, end, DRM_MM_INSERT_BEST);
-	mutex_unlock(&i915->mm.stolen_lock);
-
-	return ret;
+	if (!mem)
+		return ERR_PTR(-ENODEV);
+	return intel_region_ttm_resource_alloc(mem, size, start, end, I915_BO_ALLOC_CONTIGUOUS);
 }
 
-int i915_gem_stolen_insert_node(struct drm_i915_private *i915,
-				struct drm_mm_node *node, u64 size,
-				unsigned alignment)
+/**
+ * i915_gem_stolen_reserve_offset - return the offset of the reserved space
+ * @res: pointer to resource tracking structure to check
+ *
+ * Return: The offset of the reserved resource, or I915_BO_INVALID_OFFSET on error
+ */
+u64 i915_gem_stolen_reserve_offset(struct ttm_resource *res)
 {
-	return i915_gem_stolen_insert_node_in_range(i915, node,
-						    size, alignment,
-						    I915_GEM_STOLEN_BIAS,
-						    U64_MAX);
+	if (!res)
+		return I915_BO_INVALID_OFFSET;
+	return PFN_PHYS(res->start);
 }
 
-void i915_gem_stolen_remove_node(struct drm_i915_private *i915,
-				 struct drm_mm_node *node)
+/**
+ * i915_gem_stolen_reserve_size - return the size of the reserved space
+ * @res: pointer to resource tracking structure to check
+ *
+ * Return: The size of the reserved resource, or I915_BO_INVALID_OFFSET on error
+ */
+u64 i915_gem_stolen_reserve_size(struct ttm_resource *res)
 {
-	mutex_lock(&i915->mm.stolen_lock);
-	drm_mm_remove_node(node);
-	mutex_unlock(&i915->mm.stolen_lock);
+	if (!res)
+		return I915_BO_INVALID_OFFSET;
+	return PFN_PHYS(res->num_pages);
 }
 
+/**
+ * i915_gem_stolen_release_range - release the reserved area to be free for allocation again
+ * @i915: i915 device instance
+ * @res: pointer to resource tracking structure allocated via @i915_gem_stolen_reserve_range
+ */
+void i915_gem_stolen_release_range(struct drm_i915_private *i915,
+				   struct ttm_resource *res)
+{
+	struct intel_memory_region *mem = i915->mm.stolen_region;
+
+	intel_region_ttm_resource_free(mem, res);
+}
+
+/*
+ * The BIOS typically reserves some of the system's memory for the exclusive
+ * use of the integrated graphics. This memory is no longer available for
+ * use by the OS and so the user finds that his system has less memory
+ * available than he put in. We refer to this memory as stolen.
+/*
+ * The BIOS typically reserves some of the system's memory for the exclusive
+ * use of the integrated graphics. This memory is no longer available for
+ * use by the OS and so the user finds that his system has less memory
+ * available than he put in. We refer to this memory as stolen.
+ *
+ * The BIOS will allocate its framebuffer from the stolen memory. Our
+ * goal is to try to reuse that object for our own fbcon which must always
+ * be available for panics. Anything else we can reuse the stolen memory
+ * for is a boon.
+ */
+
 static int i915_adjust_stolen(struct drm_i915_private *i915,
 			      struct resource *dsm)
 {
@@ -173,14 +209,6 @@ static int i915_adjust_stolen(struct drm_i915_private *i915,
 	return 0;
 }
 
-static void i915_gem_cleanup_stolen(struct drm_i915_private *i915)
-{
-	if (!drm_mm_initialized(&i915->mm.stolen))
-		return;
-
-	drm_mm_takedown(&i915->mm.stolen);
-}
-
 static void g4x_get_stolen_reserved(struct drm_i915_private *i915,
 				    struct intel_uncore *uncore,
 				    resource_size_t *base,
@@ -394,8 +422,7 @@ static int i915_gem_init_stolen(struct intel_memory_region *mem)
 	struct intel_uncore *uncore = &i915->uncore;
 	resource_size_t reserved_base, stolen_top;
 	resource_size_t reserved_total, reserved_size;
-
-	mutex_init(&i915->mm.stolen_lock);
+	int ret;
 
 	if (intel_vgpu_active(i915)) {
 		drm_notice(&i915->drm,
@@ -513,258 +540,100 @@ static int i915_gem_init_stolen(struct intel_memory_region *mem)
 		return 0;
 
 	/* Basic memrange allocator for stolen space. */
-	drm_mm_init(&i915->mm.stolen, 0, i915->stolen_usable_size);
-
-	return 0;
-}
-
-static void dbg_poison(struct i915_ggtt *ggtt,
-		       dma_addr_t addr, resource_size_t size,
-		       u8 x)
-{
-#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)
-	if (!drm_mm_node_allocated(&ggtt->error_capture))
-		return;
-
-	if (ggtt->vm.bind_async_flags & I915_VMA_GLOBAL_BIND)
-		return; /* beware stop_machine() inversion */
-
-	GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE));
-
-	mutex_lock(&ggtt->error_mutex);
-	while (size) {
-		void __iomem *s;
-
-		ggtt->vm.insert_page(&ggtt->vm, addr,
-				     ggtt->error_capture.start,
-				     I915_CACHE_NONE, 0);
-		mb();
-
-		s = io_mapping_map_wc(&ggtt->iomap,
-				      ggtt->error_capture.start,
-				      PAGE_SIZE);
-		memset_io(s, x, PAGE_SIZE);
-		io_mapping_unmap(s);
-
-		addr += PAGE_SIZE;
-		size -= PAGE_SIZE;
-	}
-	mb();
-	ggtt->vm.clear_range(&ggtt->vm, ggtt->error_capture.start, PAGE_SIZE);
-	mutex_unlock(&ggtt->error_mutex);
-#endif
-}
-
-static struct sg_table *
-i915_pages_create_for_stolen(struct drm_device *dev,
-			     resource_size_t offset, resource_size_t size)
-{
-	struct drm_i915_private *i915 = to_i915(dev);
-	struct sg_table *st;
-	struct scatterlist *sg;
-
-	GEM_BUG_ON(range_overflows(offset, size, resource_size(&i915->dsm)));
-
-	/* We hide that we have no struct page backing our stolen object
-	 * by wrapping the contiguous physical allocation with a fake
-	 * dma mapping in a single scatterlist.
-	 */
+	ret = intel_region_ttm_init(mem);
+	if (ret)
+		return ret;
 
-	st = kmalloc(sizeof(*st), GFP_KERNEL);
-	if (st == NULL)
-		return ERR_PTR(-ENOMEM);
+	/* WaSkipStolenMemoryFirstPage:bdw+ */
+	if (GRAPHICS_VER(i915) >= 8) {
+		struct stolen_region *region = to_stolen_region(mem);
+		struct ttm_resource *resvd;
+
+		resvd = intel_region_ttm_resource_alloc(mem, 4096, 0, 4096, 0);
+		if (IS_ERR(resvd)) {
+			ret = PTR_ERR(resvd);
+			goto err_no_reserve;
+		}
 
-	if (sg_alloc_table(st, 1, GFP_KERNEL)) {
-		kfree(st);
-		return ERR_PTR(-ENOMEM);
+		region->wa_resvd = resvd;
 	}
 
-	sg = st->sgl;
-	sg->offset = 0;
-	sg->length = size;
+	return ret;
 
-	sg_dma_address(sg) = (dma_addr_t)i915->dsm.start + offset;
-	sg_dma_len(sg) = size;
+err_no_reserve:
+	intel_region_ttm_fini(mem);
 
-	return st;
+	return ret;
 }
 
-static int i915_gem_object_get_pages_stolen(struct drm_i915_gem_object *obj)
+struct drm_i915_gem_object *
+i915_gem_object_create_stolen(struct drm_i915_private *i915,
+			      resource_size_t size)
 {
-	struct drm_i915_private *i915 = to_i915(obj->base.dev);
-	struct sg_table *pages =
-		i915_pages_create_for_stolen(obj->base.dev,
-					     obj->stolen->start,
-					     obj->stolen->size);
-	if (IS_ERR(pages))
-		return PTR_ERR(pages);
-
-	dbg_poison(to_gt(i915)->ggtt,
-		   sg_dma_address(pages->sgl),
-		   sg_dma_len(pages->sgl),
-		   POISON_INUSE);
-
-	__i915_gem_object_set_pages(obj, pages, obj->stolen->size);
-
-	return 0;
+	return i915_gem_object_create_region(i915->mm.stolen_region, size, 0,
+					     I915_BO_ALLOC_CONTIGUOUS | I915_BO_ALLOC_PINNED);
 }
 
-static void i915_gem_object_put_pages_stolen(struct drm_i915_gem_object *obj,
-					     struct sg_table *pages)
+static struct intel_memory_region *alloc_stolen_region(void)
 {
-	struct drm_i915_private *i915 = to_i915(obj->base.dev);
-	/* Should only be called from i915_gem_object_release_stolen() */
+	struct stolen_region *region = kzalloc(sizeof(*region), GFP_KERNEL);
 
-	dbg_poison(to_gt(i915)->ggtt,
-		   sg_dma_address(pages->sgl),
-		   sg_dma_len(pages->sgl),
-		   POISON_FREE);
+	if (!region)
+		return NULL;
 
-	sg_free_table(pages);
-	kfree(pages);
+	return &region->region;
 }
 
-static void
-i915_gem_object_release_stolen(struct drm_i915_gem_object *obj)
+static void free_stolen_region(struct intel_memory_region *mem)
 {
-	struct drm_i915_private *i915 = to_i915(obj->base.dev);
-	struct drm_mm_node *stolen = fetch_and_zero(&obj->stolen);
-
-	GEM_BUG_ON(!stolen);
-	i915_gem_stolen_remove_node(i915, stolen);
-	kfree(stolen);
+	struct stolen_region *region = to_stolen_region(mem);
 
-	i915_gem_object_release_memory_region(obj);
+	kfree(region);
 }
 
-static const struct drm_i915_gem_object_ops i915_gem_object_stolen_ops = {
-	.name = "i915_gem_object_stolen",
-	.get_pages = i915_gem_object_get_pages_stolen,
-	.put_pages = i915_gem_object_put_pages_stolen,
-	.release = i915_gem_object_release_stolen,
-};
-
-static int __i915_gem_object_create_stolen(struct intel_memory_region *mem,
-					   struct drm_i915_gem_object *obj,
-					   struct drm_mm_node *stolen)
+static int init_stolen_smem(struct intel_memory_region *mem)
 {
-	static struct lock_class_key lock_class;
-	unsigned int cache_level;
-	unsigned int flags;
-	int err;
-
 	/*
-	 * Stolen objects are always physically contiguous since we just
-	 * allocate one big block underneath using the drm_mm range allocator.
+	 * Initialise stolen early so that we may reserve preallocated
+	 * objects for the BIOS to KMS transition.
 	 */
-	flags = I915_BO_ALLOC_CONTIGUOUS;
-
-	drm_gem_private_object_init(&mem->i915->drm, &obj->base, stolen->size);
-	i915_gem_object_init(obj, &i915_gem_object_stolen_ops, &lock_class, flags);
-
-	obj->stolen = stolen;
-	obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT;
-	cache_level = HAS_LLC(mem->i915) ? I915_CACHE_LLC : I915_CACHE_NONE;
-	i915_gem_object_set_cache_coherency(obj, cache_level);
-
-	if (WARN_ON(!i915_gem_object_trylock(obj, NULL)))
-		return -EBUSY;
+	return i915_gem_init_stolen(mem);
+}
 
-	i915_gem_object_init_memory_region(obj, mem);
+static int release_stolen(struct intel_memory_region *mem)
+{
+	struct stolen_region *region = to_stolen_region(mem);
 
-	err = i915_gem_object_pin_pages(obj);
-	if (err)
-		i915_gem_object_release_memory_region(obj);
-	i915_gem_object_unlock(obj);
+	if (region->wa_resvd)
+		intel_region_ttm_resource_free(mem, region->wa_resvd);
 
-	return err;
+	return intel_region_ttm_fini(mem);
 }
 
-static int _i915_gem_object_stolen_init(struct intel_memory_region *mem,
-					struct drm_i915_gem_object *obj,
-					resource_size_t offset,
-					resource_size_t size,
-					resource_size_t page_size,
-					unsigned int flags)
+static int stolen_object_init(struct intel_memory_region *mem,
+			      struct drm_i915_gem_object *obj,
+			      resource_size_t offset,
+			      resource_size_t size,
+			      resource_size_t page_size,
+			      unsigned int flags)
 {
-	struct drm_i915_private *i915 = mem->i915;
-	struct drm_mm_node *stolen;
-	int ret;
-
-	if (!drm_mm_initialized(&i915->mm.stolen))
+	if (!mem->region_private)
 		return -ENODEV;
 
 	if (size == 0)
 		return -EINVAL;
 
-	/*
-	 * With discrete devices, where we lack a mappable aperture there is no
-	 * possible way to ever access this memory on the CPU side.
-	 */
-	if (mem->type == INTEL_MEMORY_STOLEN_LOCAL && !mem->io_size &&
-	    !(flags & I915_BO_ALLOC_GPU_ONLY))
-		return -ENOSPC;
-
-	stolen = kzalloc(sizeof(*stolen), GFP_KERNEL);
-	if (!stolen)
-		return -ENOMEM;
-
-	if (offset != I915_BO_INVALID_OFFSET) {
-		drm_dbg(&i915->drm,
-			"creating preallocated stolen object: stolen_offset=%pa, size=%pa\n",
-			&offset, &size);
-
-		stolen->start = offset;
-		stolen->size = size;
-		mutex_lock(&i915->mm.stolen_lock);
-		ret = drm_mm_reserve_node(&i915->mm.stolen, stolen);
-		mutex_unlock(&i915->mm.stolen_lock);
-	} else {
-		ret = i915_gem_stolen_insert_node(i915, stolen, size,
-						  mem->min_page_size);
-	}
-	if (ret)
-		goto err_free;
-
-	ret = __i915_gem_object_create_stolen(mem, obj, stolen);
-	if (ret)
-		goto err_remove;
-
-	return 0;
-
-err_remove:
-	i915_gem_stolen_remove_node(i915, stolen);
-err_free:
-	kfree(stolen);
-	return ret;
-}
-
-struct drm_i915_gem_object *
-i915_gem_object_create_stolen(struct drm_i915_private *i915,
-			      resource_size_t size)
-{
-	return i915_gem_object_create_region(i915->mm.stolen_region, size, 0, 0);
-}
-
-static int init_stolen_smem(struct intel_memory_region *mem)
-{
-	/*
-	 * Initialise stolen early so that we may reserve preallocated
-	 * objects for the BIOS to KMS transition.
-	 */
-	return i915_gem_init_stolen(mem);
-}
-
-static int release_stolen_smem(struct intel_memory_region *mem)
-{
-	i915_gem_cleanup_stolen(mem->i915);
-	return 0;
+	/* default range manager relies on page_size for alignment */
+	page_size = max(page_size, mem->min_page_size);
+	return __i915_gem_ttm_object_init(mem, obj, offset, size, page_size, flags);
 }
 
 static const struct intel_memory_region_ops i915_region_stolen_smem_ops = {
 	.init = init_stolen_smem,
-	.release = release_stolen_smem,
-	.init_object = _i915_gem_object_stolen_init,
+	.release = release_stolen,
+	.init_object = stolen_object_init,
+	.alloc = alloc_stolen_region,
+	.free = free_stolen_region,
 };
 
 static int init_stolen_lmem(struct intel_memory_region *mem)
@@ -793,7 +662,7 @@
 	return 0;
 
 err_cleanup:
-	i915_gem_cleanup_stolen(mem->i915);
+	intel_region_ttm_fini(mem);
 	return err;
 }
 
@@ -801,14 +670,15 @@ static int release_stolen_lmem(struct intel_memory_region *mem)
 {
 	if (mem->io_size)
 		io_mapping_fini(&mem->iomap);
-	i915_gem_cleanup_stolen(mem->i915);
-	return 0;
+	return release_stolen(mem);
 }
 
 static const struct intel_memory_region_ops i915_region_stolen_lmem_ops = {
 	.init = init_stolen_lmem,
 	.release = release_stolen_lmem,
-	.init_object = _i915_gem_object_stolen_init,
+	.init_object = stolen_object_init,
+	.alloc = alloc_stolen_region,
+	.free = free_stolen_region,
 };
 
 struct intel_memory_region *
@@ -896,7 +766,37 @@ i915_gem_stolen_smem_setup(struct drm_i915_private *i915, u16 type,
 	return mem;
 }
 
-bool i915_gem_object_is_stolen(const struct drm_i915_gem_object *obj)
+/**
+ * i915_gem_object_stolen_offset - return offset from start of stolen region
+ * @obj: the object to return the offset of
+ *
+ * Get the offset from the stolen region if this object is currently placed in stolen memory.
+ *
+ * Return: offset from the start of the stolen region if successful, I915_BO_INVALID_OFFSET otherwise
+ */
+u64 i915_gem_object_stolen_offset(struct drm_i915_gem_object *obj)
 {
-	return obj->ops == &i915_gem_object_stolen_ops;
+	struct ttm_buffer_object *ttm_obj;
+
+	if (!obj || !i915_gem_object_is_stolen(obj))
+		return I915_BO_INVALID_OFFSET;
+
+	ttm_obj = i915_gem_to_ttm(obj);
+	if (ttm_obj->resource->mem_type != I915_PL_STOLEN)
+		return I915_BO_INVALID_OFFSET;
+
+	return PFN_PHYS(ttm_obj->resource->start);
+}
+
+bool i915_gem_object_is_stolen(struct drm_i915_gem_object *obj)
+{
+	struct intel_memory_region *mr = READ_ONCE(obj->mm.region);
+
+#ifdef CONFIG_LOCKDEP
+	if (i915_gem_object_migratable(obj) &&
+	    i915_gem_object_evictable(obj))
+		assert_object_held(obj);
+#endif
+	return mr && (mr->type == INTEL_MEMORY_STOLEN_SYSTEM ||
+		      mr->type == INTEL_MEMORY_STOLEN_LOCAL);
 }
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.h b/drivers/gpu/drm/i915/gem/i915_gem_stolen.h
index d5005a39d130..b39cb6e6c768 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.h
@@ -10,17 +10,9 @@
 struct drm_i915_private;
 struct drm_mm_node;
+struct ttm_resource;
 struct drm_i915_gem_object;
 
-int i915_gem_stolen_insert_node(struct drm_i915_private *dev_priv,
-				struct drm_mm_node *node, u64 size,
-				unsigned alignment);
-int i915_gem_stolen_insert_node_in_range(struct drm_i915_private *dev_priv,
-					 struct drm_mm_node *node, u64 size,
-					 unsigned alignment, u64 start,
-					 u64 end);
-void i915_gem_stolen_remove_node(struct drm_i915_private *dev_priv,
-				 struct drm_mm_node *node);
 struct intel_memory_region *
 i915_gem_stolen_smem_setup(struct drm_i915_private *i915, u16 type,
 			   u16 instance);
@@ -32,7 +24,16 @@ struct drm_i915_gem_object *
 i915_gem_object_create_stolen(struct drm_i915_private *dev_priv,
 			      resource_size_t size);
 
-bool i915_gem_object_is_stolen(const struct drm_i915_gem_object *obj);
+u64 i915_gem_object_stolen_offset(struct drm_i915_gem_object *obj);
+bool i915_gem_object_is_stolen(struct drm_i915_gem_object *obj);
+struct ttm_resource *
+i915_gem_stolen_reserve_range(struct drm_i915_private *i915,
+			      resource_size_t size,
+			      u64 start, u64 end);
+u64 i915_gem_stolen_reserve_offset(struct ttm_resource *res);
+u64 i915_gem_stolen_reserve_size(struct ttm_resource *res);
+void i915_gem_stolen_release_range(struct drm_i915_private *i915,
+				   struct ttm_resource *res);
 
 #define I915_GEM_STOLEN_BIAS SZ_128K
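
Callers that used to peek at obj->stolen->start now go through the helper
above. Condensed from the intel_rc6.c hunk further down, the calling pattern
looks like this (the -ENXIO fallback is illustrative, not from the patch):

	u64 offset = i915_gem_object_stolen_offset(pctx);

	if (offset == I915_BO_INVALID_OFFSET)
		return -ENXIO;	/* illustrative error path */

	pctx_paddr = i915->dsm.start + offset;
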
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index bb988608296d..c7611efddcf7 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -1203,6 +1203,7 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 	struct ttm_place _place;
 	struct ttm_placement _placement;
 	struct ttm_placement *placement;
+	bool is_stolen = intel_region_to_ttm_type(mem) == I915_PL_STOLEN;
 	int ret;
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
@@ -1222,7 +1223,7 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 	obj->base.vma_node.driver_private = i915_gem_to_ttm(obj);
 
 	/* Forcing the page size is kernel internal only */
-	GEM_BUG_ON(page_size && obj->mm.n_placements);
+	GEM_BUG_ON(page_size && obj->mm.n_placements && !is_stolen);
 
 	/*
 	 * Keep an extra shrink pin to prevent the object from being made
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
index 73e371aa3850..81654a51df51 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
@@ -49,6 +49,13 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 			       resource_size_t size,
 			       resource_size_t page_size,
 			       unsigned int flags);
+int i915_gem_ttm_object_init_in_place(struct intel_memory_region *mem,
+				      struct drm_i915_gem_object *obj,
+				      resource_size_t size,
+				      resource_size_t page_size,
+				      unsigned int flags,
+				      u64 start,
+				      u64 end);
 
 /* Internal I915 TTM declarations and definitions below. */
 
diff --git a/drivers/gpu/drm/i915/gt/intel_rc6.c b/drivers/gpu/drm/i915/gt/intel_rc6.c
index f8d0523f4c18..846eef898f7e 100644
--- a/drivers/gpu/drm/i915/gt/intel_rc6.c
+++ b/drivers/gpu/drm/i915/gt/intel_rc6.c
@@ -355,9 +355,9 @@ static int vlv_rc6_init(struct intel_rc6 *rc6)
 
 	GEM_BUG_ON(range_overflows_end_t(u64,
 					 i915->dsm.start,
-					 pctx->stolen->start,
+					 i915_gem_object_stolen_offset(pctx),
 					 U32_MAX));
-	pctx_paddr = i915->dsm.start + pctx->stolen->start;
+	pctx_paddr = i915->dsm.start + i915_gem_object_stolen_offset(pctx);
 	intel_uncore_write(uncore, VLV_PCBR, pctx_paddr);
 
 out:
diff --git a/drivers/gpu/drm/i915/gt/selftest_reset.c b/drivers/gpu/drm/i915/gt/selftest_reset.c
index 37c38bdd5f47..75bc7d90c9dc 100644
--- a/drivers/gpu/drm/i915/gt/selftest_reset.c
+++ b/drivers/gpu/drm/i915/gt/selftest_reset.c
@@ -6,6 +6,7 @@
 #include
 
 #include "gem/i915_gem_stolen.h"
+#include "intel_region_ttm.h"
 
 #include "i915_memcpy.h"
 #include "i915_selftest.h"
@@ -83,6 +84,7 @@ __igt_reset_stolen(struct intel_gt *gt,
 		dma_addr_t dma = (dma_addr_t)dsm->start + (page << PAGE_SHIFT);
 		void __iomem *s;
 		void *in;
+		bool busy;
 
 		ggtt->vm.insert_page(&ggtt->vm, dma,
 				     ggtt->error_capture.start,
@@ -93,9 +95,9 @@ __igt_reset_stolen(struct intel_gt *gt,
 				      ggtt->error_capture.start,
 				      PAGE_SIZE);
 
-		if (!__drm_mm_interval_first(&gt->i915->mm.stolen,
-					     page << PAGE_SHIFT,
-					     ((page + 1) << PAGE_SHIFT) - 1))
+		busy = intel_region_ttm_range_busy(gt->i915->mm.stolen_region,
+						   PFN_PHYS(page), PAGE_SIZE);
+		if (!busy)
 			memset_io(s, STACK_MAGIC, PAGE_SIZE);
 
 		in = (void __force *)s;
@@ -124,6 +126,7 @@ __igt_reset_stolen(struct intel_gt *gt,
 		void __iomem *s;
 		void *in;
 		u32 x;
+		bool busy;
 
 		ggtt->vm.insert_page(&ggtt->vm, dma,
 				     ggtt->error_capture.start,
@@ -139,10 +142,9 @@ __igt_reset_stolen(struct intel_gt *gt,
 		in = tmp;
 		x = crc32_le(0, in, PAGE_SIZE);
 
-		if (x != crc[page] &&
-		    !__drm_mm_interval_first(&gt->i915->mm.stolen,
-					     page << PAGE_SHIFT,
-					     ((page + 1) << PAGE_SHIFT) - 1)) {
+		busy = intel_region_ttm_range_busy(gt->i915->mm.stolen_region,
+						   PFN_PHYS(page), PAGE_SIZE);
+		if (x != crc[page] && !busy) {
 			pr_debug("unused stolen page %pa modified by GPU reset\n",
 				 &page);
 			if (count++ == 0)
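
TTM's range manager has no counterpart to __drm_mm_interval_first(), so the
selftest now probes occupancy by attempting a throwaway contiguous allocation
of the exact range (intel_region_ttm_range_busy(), added later in this patch).
Note the probe is conservative: any allocation failure, even one unrelated to
an overlap, reads back as busy. Sketch of the resulting check:

	/* a failed probe of [start, start + size) counts as "in use" */
	busy = intel_region_ttm_range_busy(gt->i915->mm.stolen_region,
					   PFN_PHYS(page), PAGE_SIZE);
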
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 94e5c29d2ee3..e538b8f71dcc 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -32,6 +32,7 @@
 
 #include
 
+#include "gem/i915_gem_region.h"
 #include "gem/i915_gem_context.h"
 #include "gt/intel_gt.h"
 #include "gt/intel_gt_buffer_pool.h"
@@ -157,6 +158,7 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 	struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
 	struct i915_vma *vma;
 	int pin_count = 0;
+	u64 offset;
 
 	seq_printf(m, "%pK: %c%c%c %8zdKiB %02x %02x %s%s%s",
 		   &obj->base,
@@ -241,8 +243,9 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
 	spin_unlock(&obj->vma.lock);
 	seq_printf(m, " (pinned x %d)", pin_count);
 
-	if (i915_gem_object_is_stolen(obj))
-		seq_printf(m, " (stolen: %08llx)", obj->stolen->start);
+	offset = i915_gem_object_stolen_offset(obj);
+	if (offset != I915_BO_INVALID_OFFSET)
+		seq_printf(m, " (stolen: %08llx)", offset);
 	if (i915_gem_object_is_framebuffer(obj))
 		seq_printf(m, " (fb)");
 }
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index eba94fa76b18..1ebb266e7b63 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -224,11 +224,6 @@ struct i915_gem_mm {
 	 * support stolen.
 	 */
 	struct intel_memory_region *stolen_region;
-	/** Memory allocator for GTT stolen memory */
-	struct drm_mm stolen;
-	/** Protects the usage of the GTT stolen memory allocator. This is
-	 * always the inner lock when overlapping with struct_mutex. */
-	struct mutex stolen_lock;
 
 	/* Protects bound_list/unbound_list and #drm_i915_gem_object.mm.link */
 	spinlock_t obj_lock;
diff --git a/drivers/gpu/drm/i915/intel_region_ttm.c b/drivers/gpu/drm/i915/intel_region_ttm.c
index 694e9acb69e2..b3850188981b 100644
--- a/drivers/gpu/drm/i915/intel_region_ttm.c
+++ b/drivers/gpu/drm/i915/intel_region_ttm.c
@@ -194,11 +194,12 @@ intel_region_ttm_resource_to_rsgt(struct intel_memory_region *mem,
 	}
 }
 
-#ifdef CONFIG_DRM_I915_SELFTEST
 /**
  * intel_region_ttm_resource_alloc - Allocate memory resources from a region
  * @mem: The memory region,
  * @size: The requested size in bytes
+ * @start: start of allowed range
+ * @end: end of allowed range
  * @flags: Allocation flags
  *
  * This functionality is provided only for callers that need to allocate
@@ -212,8 +213,9 @@ intel_region_ttm_resource_to_rsgt(struct intel_memory_region *mem,
 */
 struct ttm_resource *
 intel_region_ttm_resource_alloc(struct intel_memory_region *mem,
-				resource_size_t offset,
 				resource_size_t size,
+				u64 start,
+				u64 end,
 				unsigned int flags)
 {
 	struct ttm_resource_manager *man = mem->region_private;
@@ -222,11 +224,14 @@ intel_region_ttm_resource_alloc(struct intel_memory_region *mem,
 	struct ttm_resource *res;
 	int ret;
 
+	if (!man)
+		return ERR_PTR(-ENODEV);
+
 	if (flags & I915_BO_ALLOC_CONTIGUOUS)
 		place.flags |= TTM_PL_FLAG_CONTIGUOUS;
-	if (offset != I915_BO_INVALID_OFFSET) {
-		place.fpfn = offset >> PAGE_SHIFT;
-		place.lpfn = place.fpfn + (size >> PAGE_SHIFT);
+	if (start || end) {
+		place.fpfn = PFN_DOWN(start);
+		place.lpfn = PFN_UP(end);
 	} else if (mem->io_size && mem->io_size < mem->total) {
 		if (flags & I915_BO_ALLOC_GPU_ONLY) {
 			place.flags |= TTM_PL_FLAG_TOPDOWN;
@@ -247,8 +252,6 @@ intel_region_ttm_resource_alloc(struct intel_memory_region *mem,
 	return ret ? ERR_PTR(ret) : res;
 }
 
-#endif
-
 /**
  * intel_region_ttm_resource_free - Free a resource allocated from a resource manager
  * @mem: The region the resource was allocated from.
@@ -260,9 +263,34 @@ void intel_region_ttm_resource_free(struct intel_memory_region *mem,
 	struct ttm_resource_manager *man = mem->region_private;
 	struct ttm_buffer_object mock_bo = {};
 
+	if (!man)
+		return;
+
 	mock_bo.base.size = res->num_pages << PAGE_SHIFT;
 	mock_bo.bdev = &mem->i915->bdev;
 	res->bo = &mock_bo;
 
 	man->func->free(man, res);
 }
+
+/**
+ * intel_region_ttm_range_busy - check whether range has any allocations
+ * @mem: The region to check
+ * @start: the start of the range to check
+ * @size: size of the range to check
+ *
+ * Return: true if something is allocated within the range, false otherwise.
+ */
+bool intel_region_ttm_range_busy(struct intel_memory_region *mem,
+				 u64 start, u64 size)
+{
+	struct ttm_resource *dummy;
+
+	dummy = intel_region_ttm_resource_alloc(mem, size, start, start + size,
+						I915_BO_ALLOC_CONTIGUOUS);
+	if (IS_ERR(dummy))
+		return true;
+
+	intel_region_ttm_resource_free(mem, dummy);
+	return false;
+}
diff --git a/drivers/gpu/drm/i915/intel_region_ttm.h b/drivers/gpu/drm/i915/intel_region_ttm.h
index cf9d86dcf409..1e88472fb2ea 100644
--- a/drivers/gpu/drm/i915/intel_region_ttm.h
+++ b/drivers/gpu/drm/i915/intel_region_ttm.h
@@ -29,15 +29,17 @@ intel_region_ttm_resource_to_rsgt(struct intel_memory_region *mem,
 void intel_region_ttm_resource_free(struct intel_memory_region *mem,
 				    struct ttm_resource *res);
 
+bool intel_region_ttm_range_busy(struct intel_memory_region *mem,
+				 u64 start, u64 size);
+
 int intel_region_to_ttm_type(const struct intel_memory_region *mem);
 
 struct ttm_device_funcs *i915_ttm_driver(void);
 
-#ifdef CONFIG_DRM_I915_SELFTEST
 struct ttm_resource *
 intel_region_ttm_resource_alloc(struct intel_memory_region *mem,
-				resource_size_t offset,
 				resource_size_t size,
+				u64 start,
+				u64 end,
 				unsigned int flags);
-#endif
 
 #endif /* _INTEL_REGION_TTM_H_ */
diff --git a/drivers/gpu/drm/i915/selftests/mock_region.c b/drivers/gpu/drm/i915/selftests/mock_region.c
index 670557ce1024..15be2440cdb8 100644
--- a/drivers/gpu/drm/i915/selftests/mock_region.c
+++ b/drivers/gpu/drm/i915/selftests/mock_region.c
@@ -26,8 +26,9 @@ static int mock_region_get_pages(struct drm_i915_gem_object *obj)
 	int err;
 
 	obj->mm.res = intel_region_ttm_resource_alloc(obj->mm.region,
-						      obj->bo_offset,
 						      obj->base.size,
+						      obj->bo_offset,
+						      obj->bo_offset + obj->base.size,
 						      obj->flags);
 	if (IS_ERR(obj->mm.res))
 		return PTR_ERR(obj->mm.res);
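
A note on units in the interfaces above: ttm_resource tracks pages
(res->start, res->num_pages) while the stolen helpers and their callers speak
bytes, hence the PFN_* conversions throughout. Worked with 4 KiB pages:

	PFN_DOWN(0x1000) == 1		/* byte start -> place.fpfn */
	PFN_UP(0x1fff)   == 2		/* byte end rounds up -> place.lpfn */
	PFN_PHYS(2)      == 0x2000	/* pages back to a byte offset */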