From patchwork Tue Jun 21 10:44:23 2022
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, Daniel Vetter, Kenneth Graunke, dri-devel@lists.freedesktop.org, mesa-dev@lists.freedesktop.org
Date: Tue, 21 Jun 2022 11:44:23 +0100
Message-Id: <20220621104434.190962-2-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 01/12] drm/doc: add rfc section for small BAR uapi

Add an entry for the new uapi needed for small BAR on DG2+.

v2:
  - Some spelling fixes and other small tweaks. (Akeem & Thomas)
  - Rework error capture interactions, including no longer needing
    NEEDS_CPU_ACCESS for objects marked for capture. (Thomas)
  - Add probed_cpu_visible_size. (Lionel)
v3:
  - Drop the vma query for now.
  - Add unallocated_cpu_visible_size as part of the region query.
  - Improve the docs some more, including documenting the expected
    behaviour on older kernels, since this came up in some offline
    discussion.
v4:
  - Various improvements all over.
    (Tvrtko)
v5:
  - Include newer integrated platforms when applying the non-recoverable
    context and error capture restriction. (Thomas)

Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Cc: Lionel Landwerlin
Cc: Tvrtko Ursulin
Cc: Jon Bloomfield
Cc: Daniel Vetter
Cc: Jordan Justen
Cc: Kenneth Graunke
Cc: Akeem G Abodunrin
Cc: mesa-dev@lists.freedesktop.org
Acked-by: Tvrtko Ursulin
Acked-by: Akeem G Abodunrin
Reviewed-by: Thomas Hellström
Acked-by: Lionel Landwerlin
Acked-by: Jordan Justen
---
 Documentation/gpu/rfc/i915_small_bar.h   | 189 +++++++++++++++++++++++
 Documentation/gpu/rfc/i915_small_bar.rst |  47 ++++++
 Documentation/gpu/rfc/index.rst          |   4 +
 3 files changed, 240 insertions(+)
 create mode 100644 Documentation/gpu/rfc/i915_small_bar.h
 create mode 100644 Documentation/gpu/rfc/i915_small_bar.rst

diff --git a/Documentation/gpu/rfc/i915_small_bar.h b/Documentation/gpu/rfc/i915_small_bar.h
new file mode 100644
index 000000000000..752bb2ceb399
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_small_bar.h
@@ -0,0 +1,189 @@
+/**
+ * struct __drm_i915_memory_region_info - Describes one region as known to the
+ * driver.
+ *
+ * Note this is using both struct drm_i915_query_item and struct drm_i915_query.
+ * For this new query we are adding the new query id DRM_I915_QUERY_MEMORY_REGIONS
+ * at &drm_i915_query_item.query_id.
+ */
+struct __drm_i915_memory_region_info {
+	/** @region: The class:instance pair encoding */
+	struct drm_i915_gem_memory_class_instance region;
+
+	/** @rsvd0: MBZ */
+	__u32 rsvd0;
+
+	/**
+	 * @probed_size: Memory probed by the driver (-1 = unknown)
+	 *
+	 * Note that it should not be possible to ever encounter a zero value
+	 * here, also note that no current region type will ever return -1 here.
+	 * Although for future region types, this might be a possibility. The
+	 * same applies to the other size fields.
+	 */
+	__u64 probed_size;
+
+	/**
+	 * @unallocated_size: Estimate of memory remaining (-1 = unknown)
+	 *
+	 * Requires CAP_PERFMON or CAP_SYS_ADMIN to get reliable accounting.
+	 * Without this (or if this is an older kernel) the value here will
+	 * always equal the @probed_size. Note this is only currently tracked
+	 * for I915_MEMORY_CLASS_DEVICE regions (for other types the value here
+	 * will always equal the @probed_size).
+	 */
+	__u64 unallocated_size;
+
+	union {
+		/** @rsvd1: MBZ */
+		__u64 rsvd1[8];
+		struct {
+			/**
+			 * @probed_cpu_visible_size: Memory probed by the driver
+			 * that is CPU accessible. (-1 = unknown).
+			 *
+			 * This will always be <= @probed_size, and the
+			 * remainder (if there is any) will not be CPU
+			 * accessible.
+			 *
+			 * On systems without small BAR, the @probed_size will
+			 * always equal the @probed_cpu_visible_size, since all
+			 * of it will be CPU accessible.
+			 *
+			 * Note this is only tracked for
+			 * I915_MEMORY_CLASS_DEVICE regions (for other types the
+			 * value here will always equal the @probed_size).
+			 *
+			 * Note that if the value returned here is zero, then
+			 * this must be an old kernel which lacks the relevant
+			 * small-bar uAPI support (including
+			 * I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS), but on
+			 * such systems we should never actually end up with a
+			 * small BAR configuration, assuming we are able to load
+			 * the kernel module. Hence it should be safe to treat
+			 * this the same as when @probed_cpu_visible_size ==
+			 * @probed_size.
+			 */
+			__u64 probed_cpu_visible_size;
+
+			/**
+			 * @unallocated_cpu_visible_size: Estimate of CPU
+			 * visible memory remaining (-1 = unknown).
+			 *
+			 * Note this is only tracked for
+			 * I915_MEMORY_CLASS_DEVICE regions (for other types the
+			 * value here will always equal the
+			 * @probed_cpu_visible_size).
+			 *
+			 * Requires CAP_PERFMON or CAP_SYS_ADMIN to get reliable
+			 * accounting. Without this the value here will always
+			 * equal the @probed_cpu_visible_size. Note this is only
+			 * currently tracked for I915_MEMORY_CLASS_DEVICE
+			 * regions (for other types the value here will also
+			 * always equal the @probed_cpu_visible_size).
+			 *
+			 * If this is an older kernel the value here will be
+			 * zero, see also @probed_cpu_visible_size.
+			 */
+			__u64 unallocated_cpu_visible_size;
+		};
+	};
+};
+
+/**
+ * struct __drm_i915_gem_create_ext - Existing gem_create behaviour, with added
+ * extension support using struct i915_user_extension.
+ *
+ * Note that new buffer flags should be added here, at least for the stuff that
+ * is immutable. Previously we would have two ioctls, one to create the object
+ * with gem_create, and another to apply various parameters, however this
+ * creates some ambiguity for the params which are considered immutable. Also in
+ * general we're phasing out the various SET/GET ioctls.
+ */
+struct __drm_i915_gem_create_ext {
+	/**
+	 * @size: Requested size for the object.
+	 *
+	 * The (page-aligned) allocated size for the object will be returned.
+	 *
+	 * Note that for some devices we might have further minimum
+	 * page-size restrictions (larger than 4K), like for device local-memory.
+	 * However in general the final size here should always reflect any
+	 * rounding up, if for example using the I915_GEM_CREATE_EXT_MEMORY_REGIONS
+	 * extension to place the object in device local-memory. The kernel will
+	 * always select the largest minimum page-size for the set of possible
+	 * placements as the value to use when rounding up the @size.
+	 */
+	__u64 size;
+
+	/**
+	 * @handle: Returned handle for the object.
+	 *
+	 * Object handles are nonzero.
+	 */
+	__u32 handle;
+
+	/**
+	 * @flags: Optional flags.
+	 *
+	 * Supported values:
+	 *
+	 * I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS - Signal to the kernel that
+	 * the object will need to be accessed via the CPU.
+	 *
+	 * Only valid when placing objects in I915_MEMORY_CLASS_DEVICE, and only
+	 * strictly required on configurations where some subset of the device
+	 * memory is directly visible/mappable through the CPU (which we also
+	 * call small BAR), like on some DG2+ systems. Note that this is quite
+	 * undesirable, but due to various factors like the client CPU, BIOS etc
+	 * it's something we can expect to see in the wild. See
+	 * &__drm_i915_memory_region_info.probed_cpu_visible_size for how to
+	 * determine if this system applies.
+	 *
+	 * Note that one of the placements MUST be I915_MEMORY_CLASS_SYSTEM, to
+	 * ensure the kernel can always spill the allocation to system memory,
+	 * if the object can't be allocated in the mappable part of
+	 * I915_MEMORY_CLASS_DEVICE.
+	 *
+	 * Also note that since the kernel only supports flat-CCS on objects
+	 * that can *only* be placed in I915_MEMORY_CLASS_DEVICE, we therefore
+	 * don't support I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS together with
+	 * flat-CCS.
+	 *
+	 * Without this hint, the kernel will assume that non-mappable
+	 * I915_MEMORY_CLASS_DEVICE is preferred for this object. Note that the
+	 * kernel can still migrate the object to the mappable part, as a last
+	 * resort, if userspace ever CPU faults this object, but this might be
+	 * expensive, and so ideally should be avoided.
+	 *
+	 * On older kernels which lack the relevant small-bar uAPI support (see
+	 * also &__drm_i915_memory_region_info.probed_cpu_visible_size),
+	 * usage of the flag will result in an error, but it should NEVER be
+	 * possible to end up with a small BAR configuration, assuming we can
+	 * also successfully load the i915 kernel module. In such cases the
+	 * entire I915_MEMORY_CLASS_DEVICE region will be CPU accessible, and as
+	 * such there are zero restrictions on where the object can be placed.
+	 */
+#define I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS (1 << 0)
+	__u32 flags;
+
+	/**
+	 * @extensions: The chain of extensions to apply to this object.
+	 *
+	 * This will be useful in the future when we need to support several
+	 * different extensions, and we need to apply more than one when
+	 * creating the object. See struct i915_user_extension.
+	 *
+	 * If we don't supply any extensions then we get the same old gem_create
+	 * behaviour.
+	 *
+	 * For I915_GEM_CREATE_EXT_MEMORY_REGIONS usage see
+	 * struct drm_i915_gem_create_ext_memory_regions.
+	 *
+	 * For I915_GEM_CREATE_EXT_PROTECTED_CONTENT usage see
+	 * struct drm_i915_gem_create_ext_protected_content.
+	 */
+#define I915_GEM_CREATE_EXT_MEMORY_REGIONS 0
+#define I915_GEM_CREATE_EXT_PROTECTED_CONTENT 1
+	__u64 extensions;
+};
diff --git a/Documentation/gpu/rfc/i915_small_bar.rst b/Documentation/gpu/rfc/i915_small_bar.rst
new file mode 100644
index 000000000000..d6c03ce3b862
--- /dev/null
+++ b/Documentation/gpu/rfc/i915_small_bar.rst
@@ -0,0 +1,47 @@
+==========================
+I915 Small BAR RFC Section
+==========================
+Starting from DG2 we will have resizable BAR support for device local-memory
+(i.e. I915_MEMORY_CLASS_DEVICE), but in some cases the final BAR size might
+still be smaller than the total probed_size. In such cases, only some subset of
+I915_MEMORY_CLASS_DEVICE will be CPU accessible (for example the first 256M),
+while the remainder is only accessible via the GPU.
+
+I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS flag
+----------------------------------------------
+New gem_create_ext flag to tell the kernel that a BO will require CPU access.
+This becomes important when placing an object in I915_MEMORY_CLASS_DEVICE, where
+underneath the device has a small BAR, meaning only some portion of it is CPU
+accessible. Without this flag the kernel will assume that CPU access is not
+required, and prioritize using the non-CPU visible portion of
+I915_MEMORY_CLASS_DEVICE.
+
+.. kernel-doc:: Documentation/gpu/rfc/i915_small_bar.h
+   :functions: __drm_i915_gem_create_ext
+
+probed_cpu_visible_size attribute
+---------------------------------
+New struct __drm_i915_memory_region_info attribute which returns the total size
+of the CPU accessible portion, for the particular region. This should only be
+applicable for I915_MEMORY_CLASS_DEVICE. We also report the
+unallocated_cpu_visible_size, alongside the unallocated_size.
+
+Vulkan will need this as part of creating a separate VkMemoryHeap with the
+VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT set, to represent the CPU visible portion,
+where the total size of the heap needs to be known. It also wants to be able to
+give a rough estimate of how much memory can potentially be allocated.
+
+.. kernel-doc:: Documentation/gpu/rfc/i915_small_bar.h
+   :functions: __drm_i915_memory_region_info
+
+Error Capture restrictions
+--------------------------
+With error capture we have two new restrictions:
+
+    1) Error capture is best effort on small BAR systems; if the pages are not
+    CPU accessible, at the time of capture, then the kernel is free to skip
+    trying to capture them.
+
+    2) On discrete and newer integrated platforms we now reject error capture
+    on recoverable contexts. In the future the kernel may want to blit during
+    error capture, when for example something is not currently CPU accessible.
diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst
index 91e93a705230..5a3bd3924ba6 100644
--- a/Documentation/gpu/rfc/index.rst
+++ b/Documentation/gpu/rfc/index.rst
@@ -23,3 +23,7 @@ host such documentation:
 .. toctree::
 
    i915_scheduler.rst
+
+.. toctree::
+
+   i915_small_bar.rst
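As a companion to the RFC above: a minimal userspace sketch of the two-call
DRM_I915_QUERY_MEMORY_REGIONS flow documented in the header. This sketch is not
part of the series; it assumes an open i915 DRM fd and a uapi header new enough
to carry the new fields, and error handling is trimmed for brevity.

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static struct drm_i915_query_memory_regions *query_regions(int fd)
{
	struct drm_i915_query_item item = {
		.query_id = DRM_I915_QUERY_MEMORY_REGIONS,
	};
	struct drm_i915_query q = {
		.num_items = 1,
		.items_ptr = (uintptr_t)&item,
	};
	struct drm_i915_query_memory_regions *info;

	/* First call: the kernel fills item.length with the required size. */
	if (ioctl(fd, DRM_IOCTL_I915_QUERY, &q) || item.length <= 0)
		return NULL;

	info = calloc(1, item.length);
	if (!info)
		return NULL;
	item.data_ptr = (uintptr_t)info;

	/* Second call: the kernel copies out the region array. */
	if (ioctl(fd, DRM_IOCTL_I915_QUERY, &q)) {
		free(info);
		return NULL;
	}
	return info;
}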
From patchwork Tue Jun 21 10:44:24 2022
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, Daniel Vetter, Kenneth Graunke, dri-devel@lists.freedesktop.org, Nirmoy Das
Date: Tue, 21 Jun 2022 11:44:24 +0100
Message-Id: <20220621104434.190962-3-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 02/12] drm/i915/uapi: add probed_cpu_visible_size

Userspace wants to know the size of the CPU visible portion of device
local-memory, and on small BAR devices the probed_size is no longer enough. In
Vulkan, for example, it would like to know the size in bytes of the CPU visible
VkMemoryHeap. We already track the io_size for each region, so plumb that
through to the region query.

v2: Drop the ( -1 = unknown ) stuff, which is confusing since nothing can
currently ever return such a value.

Testcase: igt@i915_query@query-regions-sanity-check
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Cc: Lionel Landwerlin
Cc: Tvrtko Ursulin
Cc: Jon Bloomfield
Cc: Daniel Vetter
Cc: Jordan Justen
Cc: Kenneth Graunke
Cc: Akeem G Abodunrin
Acked-by: Nirmoy Das
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/i915/i915_query.c |  6 +++
 include/uapi/drm/i915_drm.h       | 76 +++++++++++++++++--------------
 2 files changed, 48 insertions(+), 34 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_query.c b/drivers/gpu/drm/i915/i915_query.c
index 0094f67c63f2..9894add651dd 100644
--- a/drivers/gpu/drm/i915/i915_query.c
+++ b/drivers/gpu/drm/i915/i915_query.c
@@ -498,6 +498,12 @@ static int query_memregion_info(struct drm_i915_private *i915,
 		info.region.memory_class = mr->type;
 		info.region.memory_instance = mr->instance;
 		info.probed_size = mr->total;
+
+		if (mr->type == INTEL_MEMORY_LOCAL)
+			info.probed_cpu_visible_size = mr->io_size;
+		else
+			info.probed_cpu_visible_size = mr->total;
+
 		info.unallocated_size = mr->avail;
 
 		if (__copy_to_user(info_ptr, &info, sizeof(info)))
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index de49b68b4fc8..7eacacb00373 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -3207,36 +3207,6 @@ struct drm_i915_gem_memory_class_instance {
  * struct drm_i915_memory_region_info - Describes one region as known to the
  * driver.
  *
- * Note that we reserve some stuff here for potential future work. As an example
- * we might want expose the capabilities for a given region, which could include
- * things like if the region is CPU mappable/accessible, what are the supported
- * mapping types etc.
- *
- * Note that to extend struct drm_i915_memory_region_info and struct
- * drm_i915_query_memory_regions in the future the plan is to do the following:
- *
- * .. code-block:: C
- *
- *	struct drm_i915_memory_region_info {
- *		struct drm_i915_gem_memory_class_instance region;
- *		union {
- *			__u32 rsvd0;
- *			__u32 new_thing1;
- *		};
- *		...
- *		union {
- *			__u64 rsvd1[8];
- *			struct {
- *				__u64 new_thing2;
- *				__u64 new_thing3;
- *				...
- *			};
- *		};
- *	};
- *
- * With this things should remain source compatible between versions for
- * userspace, even as we add new fields.
- *
  * Note this is using both struct drm_i915_query_item and struct drm_i915_query.
  * For this new query we are adding the new query id DRM_I915_QUERY_MEMORY_REGIONS
  * at &drm_i915_query_item.query_id.
@@ -3248,14 +3218,52 @@ struct drm_i915_memory_region_info {
 	/** @rsvd0: MBZ */
 	__u32 rsvd0;
 
-	/** @probed_size: Memory probed by the driver (-1 = unknown) */
+	/**
+	 * @probed_size: Memory probed by the driver
+	 *
+	 * Note that it should not be possible to ever encounter a zero value
+	 * here, also note that no current region type will ever return -1 here.
+	 * Although for future region types, this might be a possibility. The
+	 * same applies to the other size fields.
+	 */
 	__u64 probed_size;
 
-	/** @unallocated_size: Estimate of memory remaining (-1 = unknown) */
+	/** @unallocated_size: Estimate of memory remaining */
 	__u64 unallocated_size;
 
-	/** @rsvd1: MBZ */
-	__u64 rsvd1[8];
+	union {
+		/** @rsvd1: MBZ */
+		__u64 rsvd1[8];
+		struct {
+			/**
+			 * @probed_cpu_visible_size: Memory probed by the driver
+			 * that is CPU accessible.
+			 *
+			 * This will always be <= @probed_size, and the
+			 * remainder (if there is any) will not be CPU
+			 * accessible.
+			 *
+			 * On systems without small BAR, the @probed_size will
+			 * always equal the @probed_cpu_visible_size, since all
+			 * of it will be CPU accessible.
+			 *
+			 * Note this is only tracked for
+			 * I915_MEMORY_CLASS_DEVICE regions (for other types the
+			 * value here will always equal the @probed_size).
+			 *
+			 * Note that if the value returned here is zero, then
+			 * this must be an old kernel which lacks the relevant
+			 * small-bar uAPI support (including
+			 * I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS), but on
+			 * such systems we should never actually end up with a
+			 * small BAR configuration, assuming we are able to load
+			 * the kernel module. Hence it should be safe to treat
+			 * this the same as when @probed_cpu_visible_size ==
+			 * @probed_size.
+			 */
+			__u64 probed_cpu_visible_size;
+		};
+	};
 };
 
 /**
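Interpreting the new field from userspace: a rough sketch building on
query_regions() from the earlier example. The helper name is ours, not uAPI. A
zero probed_cpu_visible_size identifies a pre-small-bar kernel, which per the
docs above can safely be treated as fully mappable.

#include <stdio.h>

static void report_small_bar(const struct drm_i915_query_memory_regions *info)
{
	for (unsigned int i = 0; i < info->num_regions; i++) {
		const struct drm_i915_memory_region_info *r = &info->regions[i];

		if (r->region.memory_class != I915_MEMORY_CLASS_DEVICE)
			continue;

		if (!r->probed_cpu_visible_size) {
			/* Old kernel: no small-bar uAPI, so the whole region
			 * must be CPU visible. */
			printf("lmem%u: pre-small-bar kernel, fully mappable\n",
			       r->region.memory_instance);
		} else if (r->probed_cpu_visible_size < r->probed_size) {
			printf("lmem%u: small BAR, %llu of %llu bytes mappable\n",
			       r->region.memory_instance,
			       (unsigned long long)r->probed_cpu_visible_size,
			       (unsigned long long)r->probed_size);
		}
	}
}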
From patchwork Tue Jun 21 10:44:25 2022
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, Daniel Vetter, Kenneth Graunke, dri-devel@lists.freedesktop.org
Date: Tue, 21 Jun 2022 11:44:25 +0100
Message-Id: <20220621104434.190962-4-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 03/12] drm/i915/uapi: expose the avail tracking

Vulkan would like to have a rough measure of how much device memory can in
theory be allocated. Also add unallocated_cpu_visible_size to track the visible
portion, in case the device is using small BAR. Also tweak the locking so we
get nice consistent values for both the mm->avail and the visible tracking.

v2: tweak the locking slightly so we update the mm->avail and visible tracking
as one atomic operation, such that userspace doesn't get strange values when
sampling the values.

Testcase: igt@i915_query@query-regions-unallocated
Testcase: igt@i915_query@query-regions-sanity-check
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Cc: Lionel Landwerlin
Cc: Tvrtko Ursulin
Cc: Jon Bloomfield
Cc: Daniel Vetter
Cc: Jordan Justen
Cc: Kenneth Graunke
Cc: Akeem G Abodunrin
Reported-by: kernel test robot
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/i915/i915_query.c             | 10 +++++-
 drivers/gpu/drm/i915/i915_ttm_buddy_manager.c | 31 ++++++++++++++-----
 drivers/gpu/drm/i915/i915_ttm_buddy_manager.h |  3 ++
 drivers/gpu/drm/i915/intel_memory_region.c    | 14 +++++++++
 drivers/gpu/drm/i915/intel_memory_region.h    |  3 ++
 include/uapi/drm/i915_drm.h                   | 31 ++++++++++++++++++-
 6 files changed, 82 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_query.c b/drivers/gpu/drm/i915/i915_query.c
index 9894add651dd..6ec9c9fb7b0d 100644
--- a/drivers/gpu/drm/i915/i915_query.c
+++ b/drivers/gpu/drm/i915/i915_query.c
@@ -504,7 +504,15 @@ static int query_memregion_info(struct drm_i915_private *i915,
 		else
 			info.probed_cpu_visible_size = mr->total;
 
-		info.unallocated_size = mr->avail;
+		if (perfmon_capable()) {
+			intel_memory_region_avail(mr,
+						  &info.unallocated_size,
+						  &info.unallocated_cpu_visible_size);
+		} else {
+			info.unallocated_size = info.probed_size;
+			info.unallocated_cpu_visible_size =
+				info.probed_cpu_visible_size;
+		}
 
 		if (__copy_to_user(info_ptr, &info, sizeof(info)))
 			return -EFAULT;
diff --git a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
index a5109548abc0..864c8f55cacb 100644
--- a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
+++ b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
@@ -104,18 +104,15 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man,
 				    min_page_size,
 				    &bman_res->blocks,
 				    bman_res->flags);
-	mutex_unlock(&bman->lock);
 	if (unlikely(err))
 		goto err_free_blocks;
 
 	if (place->flags & TTM_PL_FLAG_CONTIGUOUS) {
 		u64 original_size = (u64)bman_res->base.num_pages << PAGE_SHIFT;
 
-		mutex_lock(&bman->lock);
 		drm_buddy_block_trim(mm,
 				     original_size,
 				     &bman_res->blocks);
-		mutex_unlock(&bman->lock);
 	}
 
 	if (lpfn <= bman->visible_size) {
@@ -137,11 +134,10 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man,
 		}
 	}
 
-	if (bman_res->used_visible_size) {
-		mutex_lock(&bman->lock);
+	if (bman_res->used_visible_size)
 		bman->visible_avail -= bman_res->used_visible_size;
-		mutex_unlock(&bman->lock);
-	}
+
+	mutex_unlock(&bman->lock);
 
 	if (place->lpfn - place->fpfn == n_pages)
 		bman_res->base.start = place->fpfn;
@@ -154,7 +150,6 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man,
 	return 0;
 
 err_free_blocks:
-	mutex_lock(&bman->lock);
 	drm_buddy_free_list(mm, &bman_res->blocks);
 	mutex_unlock(&bman->lock);
 err_free_res:
@@ -365,6 +360,26 @@ u64 i915_ttm_buddy_man_visible_size(struct ttm_resource_manager *man)
 	return bman->visible_size;
 }
 
+/**
+ * i915_ttm_buddy_man_avail - Query the avail tracking for the manager.
+ *
+ * @man: The buddy allocator ttm manager
+ * @avail: The total available memory in pages for the entire manager.
+ * @visible_avail: The total available memory in pages for the CPU visible
+ * portion. Note that this will always give the same value as @avail on
+ * configurations that don't have a small BAR.
+ */
+void i915_ttm_buddy_man_avail(struct ttm_resource_manager *man,
+			      u64 *avail, u64 *visible_avail)
+{
+	struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
+
+	mutex_lock(&bman->lock);
+	*avail = bman->mm.avail >> PAGE_SHIFT;
+	*visible_avail = bman->visible_avail;
+	mutex_unlock(&bman->lock);
+}
+
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 void i915_ttm_buddy_man_force_visible_size(struct ttm_resource_manager *man,
 					   u64 size)
diff --git a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
index 52d9586d242c..d64620712830 100644
--- a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
+++ b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
@@ -61,6 +61,9 @@ int i915_ttm_buddy_man_reserve(struct ttm_resource_manager *man,
 
 u64 i915_ttm_buddy_man_visible_size(struct ttm_resource_manager *man);
 
+void i915_ttm_buddy_man_avail(struct ttm_resource_manager *man,
+			      u64 *avail, u64 *avail_visible);
+
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 void i915_ttm_buddy_man_force_visible_size(struct ttm_resource_manager *man,
 					   u64 size);
diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c
index e38d2db1c3e3..94ee26e99549 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/intel_memory_region.c
@@ -279,6 +279,20 @@ void intel_memory_region_set_name(struct intel_memory_region *mem,
 	va_end(ap);
 }
 
+void intel_memory_region_avail(struct intel_memory_region *mr,
+			       u64 *avail, u64 *visible_avail)
+{
+	if (mr->type == INTEL_MEMORY_LOCAL) {
+		i915_ttm_buddy_man_avail(mr->region_private,
+					 avail, visible_avail);
+		*avail <<= PAGE_SHIFT;
+		*visible_avail <<= PAGE_SHIFT;
+	} else {
+		*avail = mr->total;
+		*visible_avail = mr->total;
+	}
+}
+
 void intel_memory_region_destroy(struct intel_memory_region *mem)
 {
 	int ret = 0;
diff --git a/drivers/gpu/drm/i915/intel_memory_region.h b/drivers/gpu/drm/i915/intel_memory_region.h
index 3d8378c1b447..2214f251bec3 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.h
+++ b/drivers/gpu/drm/i915/intel_memory_region.h
@@ -127,6 +127,9 @@ int intel_memory_region_reserve(struct intel_memory_region *mem,
 void intel_memory_region_debug(struct intel_memory_region *mr,
 			       struct drm_printer *printer);
 
+void intel_memory_region_avail(struct intel_memory_region *mr,
+			       u64 *avail, u64 *visible_avail);
+
 struct intel_memory_region *
 i915_gem_ttm_system_setup(struct drm_i915_private *i915,
 			  u16 type, u16 instance);
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 7eacacb00373..e4847436bab8 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -3228,7 +3228,15 @@ struct drm_i915_memory_region_info {
 	 */
 	__u64 probed_size;
 
-	/** @unallocated_size: Estimate of memory remaining */
+	/**
+	 * @unallocated_size: Estimate of memory remaining
+	 *
+	 * Requires CAP_PERFMON or CAP_SYS_ADMIN to get reliable accounting.
+	 * Without this (or if this is an older kernel) the value here will
+	 * always equal the @probed_size. Note this is only currently tracked
+	 * for I915_MEMORY_CLASS_DEVICE regions (for other types the value here
+	 * will always equal the @probed_size).
+	 */
 	__u64 unallocated_size;
 
 	union {
@@ -3262,6 +3270,27 @@ struct drm_i915_memory_region_info {
 		 * @probed_size.
 		 */
 		__u64 probed_cpu_visible_size;
+
+		/**
+		 * @unallocated_cpu_visible_size: Estimate of CPU
+		 * visible memory remaining.
+		 *
+		 * Note this is only tracked for
+		 * I915_MEMORY_CLASS_DEVICE regions (for other types the
+		 * value here will always equal the
+		 * @probed_cpu_visible_size).
+		 *
+		 * Requires CAP_PERFMON or CAP_SYS_ADMIN to get reliable
+		 * accounting. Without this the value here will always
+		 * equal the @probed_cpu_visible_size. Note this is only
+		 * currently tracked for I915_MEMORY_CLASS_DEVICE
+		 * regions (for other types the value here will also
+		 * always equal the @probed_cpu_visible_size).
+		 *
+		 * If this is an older kernel the value here will be
+		 * zero, see also @probed_cpu_visible_size.
+		 */
+		__u64 unallocated_cpu_visible_size;
 		};
 	};
 };
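One plausible consumer of the new fields, sketched under the same assumptions
as the earlier examples: deriving a VkMemoryHeap-style size/budget pair for the
CPU-visible portion. Without CAP_PERFMON or CAP_SYS_ADMIN the unallocated
fields simply mirror the probed sizes, so the budget is best effort only. The
helper name is ours, not uAPI.

static void visible_heap_budget(const struct drm_i915_memory_region_info *r,
				unsigned long long *size,
				unsigned long long *budget)
{
	*size = r->probed_cpu_visible_size;
	*budget = r->unallocated_cpu_visible_size;

	/* Old kernel: no CPU-visible tracking at all, so fall back to the
	 * whole region (see the zero case in the previous sketch). */
	if (!*size) {
		*size = r->probed_size;
		*budget = r->unallocated_size;
	}
}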
From patchwork Tue Jun 21 10:44:26 2022
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, Daniel Vetter, Kenneth Graunke, dri-devel@lists.freedesktop.org
Date: Tue, 21 Jun 2022 11:44:26 +0100
Message-Id: <20220621104434.190962-5-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 04/12] drm/i915: remove intel_memory_region avail

No longer used.

Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Cc: Lionel Landwerlin
Cc: Tvrtko Ursulin
Cc: Jon Bloomfield
Cc: Daniel Vetter
Cc: Jordan Justen
Cc: Kenneth Graunke
Cc: Akeem G Abodunrin
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/i915/intel_memory_region.c | 4 +---
 drivers/gpu/drm/i915/intel_memory_region.h | 1 -
 2 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c
index 94ee26e99549..9a4a7fb55582 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/intel_memory_region.c
@@ -198,8 +198,7 @@ void intel_memory_region_debug(struct intel_memory_region *mr,
 	if (mr->region_private)
 		ttm_resource_manager_debug(mr->region_private, printer);
 	else
-		drm_printf(printer, "total:%pa, available:%pa bytes\n",
-			   &mr->total, &mr->avail);
+		drm_printf(printer, "total:%pa bytes\n", &mr->total);
 }
 
 static int intel_memory_region_memtest(struct intel_memory_region *mem,
@@ -242,7 +241,6 @@ intel_memory_region_create(struct drm_i915_private *i915,
 	mem->min_page_size = min_page_size;
 	mem->ops = ops;
 	mem->total = size;
-	mem->avail = mem->total;
 	mem->type = type;
 	mem->instance = instance;
diff --git a/drivers/gpu/drm/i915/intel_memory_region.h b/drivers/gpu/drm/i915/intel_memory_region.h
index 2214f251bec3..2953ed5c3248 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.h
+++ b/drivers/gpu/drm/i915/intel_memory_region.h
@@ -75,7 +75,6 @@ struct intel_memory_region {
 	resource_size_t io_size;
 	resource_size_t min_page_size;
 	resource_size_t total;
-	resource_size_t avail;
 
 	u16 type;
 	u16 instance;
From patchwork Tue Jun 21 10:44:27 2022
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, Daniel Vetter, Kenneth Graunke, dri-devel@lists.freedesktop.org
Date: Tue, 21 Jun 2022 11:44:27 +0100
Message-Id: <20220621104434.190962-6-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 05/12] drm/i915/uapi: apply ALLOC_GPU_ONLY by default

On small BAR configurations, when dealing with I915_MEMORY_CLASS_DEVICE
allocations, we assume that by default, all userspace allocations should be
placed in the non-CPU visible portion. Note that dumb buffers are not included
here, since these are not "GPU accelerated" and likely need CPU access. We
choose to just always set GPU_ONLY, and let the backend figure out if that
should be ignored or not, for example on full BAR systems. In a later patch
userspace will be able to provide a hint if CPU access to the buffer is needed.

v2(Thomas):
  - Apply GPU_ONLY on all discrete devices, but only if the BO can be placed in
    LMEM. Down in the depths this should be turned into a noop, where required,
    and as an annotation it still makes some sense. If we apply it regardless
    of the placements then we end up needing to check the placements during
    exec capture. Also it's slightly inconsistent since the NEEDS_CPU_ACCESS
    can only be applied on objects that can be placed in LMEM. The other
    annoyance would be gem_create_ext vs plain gem_create, if we were to always
    apply GPU_ONLY.

Testcase: igt@gem-create@create-ext-cpu-access-sanity-check
Testcase: igt@gem-create@create-ext-cpu-access-big
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Cc: Lionel Landwerlin
Cc: Tvrtko Ursulin
Cc: Jon Bloomfield
Cc: Daniel Vetter
Cc: Jordan Justen
Cc: Kenneth Graunke
Cc: Akeem G Abodunrin
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/i915/gem/i915_gem_create.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c
index 5802692ea604..d094cae0ddf1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_create.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c
@@ -427,6 +427,14 @@ i915_gem_create_ext_ioctl(struct drm_device *dev, void *data,
 		ext_data.n_placements = 1;
 	}
 
+	/*
+	 * TODO: add a userspace hint to force CPU_ACCESS for the object, which
+	 * can override this.
+	 */
+	if (ext_data.n_placements > 1 ||
+	    ext_data.placements[0]->type != INTEL_MEMORY_SYSTEM)
+		ext_data.flags |= I915_BO_ALLOC_GPU_ONLY;
+
 	obj = __i915_gem_object_create_user_ext(i915, args->size,
 						ext_data.placements,
 						ext_data.n_placements,

From patchwork Tue Jun 21 10:44:28 2022
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, Kenneth Graunke, dri-devel@lists.freedesktop.org, Daniel Vetter, Nirmoy Das
Date: Tue, 21 Jun 2022 11:44:28 +0100
Message-Id: <20220621104434.190962-7-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 06/12] drm/i915/uapi: add NEEDS_CPU_ACCESS hint

If set, force the allocation to be placed in the mappable portion of
I915_MEMORY_CLASS_DEVICE. One big restriction here is that system memory (i.e.
I915_MEMORY_CLASS_SYSTEM) must be given as a potential placement for the
object, that way we can always spill the object into system memory if we can't
make space.
Testcase: igt@gem-create@create-ext-cpu-access-sanity-check
Testcase: igt@gem-create@create-ext-cpu-access-big
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Cc: Lionel Landwerlin
Cc: Jon Bloomfield
Cc: Daniel Vetter
Cc: Jordan Justen
Cc: Kenneth Graunke
Cc: Akeem G Abodunrin
Reviewed-by: Nirmoy Das
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/i915/gem/i915_gem_create.c | 26 ++++++---
 include/uapi/drm/i915_drm.h                | 61 +++++++++++++++++++---
 2 files changed, 71 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c
index d094cae0ddf1..33673fe7ee0a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_create.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c
@@ -241,6 +241,7 @@ struct create_ext {
 	struct drm_i915_private *i915;
 	struct intel_memory_region *placements[INTEL_REGION_UNKNOWN];
 	unsigned int n_placements;
+	unsigned int placement_mask;
 	unsigned long flags;
 };
 
@@ -337,6 +338,7 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args,
 	for (i = 0; i < args->num_regions; i++)
 		ext_data->placements[i] = placements[i];
 
+	ext_data->placement_mask = mask;
 	return 0;
 
 out_dump:
@@ -411,7 +413,7 @@ i915_gem_create_ext_ioctl(struct drm_device *dev, void *data,
 	struct drm_i915_gem_object *obj;
 	int ret;
 
-	if (args->flags)
+	if (args->flags & ~I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS)
 		return -EINVAL;
 
 	ret = i915_user_extensions(u64_to_user_ptr(args->extensions),
@@ -427,13 +429,21 @@ i915_gem_create_ext_ioctl(struct drm_device *dev, void *data,
 		ext_data.n_placements = 1;
 	}
 
-	/*
-	 * TODO: add a userspace hint to force CPU_ACCESS for the object, which
-	 * can override this.
-	 */
-	if (ext_data.n_placements > 1 ||
-	    ext_data.placements[0]->type != INTEL_MEMORY_SYSTEM)
-		ext_data.flags |= I915_BO_ALLOC_GPU_ONLY;
+	if (args->flags & I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS) {
+		if (ext_data.n_placements == 1)
+			return -EINVAL;
+
+		/*
+		 * We always need to be able to spill to system memory, if we
+		 * can't place in the mappable part of LMEM.
+		 */
+		if (!(ext_data.placement_mask & BIT(INTEL_REGION_SMEM)))
+			return -EINVAL;
+	} else {
+		if (ext_data.n_placements > 1 ||
+		    ext_data.placements[0]->type != INTEL_MEMORY_SYSTEM)
+			ext_data.flags |= I915_BO_ALLOC_GPU_ONLY;
+	}
 
 	obj = __i915_gem_object_create_user_ext(i915, args->size,
 						ext_data.placements,
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index e4847436bab8..3e78a00220ea 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -3366,11 +3366,11 @@ struct drm_i915_query_memory_regions {
  * struct drm_i915_gem_create_ext - Existing gem_create behaviour, with added
  * extension support using struct i915_user_extension.
  *
- * Note that in the future we want to have our buffer flags here, at least for
- * the stuff that is immutable. Previously we would have two ioctls, one to
- * create the object with gem_create, and another to apply various parameters,
- * however this creates some ambiguity for the params which are considered
- * immutable. Also in general we're phasing out the various SET/GET ioctls.
+ * Note that new buffer flags should be added here, at least for the stuff that
+ * is immutable. Previously we would have two ioctls, one to create the object
+ * with gem_create, and another to apply various parameters, however this
+ * creates some ambiguity for the params which are considered immutable. Also in
+ * general we're phasing out the various SET/GET ioctls.
  */
 struct drm_i915_gem_create_ext {
 	/**
@@ -3378,7 +3378,6 @@ struct drm_i915_gem_create_ext {
 	 *
 	 * The (page-aligned) allocated size for the object will be returned.
 	 *
-	 *
 	 * DG2 64K min page size implications:
 	 *
 	 * On discrete platforms, starting from DG2, we have to contend with GTT
@@ -3390,7 +3389,9 @@ struct drm_i915_gem_create_ext {
 	 *
 	 * Note that the returned size here will always reflect any required
 	 * rounding up done by the kernel, i.e 4K will now become 64K on devices
-	 * such as DG2.
+	 * such as DG2. The kernel will always select the largest minimum
+	 * page-size for the set of possible placements as the value to use when
+	 * rounding up the @size.
 	 *
 	 * Special DG2 GTT address alignment requirement:
 	 *
@@ -3414,14 +3415,58 @@ struct drm_i915_gem_create_ext {
 	 * is deemed to be a good compromise.
 	 */
 	__u64 size;
+
 	/**
 	 * @handle: Returned handle for the object.
 	 *
 	 * Object handles are nonzero.
 	 */
 	__u32 handle;
-	/** @flags: MBZ */
+
+	/**
+	 * @flags: Optional flags.
+	 *
+	 * Supported values:
+	 *
+	 * I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS - Signal to the kernel that
+	 * the object will need to be accessed via the CPU.
+	 *
+	 * Only valid when placing objects in I915_MEMORY_CLASS_DEVICE, and only
+	 * strictly required on configurations where some subset of the device
+	 * memory is directly visible/mappable through the CPU (which we also
+	 * call small BAR), like on some DG2+ systems. Note that this is quite
+	 * undesirable, but due to various factors like the client CPU, BIOS etc
+	 * it's something we can expect to see in the wild. See
+	 * &drm_i915_memory_region_info.probed_cpu_visible_size for how to
+	 * determine if this system applies.
+	 *
+	 * Note that one of the placements MUST be I915_MEMORY_CLASS_SYSTEM, to
+	 * ensure the kernel can always spill the allocation to system memory,
+	 * if the object can't be allocated in the mappable part of
+	 * I915_MEMORY_CLASS_DEVICE.
+	 *
+	 * Also note that since the kernel only supports flat-CCS on objects
+	 * that can *only* be placed in I915_MEMORY_CLASS_DEVICE, we therefore
+	 * don't support I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS together with
+	 * flat-CCS.
+	 *
+	 * Without this hint, the kernel will assume that non-mappable
+	 * I915_MEMORY_CLASS_DEVICE is preferred for this object. Note that the
+	 * kernel can still migrate the object to the mappable part, as a last
+	 * resort, if userspace ever CPU faults this object, but this might be
+	 * expensive, and so ideally should be avoided.
+	 *
+	 * On older kernels which lack the relevant small-bar uAPI support (see
+	 * also &drm_i915_memory_region_info.probed_cpu_visible_size),
+	 * usage of the flag will result in an error, but it should NEVER be
+	 * possible to end up with a small BAR configuration, assuming we can
+	 * also successfully load the i915 kernel module. In such cases the
+	 * entire I915_MEMORY_CLASS_DEVICE region will be CPU accessible, and as
+	 * such there are zero restrictions on where the object can be placed.
+	 */
+#define I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS (1 << 0)
 	__u32 flags;
+
 	/**
 	 * @extensions: The chain of extensions to apply to this object.
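Tying the new flag to the existing memory-regions extension, a hedged sketch of
the create path described above. The lmem instance 0 and the helper name are
assumptions for illustration, the flag itself only exists once this series is
applied, and error handling is elided.

static int create_cpu_visible_bo(int fd, __u64 size, __u32 *handle)
{
	/* Mappable lmem preferred, but smem MUST be a valid spill target. */
	struct drm_i915_gem_memory_class_instance placements[] = {
		{ .memory_class = I915_MEMORY_CLASS_DEVICE, .memory_instance = 0 },
		{ .memory_class = I915_MEMORY_CLASS_SYSTEM, .memory_instance = 0 },
	};
	struct drm_i915_gem_create_ext_memory_regions regions = {
		.base = { .name = I915_GEM_CREATE_EXT_MEMORY_REGIONS },
		.num_regions = 2,
		.regions = (uintptr_t)placements,
	};
	struct drm_i915_gem_create_ext create = {
		.size = size,
		.flags = I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS,
		.extensions = (uintptr_t)&regions,
	};
	int ret = ioctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create);

	if (!ret)
		*handle = create.handle;
	return ret;
}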
From patchwork Tue Jun 21 10:44:29 2022
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, Daniel Vetter, Kenneth Graunke, dri-devel@lists.freedesktop.org, Nirmoy Das
Date: Tue, 21 Jun 2022 11:44:29 +0100
Message-Id: <20220621104434.190962-8-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 07/12] drm/i915/error: skip non-mappable pages

Skip capturing any lmem pages that can't be copied using the CPU. This is now
only best effort on platforms that have small BAR.
Testcase: igt@gem-exec-capture@capture-invisible
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Cc: Lionel Landwerlin
Cc: Tvrtko Ursulin
Cc: Jon Bloomfield
Cc: Daniel Vetter
Cc: Jordan Justen
Cc: Kenneth Graunke
Cc: Akeem G Abodunrin
Reviewed-by: Nirmoy Das
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/i915/i915_gpu_error.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index bff8a111424a..2a1cb8b20381 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1125,11 +1125,15 @@ i915_vma_coredump_create(const struct intel_gt *gt,
 		dma_addr_t dma;
 
 		for_each_sgt_daddr(dma, iter, vma_res->bi.pages) {
+			dma_addr_t offset = dma - mem->region.start;
 			void __iomem *s;
 
-			s = io_mapping_map_wc(&mem->iomap,
-					      dma - mem->region.start,
-					      PAGE_SIZE);
+			if (offset + PAGE_SIZE > mem->io_size) {
+				ret = -EINVAL;
+				break;
+			}
+
+			s = io_mapping_map_wc(&mem->iomap, offset, PAGE_SIZE);
 
 			ret = compress_page(compress,
 					    (void __force *)s, dst, true);

From patchwork Tue Jun 21 10:44:30 2022
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, Daniel Vetter, Kenneth Graunke, dri-devel@lists.freedesktop.org
Date: Tue, 21 Jun 2022 11:44:30 +0100
Message-Id: <20220621104434.190962-9-matthew.auld@intel.com>
Testcase: igt@gem_exec_capture@capture-recoverable
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Cc: Lionel Landwerlin
Cc: Tvrtko Ursulin
Cc: Jon Bloomfield
Cc: Daniel Vetter
Cc: Jordan Justen
Cc: Kenneth Graunke
Cc: Akeem G Abodunrin
Reviewed-by: Thomas Hellström
Reported-by: kernel test robot
---
 drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 30fe847c6664..6ac88e2193f1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1951,7 +1951,7 @@ eb_find_first_request_added(struct i915_execbuffer *eb)
 #if IS_ENABLED(CONFIG_DRM_I915_CAPTURE_ERROR)

 /* Stage with GFP_KERNEL allocations before we enter the signaling critical path */
-static void eb_capture_stage(struct i915_execbuffer *eb)
+static int eb_capture_stage(struct i915_execbuffer *eb)
 {
 	const unsigned int count = eb->buffer_count;
 	unsigned int i = count, j;
@@ -1964,6 +1964,10 @@ static void eb_capture_stage(struct i915_execbuffer *eb)
 		if (!(flags & EXEC_OBJECT_CAPTURE))
 			continue;

+		if (i915_gem_context_is_recoverable(eb->gem_context) &&
+		    (IS_DGFX(eb->i915) || GRAPHICS_VER_FULL(eb->i915) > IP_VER(12, 0)))
+			return -EINVAL;
+
 		for_each_batch_create_order(eb, j) {
 			struct i915_capture_list *capture;

@@ -1976,6 +1980,8 @@ static void eb_capture_stage(struct i915_execbuffer *eb)
 			eb->capture_lists[j] = capture;
 		}
 	}
+
+	return 0;
 }

 /* Commit once we're in the critical path */
@@ -2017,7 +2023,7 @@ static void eb_capture_list_clear(struct i915_execbuffer *eb)

 #else

-static void eb_capture_stage(struct i915_execbuffer *eb)
+static int eb_capture_stage(struct i915_execbuffer *eb)
 {
 }

@@ -3410,7 +3416,9 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 	}

 	ww_acquire_done(&eb.ww.ctx);

-	eb_capture_stage(&eb);
+	err = eb_capture_stage(&eb);
+	if (err)
+		goto err_vma;

 	out_fence = eb_requests_create(&eb, in_fence, out_fence_fd);
 	if (IS_ERR(out_fence)) {

From patchwork Tue Jun 21 10:44:31 2022
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 12889018
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, Daniel Vetter, Kenneth Graunke, dri-devel@lists.freedesktop.org
Date: Tue, 21 Jun 2022 11:44:31 +0100
Message-Id: <20220621104434.190962-10-matthew.auld@intel.com>
In-Reply-To: <20220621104434.190962-1-matthew.auld@intel.com>
References: <20220621104434.190962-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 09/12] drm/i915/selftests: ensure we reserve a fence slot

We should always be explicit and allocate a fence slot before adding a new fence.
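The underlying dma-resv contract is that a slot must be reserved, under the object lock, before dma_resv_add_fence() is called, since the add itself is not allowed to allocate. In sketch form (matching the hunk below):

	/* Reserve space first: dma_resv_add_fence() itself must not allocate. */
	err = dma_resv_reserve_fences(obj->base.resv, 1);
	if (!err)
		dma_resv_add_fence(obj->base.resv, &rq->fence,
				   DMA_RESV_USAGE_KERNEL);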
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Cc: Lionel Landwerlin
Cc: Tvrtko Ursulin
Cc: Jon Bloomfield
Cc: Daniel Vetter
Cc: Jordan Justen
Cc: Kenneth Graunke
Cc: Akeem G Abodunrin
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 5bc93a1ce3e3..7c95b6768610 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -1221,8 +1221,10 @@ static int __igt_mmap_migrate(struct intel_memory_region **placements,
 					  expand32(POISON_INUSE), &rq);
 	i915_gem_object_unpin_pages(obj);
 	if (rq) {
-		dma_resv_add_fence(obj->base.resv, &rq->fence,
-				   DMA_RESV_USAGE_KERNEL);
+		err = dma_resv_reserve_fences(obj->base.resv, 1);
+		if (!err)
+			dma_resv_add_fence(obj->base.resv, &rq->fence,
+					   DMA_RESV_USAGE_KERNEL);
 		i915_request_put(rq);
 	}
 	i915_gem_object_unlock(obj);

From patchwork Tue Jun 21 10:44:32 2022
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 12889019
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, Daniel Vetter, Kenneth Graunke, dri-devel@lists.freedesktop.org
Date: Tue, 21 Jun 2022 11:44:32 +0100
Message-Id: <20220621104434.190962-11-matthew.auld@intel.com>
In-Reply-To: <20220621104434.190962-1-matthew.auld@intel.com>
References: <20220621104434.190962-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 10/12] drm/i915/ttm: handle blitter failure on DG2
If the move or clear operation somehow fails, and the memory underneath is not cleared, like when moving to lmem, then we currently fallback to memcpy or memset. However with small-BAR systems this fallback might no longer be possible. For now we use the set_wedged sledgehammer if we ever encounter such a scenario, and mark the object as borked to plug any holes where access to the memory underneath can happen. Add some basic selftests to exercise this.
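For readers of the new object state, the intended access pattern is roughly the following (a sketch built from the helpers this patch touches; the flag is only meaningful once the moving fence has signalled):

	/*
	 * ttm_unknown_state may only be inspected behind the dma-resv lock,
	 * after the moving fence has signalled. With this patch the wait
	 * itself reports -EIO for an object whose move/clear failed with no
	 * possible memcpy/memset fallback.
	 */
	err = i915_gem_object_lock(obj, NULL);
	if (!err) {
		err = i915_gem_object_wait_moving_fence(obj, true);
		i915_gem_object_unlock(obj);
	}
	if (err == -EIO) {
		/* pages underneath are in an unknown state; don't touch them */
	}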
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Cc: Lionel Landwerlin
Cc: Tvrtko Ursulin
Cc: Jon Bloomfield
Cc: Daniel Vetter
Cc: Jordan Justen
Cc: Kenneth Graunke
Cc: Akeem G Abodunrin
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |  2 +
 .../gpu/drm/i915/gem/i915_gem_object_types.h  | 17 ++++
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c       | 17 +++-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.h       |  5 +
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c  | 84 ++++++++++++++---
 drivers/gpu/drm/i915/gem/i915_gem_ttm_move.h  |  1 +
 .../drm/i915/gem/selftests/i915_gem_migrate.c | 94 ++++++++++++-----
 .../drm/i915/gem/selftests/i915_gem_mman.c    | 54 +++++++++++
 drivers/gpu/drm/i915/i915_vma.c               | 25 ++---
 9 files changed, 248 insertions(+), 51 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 06b1b188ce5a..741d7df4e6ff 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -783,6 +783,8 @@ int i915_gem_object_wait_moving_fence(struct drm_i915_gem_object *obj,
 				    intr, MAX_SCHEDULE_TIMEOUT);
 	if (!ret)
 		ret = -ETIME;
+	else if (ret > 0 && obj->mm.ttm_unknown_state)
+		ret = -EIO;

 	return ret < 0 ? ret : 0;
 }
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 2c88bdb8ff7c..40449e384038 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -547,6 +547,23 @@ struct drm_i915_gem_object {
 	 */
 	bool ttm_shrinkable;

+	/**
+	 * @ttm_unknown_state: Indicate that the object is effectively
+	 * borked. This is write-once and set if we somehow encounter a
+	 * fatal error when moving/clearing the pages, and we are not
+	 * able to fallback to memcpy/memset, like on small-BAR systems.
+	 * The GPU should also be wedged (or in the process) at this
+	 * point.
+	 *
+	 * Only valid to read this after acquiring the dma-resv lock and
+	 * waiting for all DMA_RESV_USAGE_KERNEL fences to be signalled,
+	 * or if we otherwise know that the moving fence has signalled,
+	 * and we are certain the pages underneath are valid for
+	 * immediate access (under normal operation), like just prior to
+	 * binding the object or when setting up the CPU fault handler.
+	 */
+	bool ttm_unknown_state;
+
 	/**
 	 * Priority list of potential placements for this object.
 	 */
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 4c25d9b2f138..8fc03b5a1d4e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -266,8 +266,7 @@ static const struct i915_refct_sgt_ops tt_rsgt_ops = {
 	.release = i915_ttm_tt_release
 };

-static inline bool
-i915_gem_object_needs_ccs_pages(struct drm_i915_gem_object *obj)
+bool i915_gem_object_needs_ccs_pages(struct drm_i915_gem_object *obj)
 {
 	bool lmem_placement = false;
 	int i;
@@ -675,7 +674,7 @@ static void i915_ttm_swap_notify(struct ttm_buffer_object *bo)
 		i915_ttm_purge(obj);
 }

-static bool i915_ttm_resource_mappable(struct ttm_resource *res)
+bool i915_ttm_resource_mappable(struct ttm_resource *res)
 {
 	struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(res);

@@ -1054,8 +1053,16 @@ static vm_fault_t vm_fault_ttm(struct vm_fault *vmf)
 	}

 	if (drm_dev_enter(dev, &idx)) {
-		ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
-					       TTM_BO_VM_NUM_PREFAULT);
+		/*
+		 * Ensure we check for any fatal errors if we had to move/clear
+		 * the object. The device should already be wedged if we hit
+		 * such an error.
+		 */
+		if (i915_gem_object_wait_moving_fence(obj, true))
+			ret = VM_FAULT_SIGBUS;
+		else
+			ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
+						       TTM_BO_VM_NUM_PREFAULT);
 		drm_dev_exit(idx);
 	} else {
 		ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
index 73e371aa3850..907803930f44 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
@@ -69,6 +69,8 @@ void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj);

 int i915_ttm_purge(struct drm_i915_gem_object *obj);

+bool i915_gem_object_needs_ccs_pages(struct drm_i915_gem_object *obj);
+
 /**
  * i915_ttm_gtt_binds_lmem - Should the memory be viewed as LMEM by the GTT?
  * @mem: struct ttm_resource representing the memory.
@@ -92,4 +94,7 @@ static inline bool i915_ttm_cpu_maps_iomem(struct ttm_resource *mem)
 	/* Once / if we support GGTT, this is also false for cached ttm_tts */
 	return mem->mem_type != I915_PL_SYSTEM;
 }
+
+bool i915_ttm_resource_mappable(struct ttm_resource *res);
+
 #endif
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
index a10716f4e717..60b34dbb14f8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
@@ -33,6 +33,7 @@
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 static bool fail_gpu_migration;
 static bool fail_work_allocation;
+static bool ban_memcpy;

 void i915_ttm_migrate_set_failure_modes(bool gpu_migration,
 					bool work_allocation)
@@ -40,6 +41,11 @@ void i915_ttm_migrate_set_failure_modes(bool gpu_migration,
 	fail_gpu_migration = gpu_migration;
 	fail_work_allocation = work_allocation;
 }
+
+void i915_ttm_migrate_set_ban_memcpy(bool ban)
+{
+	ban_memcpy = ban;
+}
 #endif

 static enum i915_cache_level
@@ -258,15 +264,23 @@ struct i915_ttm_memcpy_arg {
  * from the callback for lockdep reasons.
  * @cb: Callback for the accelerated migration fence.
  * @arg: The argument for the memcpy functionality.
+ * @i915: The i915 pointer.
+ * @obj: The GEM object.
+ * @memcpy_allowed: Instead of processing the @arg, and falling back to memcpy
+ * or memset, we wedge the device and set the @obj ttm_unknown_state, to prevent
+ * further access to the object with the CPU or GPU. On some devices we might
+ * only be permitted to use the blitter engine for such operations.
  */
 struct i915_ttm_memcpy_work {
 	struct dma_fence fence;
 	struct work_struct work;
-	/* The fence lock */
 	spinlock_t lock;
 	struct irq_work irq_work;
 	struct dma_fence_cb cb;
 	struct i915_ttm_memcpy_arg arg;
+	struct drm_i915_private *i915;
+	struct drm_i915_gem_object *obj;
+	bool memcpy_allowed;
 };

@@ -319,12 +333,27 @@ static void __memcpy_work(struct work_struct *work)
 	struct i915_ttm_memcpy_arg *arg = &copy_work->arg;
 	bool cookie = dma_fence_begin_signalling();

-	i915_ttm_move_memcpy(arg);
+	if (copy_work->memcpy_allowed) {
+		i915_ttm_move_memcpy(arg);
+	} else {
+		/*
+		 * Prevent further use of the object. Any future GTT binding or
+		 * CPU access is not allowed once we signal the fence. Outside
+		 * of the fence critical section, we then also wedge the gpu to
+		 * indicate the device is not functional.
+		 */
+		copy_work->obj->mm.ttm_unknown_state = true;
+	}
+
 	dma_fence_end_signalling(cookie);

 	dma_fence_signal(&copy_work->fence);

+	if (!copy_work->memcpy_allowed)
+		intel_gt_set_wedged(&copy_work->i915->gt0);
+
 	i915_ttm_memcpy_release(arg);
+	i915_gem_object_put(copy_work->obj);
 	dma_fence_put(&copy_work->fence);
 }
@@ -336,6 +365,7 @@ static void __memcpy_irq_work(struct irq_work *irq_work)

 	dma_fence_signal(&copy_work->fence);
 	i915_ttm_memcpy_release(arg);
+	i915_gem_object_put(copy_work->obj);
 	dma_fence_put(&copy_work->fence);
 }
@@ -389,6 +419,16 @@ i915_ttm_memcpy_work_arm(struct i915_ttm_memcpy_work *work,
 	return &work->fence;
 }

+static bool i915_ttm_memcpy_allowed(struct ttm_buffer_object *bo,
+				    struct ttm_resource *dst_mem)
+{
+	if (!(i915_ttm_resource_mappable(bo->resource) &&
+	      i915_ttm_resource_mappable(dst_mem)))
+		return false;
+
+	return I915_SELFTEST_ONLY(ban_memcpy) ? false : true;
+}
+
 static struct dma_fence *
 __i915_ttm_move(struct ttm_buffer_object *bo,
 		const struct ttm_operation_ctx *ctx, bool clear,
@@ -396,6 +436,9 @@ __i915_ttm_move(struct ttm_buffer_object *bo,
 		struct i915_refct_sgt *dst_rsgt, bool allow_accel,
 		const struct i915_deps *move_deps)
 {
+	const bool memcpy_allowed = i915_ttm_memcpy_allowed(bo, dst_mem);
+	struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
+	struct drm_i915_private *i915 = to_i915(bo->base.dev);
 	struct i915_ttm_memcpy_work *copy_work = NULL;
 	struct i915_ttm_memcpy_arg _arg, *arg = &_arg;
 	struct dma_fence *fence = ERR_PTR(-EINVAL);
@@ -423,9 +466,14 @@ __i915_ttm_move(struct ttm_buffer_object *bo,
 		copy_work = kzalloc(sizeof(*copy_work), GFP_KERNEL);

 		if (copy_work) {
+			copy_work->i915 = i915;
+			copy_work->memcpy_allowed = memcpy_allowed;
+			copy_work->obj = i915_gem_object_get(obj);
 			arg = &copy_work->arg;
-			i915_ttm_memcpy_init(arg, bo, clear, dst_mem, dst_ttm,
-					     dst_rsgt);
+			if (memcpy_allowed)
+				i915_ttm_memcpy_init(arg, bo, clear, dst_mem,
+						     dst_ttm, dst_rsgt);
+
 			fence = i915_ttm_memcpy_work_arm(copy_work, dep);
 		} else {
 			dma_fence_wait(dep, false);
@@ -450,17 +498,26 @@ __i915_ttm_move(struct ttm_buffer_object *bo,
 	}

 	/* Error intercept failed or no accelerated migration to start with */
-	if (!copy_work)
-		i915_ttm_memcpy_init(arg, bo, clear, dst_mem, dst_ttm,
-				     dst_rsgt);
-	i915_ttm_move_memcpy(arg);
-	i915_ttm_memcpy_release(arg);
+
+	if (memcpy_allowed) {
+		if (!copy_work)
+			i915_ttm_memcpy_init(arg, bo, clear, dst_mem, dst_ttm,
+					     dst_rsgt);
+		i915_ttm_move_memcpy(arg);
+		i915_ttm_memcpy_release(arg);
+	} else {
+		intel_gt_set_wedged(&i915->gt0);
+		obj->mm.ttm_unknown_state = true;
+	}
+
+	if (copy_work)
+		i915_gem_object_put(copy_work->obj);
 	kfree(copy_work);

-	return NULL;
+	return memcpy_allowed ? NULL : ERR_PTR(-EIO);

 out:
 	if (!fence && copy_work) {
 		i915_ttm_memcpy_release(arg);
+		i915_gem_object_put(copy_work->obj);
 		kfree(copy_work);
 	}
@@ -539,8 +596,11 @@ int i915_ttm_move(struct ttm_buffer_object *bo, bool evict,
 	}

 	if (migration_fence) {
-		ret = ttm_bo_move_accel_cleanup(bo, migration_fence, evict,
-						true, dst_mem);
+		if (I915_SELFTEST_ONLY(evict && fail_gpu_migration))
+			ret = -EIO; /* never feed non-migrate fences into ttm */
+		else
+			ret = ttm_bo_move_accel_cleanup(bo, migration_fence, evict,
+							true, dst_mem);
 		if (ret) {
 			dma_fence_wait(migration_fence, false);
 			ttm_bo_move_sync_cleanup(bo, dst_mem);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.h b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.h
index d2e7f149e05c..8a5d5ab0cc34 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.h
@@ -22,6 +22,7 @@ int i915_ttm_move_notify(struct ttm_buffer_object *bo);

 I915_SELFTEST_DECLARE(void i915_ttm_migrate_set_failure_modes(bool gpu_migration,
 							       bool work_allocation));
+I915_SELFTEST_DECLARE(void i915_ttm_migrate_set_ban_memcpy(bool ban));

 int i915_gem_obj_copy_ttm(struct drm_i915_gem_object *dst,
 			  struct drm_i915_gem_object *src,
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c
index 801af51aff62..3fb8bcb04cae 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c
@@ -9,6 +9,7 @@

 #include "i915_deps.h"

+#include "selftests/igt_reset.h"
 #include "selftests/igt_spinner.h"

 static int igt_fill_check_buffer(struct drm_i915_gem_object *obj,
@@ -109,7 +110,8 @@ static int igt_same_create_migrate(void *arg)

 static int lmem_pages_migrate_one(struct i915_gem_ww_ctx *ww,
 				  struct drm_i915_gem_object *obj,
-				  struct i915_vma *vma)
+				  struct i915_vma *vma,
+				  bool silent_migrate)
 {
 	int err;
@@ -138,7 +140,8 @@ static int lmem_pages_migrate_one(struct i915_gem_ww_ctx *ww,
 	if (i915_gem_object_is_lmem(obj)) {
 		err = i915_gem_object_migrate(obj, ww, INTEL_REGION_SMEM);
 		if (err) {
-			pr_err("Object failed migration to smem\n");
+			if (!silent_migrate)
+				pr_err("Object failed migration to smem\n");
 			if (err)
 				return err;
 		}
@@ -156,7 +159,8 @@ static int lmem_pages_migrate_one(struct i915_gem_ww_ctx *ww,
 	} else {
 		err = i915_gem_object_migrate(obj, ww, INTEL_REGION_LMEM_0);
 		if (err) {
-			pr_err("Object failed migration to lmem\n");
+			if (!silent_migrate)
+				pr_err("Object failed migration to lmem\n");
 			if (err)
 				return err;
 		}
@@ -179,7 +183,8 @@ static int __igt_lmem_pages_migrate(struct intel_gt *gt,
 				    struct i915_address_space *vm,
 				    struct i915_deps *deps,
 				    struct igt_spinner *spin,
-				    struct dma_fence *spin_fence)
+				    struct dma_fence *spin_fence,
+				    bool borked_migrate)
 {
 	struct drm_i915_private *i915 = gt->i915;
 	struct drm_i915_gem_object *obj;
@@ -242,7 +247,8 @@ static int __igt_lmem_pages_migrate(struct intel_gt *gt,
 	 */
 	for (i = 1; i <= 5; ++i) {
 		for_i915_gem_ww(&ww, err, true)
-			err = lmem_pages_migrate_one(&ww, obj, vma);
+			err = lmem_pages_migrate_one(&ww, obj, vma,
+						     borked_migrate);
 		if (err)
 			goto out_put;
 	}
@@ -276,6 +282,9 @@ static int __igt_lmem_pages_migrate(struct intel_gt *gt,
 out_unlock:
 	i915_gem_object_unlock(obj);
 out_put:
+	if (borked_migrate && !obj->mm.ttm_unknown_state)
+		err = -EINVAL;
+
 	i915_gem_object_put(obj);

 	return err;
@@ -283,23 +292,45 @@ static int __igt_lmem_pages_migrate(struct intel_gt *gt,

 static int igt_lmem_pages_failsafe_migrate(void *arg)
 {
-	int fail_gpu, fail_alloc, ret;
+	int fail_gpu, fail_alloc, ban_memcpy, ret;
 	struct intel_gt *gt = arg;

 	for (fail_gpu = 0; fail_gpu < 2; ++fail_gpu) {
 		for (fail_alloc = 0; fail_alloc < 2; ++fail_alloc) {
-			pr_info("Simulated failure modes: gpu: %d, alloc: %d\n",
-				fail_gpu, fail_alloc);
-			i915_ttm_migrate_set_failure_modes(fail_gpu,
-							   fail_alloc);
-			ret = __igt_lmem_pages_migrate(gt, NULL, NULL, NULL, NULL);
-			if (ret)
-				goto out_err;
+			for (ban_memcpy = 0; ban_memcpy < 2; ++ban_memcpy) {
+				pr_info("Simulated failure modes: gpu: %d, alloc: %d, ban_memcpy: %d\n",
+					fail_gpu, fail_alloc, ban_memcpy);
+				i915_ttm_migrate_set_ban_memcpy(ban_memcpy);
+				i915_ttm_migrate_set_failure_modes(fail_gpu,
+								   fail_alloc);
+				ret = __igt_lmem_pages_migrate(gt, NULL, NULL,
+							       NULL, NULL,
+							       ban_memcpy &&
+							       fail_gpu);
+
+				if (ban_memcpy && fail_gpu) {
+					if (ret != -EIO)
+						ret = -EINVAL;
+					else
+						ret = 0;
+
+					if (test_bit(I915_WEDGED, &gt->reset.flags)) {
+						igt_global_reset_lock(gt);
+						intel_gt_reset(gt, ALL_ENGINES, NULL);
+						igt_global_reset_unlock(gt);
+					} else {
+						ret = -EINVAL;
+					}
+				}
+				if (ret)
+					goto out_err;
+			}
 		}
 	}

 out_err:
 	i915_ttm_migrate_set_failure_modes(false, false);
+	i915_ttm_migrate_set_ban_memcpy(false);
 	return ret;
 }
@@ -370,7 +401,7 @@ static int igt_async_migrate(struct intel_gt *gt)
 			goto out_ce;

 		err = __igt_lmem_pages_migrate(gt, &ppgtt->vm, &deps, &spin,
-					       spin_fence);
+					       spin_fence, false);
 		i915_deps_fini(&deps);
 		dma_fence_put(spin_fence);
 		if (err)
@@ -394,23 +425,42 @@ static int igt_async_migrate(struct intel_gt *gt)
 #define ASYNC_FAIL_ALLOC 1
 static int igt_lmem_async_migrate(void *arg)
 {
-	int fail_gpu, fail_alloc, ret;
+	int fail_gpu, fail_alloc, ban_memcpy, ret;
 	struct intel_gt *gt = arg;

 	for (fail_gpu = 0; fail_gpu < 2; ++fail_gpu) {
 		for (fail_alloc = 0; fail_alloc < ASYNC_FAIL_ALLOC; ++fail_alloc) {
-			pr_info("Simulated failure modes: gpu: %d, alloc: %d\n",
-				fail_gpu, fail_alloc);
-			i915_ttm_migrate_set_failure_modes(fail_gpu,
-							   fail_alloc);
-			ret = igt_async_migrate(gt);
-			if (ret)
-				goto out_err;
+			for (ban_memcpy = 0; ban_memcpy < 2; ++ban_memcpy) {
+				pr_info("Simulated failure modes: gpu: %d, alloc: %d, ban_memcpy: %d\n",
+					fail_gpu, fail_alloc, ban_memcpy);
+				i915_ttm_migrate_set_ban_memcpy(ban_memcpy);
+				i915_ttm_migrate_set_failure_modes(fail_gpu,
+								   fail_alloc);
+				ret = igt_async_migrate(gt);
+
+				if (fail_gpu && ban_memcpy) {
+					if (ret != -EIO)
+						ret = -EINVAL;
+					else
+						ret = 0;
+
+					if (test_bit(I915_WEDGED, &gt->reset.flags)) {
+						igt_global_reset_lock(gt);
+						intel_gt_reset(gt, ALL_ENGINES, NULL);
+						igt_global_reset_unlock(gt);
+					} else {
+						ret = -EINVAL;
+					}
+				}
+				if (ret)
+					goto out_err;
+			}
 		}
 	}

 out_err:
 	i915_ttm_migrate_set_failure_modes(false, false);
+	i915_ttm_migrate_set_ban_memcpy(false);
 	return ret;
 }
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 7c95b6768610..a052e90fa551 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -10,6 +10,7 @@
 #include "gem/i915_gem_internal.h"
 #include "gem/i915_gem_region.h"
 #include "gem/i915_gem_ttm.h"
+#include "gem/i915_gem_ttm_move.h"

 #include "gt/intel_engine_pm.h"
 #include "gt/intel_gpu_commands.h"
 #include "gt/intel_gt.h"
@@ -21,6 +22,7 @@
 #include "i915_selftest.h"
 #include "selftests/i915_random.h"
 #include "selftests/igt_flush_test.h"
+#include "selftests/igt_reset.h"
 #include "selftests/igt_mmap.h"

 struct tile {
@@ -1160,6 +1162,7 @@ static int ___igt_mmap_migrate(struct drm_i915_private *i915,
 #define IGT_MMAP_MIGRATE_FILL (1 << 1)
 #define IGT_MMAP_MIGRATE_EVICTABLE (1 << 2)
 #define IGT_MMAP_MIGRATE_UNFAULTABLE (1 << 3)
+#define IGT_MMAP_MIGRATE_FAIL_GPU (1 << 4)
 static int __igt_mmap_migrate(struct intel_memory_region **placements,
 			      int n_placements,
 			      struct intel_memory_region *expected_mr,
@@ -1234,13 +1237,47 @@ static int __igt_mmap_migrate(struct intel_memory_region **placements,
 	if (flags & IGT_MMAP_MIGRATE_EVICTABLE)
 		igt_make_evictable(&objects);

+	if (flags & IGT_MMAP_MIGRATE_FAIL_GPU) {
+		err = i915_gem_object_lock(obj, NULL);
+		if (err)
+			goto out_put;
+
+		/*
+		 * Ensure we only simulate the gpu failure when faulting the
+		 * pages.
+		 */
+		err = i915_gem_object_wait_moving_fence(obj, true);
+		i915_gem_object_unlock(obj);
+		if (err)
+			goto out_put;
+		i915_ttm_migrate_set_failure_modes(true, false);
+	}
+
 	err = ___igt_mmap_migrate(i915, obj, addr,
 				  flags & IGT_MMAP_MIGRATE_UNFAULTABLE);
+
 	if (!err && obj->mm.region != expected_mr) {
 		pr_err("%s region mismatch %s\n", __func__, expected_mr->name);
 		err = -EINVAL;
 	}

+	if (flags & IGT_MMAP_MIGRATE_FAIL_GPU) {
+		struct intel_gt *gt = &i915->gt0;
+
+		i915_ttm_migrate_set_failure_modes(false, false);
+
+		if (!obj->mm.ttm_unknown_state)
+			err = -EINVAL;
+
+		if (test_bit(I915_WEDGED, &gt->reset.flags)) {
+			igt_global_reset_lock(gt);
+			intel_gt_reset(gt, ALL_ENGINES, NULL);
+			igt_global_reset_unlock(gt);
+		} else if (!err) {
+			err = -EINVAL;
+		}
+	}
+
 out_put:
 	i915_gem_object_put(obj);
 	igt_close_objects(i915, &objects);
@@ -1321,6 +1358,23 @@ static int igt_mmap_migrate(void *arg)
 					 IGT_MMAP_MIGRATE_TOPDOWN |
 					 IGT_MMAP_MIGRATE_FILL |
 					 IGT_MMAP_MIGRATE_UNFAULTABLE);
+		if (err)
+			goto out_io_size;
+
+		/*
+		 * Allocate in the non-mappable portion, but force migrating to
+		 * the mappable portion on fault (LMEM -> LMEM). We then also
+		 * simulate a gpu error when moving the pages when faulting the
+		 * pages, which should result in wedging the gpu and returning
+		 * SIGBUS in the fault handler, since we can't fallback to
+		 * memcpy.
+		 */
+		err = __igt_mmap_migrate(single, ARRAY_SIZE(single), mr,
+					 IGT_MMAP_MIGRATE_TOPDOWN |
+					 IGT_MMAP_MIGRATE_FILL |
+					 IGT_MMAP_MIGRATE_EVICTABLE |
+					 IGT_MMAP_MIGRATE_FAIL_GPU |
+					 IGT_MMAP_MIGRATE_UNFAULTABLE);
 out_io_size:
 		mr->io_size = saved_io_size;
 		i915_ttm_buddy_man_force_visible_size(man,
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 0bffb70b3c5f..84f8ccb8d0ea 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -310,7 +310,7 @@ struct i915_vma_work {
 	struct i915_address_space *vm;
 	struct i915_vm_pt_stash stash;
 	struct i915_vma_resource *vma_res;
-	struct drm_i915_gem_object *pinned;
+	struct drm_i915_gem_object *obj;
 	struct i915_sw_dma_fence_cb cb;
 	enum i915_cache_level cache_level;
 	unsigned int flags;
@@ -321,17 +321,25 @@ static void __vma_bind(struct dma_fence_work *work)
 	struct i915_vma_work *vw = container_of(work, typeof(*vw), base);
 	struct i915_vma_resource *vma_res = vw->vma_res;

+	/*
+	 * We are about to bind the object, which must mean we have already
+	 * signaled the work to potentially clear/move the pages underneath. If
+	 * something went wrong at that stage then the object should have
+	 * ttm_unknown_state set, in which case we need to skip the bind.
+	 */
+	if (vw->obj->mm.ttm_unknown_state)
+		return;
+
 	vma_res->ops->bind_vma(vma_res->vm, &vw->stash,
 			       vma_res, vw->cache_level, vw->flags);
-
 }

 static void __vma_release(struct dma_fence_work *work)
 {
 	struct i915_vma_work *vw = container_of(work, typeof(*vw), base);

-	if (vw->pinned)
-		i915_gem_object_put(vw->pinned);
+	if (vw->obj)
+		i915_gem_object_put(vw->obj);

 	i915_vm_free_pt_stash(vw->vm, &vw->stash);
 	if (vw->vma_res)
@@ -517,14 +525,7 @@ int i915_vma_bind(struct i915_vma *vma,
 		}

 		work->base.dma.error = 0; /* enable the queue_work() */
-
-		/*
-		 * If we don't have the refcounted pages list, keep a reference
-		 * on the object to avoid waiting for the async bind to
-		 * complete in the object destruction path.
-		 */
-		if (!work->vma_res->bi.pages_rsgt)
-			work->pinned = i915_gem_object_get(vma->obj);
+		work->obj = i915_gem_object_get(vma->obj);
 	} else {
 		ret = i915_gem_object_wait_moving_fence(vma->obj, true);
 		if (ret) {

From patchwork Tue Jun 21 10:44:33 2022
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 12889020
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Date: Tue, 21 Jun 2022 11:44:33 +0100
Message-Id: <20220621104434.190962-12-matthew.auld@intel.com>
In-Reply-To: <20220621104434.190962-1-matthew.auld@intel.com>
References: <20220621104434.190962-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 11/12] drm/i915: turn on small BAR support
Cc: Thomas Hellström, Daniel Vetter, Kenneth Graunke, dri-devel@lists.freedesktop.org

With the uAPI in place we should now have enough to ensure a working system on small BAR configurations.

v2: (Nirmoy & Thomas):
  - s/full BAR/Resizable BAR/, which is hopefully more easily understood by users.

Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Cc: Lionel Landwerlin
Cc: Tvrtko Ursulin
Cc: Jon Bloomfield
Cc: Daniel Vetter
Cc: Jordan Justen
Cc: Kenneth Graunke
Cc: Akeem G Abodunrin
---
 drivers/gpu/drm/i915/gt/intel_region_lmem.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_region_lmem.c b/drivers/gpu/drm/i915/gt/intel_region_lmem.c
index 2ff448047020..82c3d2d0f0e0 100644
--- a/drivers/gpu/drm/i915/gt/intel_region_lmem.c
+++ b/drivers/gpu/drm/i915/gt/intel_region_lmem.c
@@ -112,12 +112,6 @@ static struct intel_memory_region *setup_lmem(struct intel_gt *gt)
 	flat_ccs_base = intel_gt_mcr_read_any(gt, XEHPSDV_FLAT_CCS_BASE_ADDR);
 	flat_ccs_base = (flat_ccs_base >> XEHPSDV_CCS_BASE_SHIFT) * SZ_64K;

-	/* FIXME: Remove this when we have small-bar enabled */
-	if (pci_resource_len(pdev, 2) < lmem_size) {
-		drm_err(&i915->drm, "System requires small-BAR support, which is currently unsupported on this kernel\n");
-		return ERR_PTR(-EINVAL);
-	}
-
 	if (GEM_WARN_ON(lmem_size < flat_ccs_base))
 		return ERR_PTR(-EIO);

@@ -170,6 +164,10 @@ static struct intel_memory_region *setup_lmem(struct intel_gt *gt)
 	drm_info(&i915->drm, "Local memory available: %pa\n",
 		 &lmem_size);

+	if (io_size < lmem_size)
+		drm_info(&i915->drm, "Using a reduced BAR size of %lluMiB. Consider enabling 'Resizable BAR' or similar, if available in the BIOS.\n",
+			 (u64)io_size >> 20);
+
 	return mem;

 err_region_put:
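Userspace that wants to detect such a configuration can compare the probed sizes from the memory-region query extended earlier in the series. A hedged sketch (the direct field access assumes the anonymous-struct layout from the RFC header; info stands for an already-fetched DRM_I915_QUERY_MEMORY_REGIONS result):

	bool small_bar = false;
	unsigned int i;

	for (i = 0; i < info->num_regions; i++) {
		const struct drm_i915_memory_region_info *r = &info->regions[i];

		if (r->region.memory_class != I915_MEMORY_CLASS_DEVICE)
			continue;

		/* zero means an old kernel without the small BAR uAPI */
		if (r->probed_cpu_visible_size &&
		    r->probed_cpu_visible_size < r->probed_size)
			small_bar = true; /* only part of lmem is CPU-visible */
	}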
From patchwork Tue Jun 21 10:44:34 2022
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 12889021
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
Date: Tue, 21 Jun 2022 11:44:34 +0100
Message-Id: <20220621104434.190962-13-matthew.auld@intel.com>
In-Reply-To: <20220621104434.190962-1-matthew.auld@intel.com>
References: <20220621104434.190962-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 12/12] HAX: force small BAR on dg2

Just for CI.

Signed-off-by: Matthew Auld
---
 drivers/gpu/drm/i915/gt/intel_region_lmem.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_region_lmem.c b/drivers/gpu/drm/i915/gt/intel_region_lmem.c
index 82c3d2d0f0e0..62c3f8185852 100644
--- a/drivers/gpu/drm/i915/gt/intel_region_lmem.c
+++ b/drivers/gpu/drm/i915/gt/intel_region_lmem.c
@@ -138,6 +138,11 @@ static struct intel_memory_region *setup_lmem(struct intel_gt *gt)
 	if (!io_size)
 		return ERR_PTR(-EIO);

+	if (io_size == lmem_size) {
+		drm_info(&i915->drm, "NOTE!! Forcing small BAR for testing\n");
+		io_size = SZ_256M;
+	}
+
 	min_page_size = HAS_64K_PAGES(i915) ? I915_GTT_PAGE_SIZE_64K :
 					      I915_GTT_PAGE_SIZE_4K;
 	mem = intel_memory_region_create(i915,