From patchwork Fri Sep 30 13:30:51 2022
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 12995412
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Date: Fri, 30 Sep 2022 14:30:51 +0100
Message-Id: <20220930133052.387165-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH 1/2] drm/i915/display: consider DG2_RC_CCS_CC when migrating buffers
Cc: Nirmoy Das, Jianshui Yu

For DG2_RC_CCS_CC display buffers, we need to be able to CPU access
some part of the backing memory in prepare_plane_clear_colors(). As a
result we need to ensure we always place such buffers in the mappable
part of lmem, which becomes necessary on small-bar systems.
Fixes: eb1c535f0d69 ("drm/i915: turn on small BAR support")
Reported-by: Jianshui Yu
Signed-off-by: Matthew Auld
Cc: Ville Syrjälä
Cc: Nirmoy Das
---
 drivers/gpu/drm/i915/display/intel_fb_pin.c   | 11 ++++--
 drivers/gpu/drm/i915/gem/i915_gem_object.c    | 37 ++++++++++++++++++-
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  4 ++
 .../gpu/drm/i915/gem/i915_gem_object_types.h  |  3 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c       |  5 ++-
 5 files changed, 53 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_fb_pin.c b/drivers/gpu/drm/i915/display/intel_fb_pin.c
index c86e5d4ee016..f83cf41ddd63 100644
--- a/drivers/gpu/drm/i915/display/intel_fb_pin.c
+++ b/drivers/gpu/drm/i915/display/intel_fb_pin.c
@@ -139,9 +139,14 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
 	ret = i915_gem_object_lock(obj, &ww);
 	if (!ret && phys_cursor)
 		ret = i915_gem_object_attach_phys(obj, alignment);
-	else if (!ret && HAS_LMEM(dev_priv))
-		ret = i915_gem_object_migrate(obj, &ww, INTEL_REGION_LMEM_0);
-	/* TODO: Do we need to sync when migration becomes async? */
+	else if (!ret && HAS_LMEM(dev_priv)) {
+		unsigned int flags = obj->flags;
+
+		if (intel_fb_rc_ccs_cc_plane(fb) >= 0)
+			flags &= ~I915_BO_ALLOC_GPU_ONLY;
+		ret = __i915_gem_object_migrate(obj, &ww, INTEL_REGION_LMEM_0,
+						flags);
+	}
 	if (!ret)
 		ret = i915_gem_object_pin_pages(obj);
 	if (ret)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 7ff9c7877bec..369006c5317f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -652,6 +652,41 @@ bool i915_gem_object_can_migrate(struct drm_i915_gem_object *obj,
 int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
 			    struct i915_gem_ww_ctx *ww,
 			    enum intel_region_id id)
+{
+	return __i915_gem_object_migrate(obj, ww, id, obj->flags);
+}
+
+/**
+ * __i915_gem_object_migrate - Migrate an object to the desired region id, with
+ * control of the extra flags
+ * @obj: The object to migrate.
+ * @ww: An optional struct i915_gem_ww_ctx. If NULL, the backend may
+ * not be successful in evicting other objects to make room for this object.
+ * @id: The region id to migrate to.
+ * @flags: The object flags. Normally just obj->flags.
+ *
+ * Attempt to migrate the object to the desired memory region. The
+ * object backend must support migration and the object may not be
+ * pinned (explicitly pinned pages or pinned vmas). The object must
+ * be locked.
+ * On successful completion, the object will have pages pointing to
+ * memory in the new region, but an async migration task may not have
+ * completed yet, and to accomplish that, i915_gem_object_wait_migration()
+ * must be called.
+ *
+ * Note: the @ww parameter is not used yet, but included to make sure
+ * callers put some effort into obtaining a valid ww ctx if one is
+ * available.
+ *
+ * Return: 0 on success. Negative error code on failure. In particular may
+ * return -ENXIO on lack of region space, -EDEADLK for deadlock avoidance
+ * if @ww is set, -EINTR or -ERESTARTSYS if signal pending, and
+ * -EBUSY if the object is pinned.
+ */
+int __i915_gem_object_migrate(struct drm_i915_gem_object *obj,
+			      struct i915_gem_ww_ctx *ww,
+			      enum intel_region_id id,
+			      unsigned int flags)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	struct intel_memory_region *mr;
@@ -672,7 +707,7 @@ int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
 		return 0;
 	}
 
-	return obj->ops->migrate(obj, mr);
+	return obj->ops->migrate(obj, mr, flags);
 }
 
 /**

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index a3b7551a57fc..6b9ecff42bb5 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -612,6 +612,10 @@ bool i915_gem_object_migratable(struct drm_i915_gem_object *obj);
 int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
 			    struct i915_gem_ww_ctx *ww,
 			    enum intel_region_id id);
+int __i915_gem_object_migrate(struct drm_i915_gem_object *obj,
+			      struct i915_gem_ww_ctx *ww,
+			      enum intel_region_id id,
+			      unsigned int flags);
 
 bool i915_gem_object_can_migrate(struct drm_i915_gem_object *obj,
 				 enum intel_region_id id);

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 40305e2bcd49..d0d6772e6f36 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -107,7 +107,8 @@ struct drm_i915_gem_object_ops {
 	 * pinning or for as long as the object lock is held.
 	 */
 	int (*migrate)(struct drm_i915_gem_object *obj,
-		       struct intel_memory_region *mr);
+		       struct intel_memory_region *mr,
+		       unsigned int flags);
 
 	void (*release)(struct drm_i915_gem_object *obj);

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 3dc6acfcf4ec..5bed353ee9bc 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -848,9 +848,10 @@ static int __i915_ttm_migrate(struct drm_i915_gem_object *obj,
 }
 
 static int i915_ttm_migrate(struct drm_i915_gem_object *obj,
-			    struct intel_memory_region *mr)
+			    struct intel_memory_region *mr,
+			    unsigned int flags)
 {
-	return __i915_ttm_migrate(obj, mr, obj->flags);
+	return __i915_ttm_migrate(obj, mr, flags);
 }
 
 static void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
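For reference, the intended calling pattern for the new entry point, combining
the object lock, the flag override and the post-migration wait described in
the kerneldoc above, looks roughly as follows. This is an illustrative sketch
only, not part of the patch: the helper name force_mappable_migrate() is made
up, and passing 0 as the wait_migration flags is an assumption.

/*
 * Illustrative sketch (not part of the patch): migrate an object into
 * CPU-mappable lmem by masking off I915_BO_ALLOC_GPU_ONLY, mirroring what
 * the new intel_pin_and_fence_fb_obj() path does for DG2_RC_CCS_CC
 * framebuffers. The helper name is hypothetical.
 */
static int force_mappable_migrate(struct drm_i915_gem_object *obj,
				  struct i915_gem_ww_ctx *ww)
{
	/* Drop GPU_ONLY so the backend must pick a CPU-visible placement. */
	unsigned int flags = obj->flags & ~I915_BO_ALLOC_GPU_ONLY;
	int ret;

	/* The caller is expected to already hold the object lock via @ww. */
	ret = __i915_gem_object_migrate(obj, ww, INTEL_REGION_LMEM_0, flags);
	if (ret)
		return ret;

	/*
	 * Migration may complete asynchronously; per the kerneldoc, wait
	 * before touching the pages through the CPU. Flags assumed 0 here.
	 */
	return i915_gem_object_wait_migration(obj, 0);
}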
From patchwork Fri Sep 30 13:30:52 2022
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 12995413
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Date: Fri, 30 Sep 2022 14:30:52 +0100
Message-Id: <20220930133052.387165-2-matthew.auld@intel.com>
In-Reply-To: <20220930133052.387165-1-matthew.auld@intel.com>
References: <20220930133052.387165-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH 2/2] drm/i915: check memory is mappable in read_from_page
Cc: Nirmoy Das, Jianshui Yu

On small-bar systems we could be given something non-mappable here,
which leads to a nasty oops. Make this nicer by checking whether the
resource is mappable, and returning an error otherwise.

Signed-off-by: Matthew Auld
Cc: Jianshui Yu
Cc: Ville Syrjälä
Cc: Nirmoy Das
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 369006c5317f..0a3dbb08376a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -444,6 +444,16 @@ i915_gem_object_read_from_page_iomap(struct drm_i915_gem_object *obj, u64 offset
 	io_mapping_unmap(src_map);
 }
 
+static bool object_has_mappable_iomem(struct drm_i915_gem_object *obj)
+{
+	GEM_BUG_ON(!i915_gem_object_has_iomem(obj));
+
+	if (IS_DGFX(to_i915(obj->base.dev)))
+		return i915_ttm_resource_mappable(i915_gem_to_ttm(obj)->resource);
+
+	return true;
+}
+
 /**
  * i915_gem_object_read_from_page - read data from the page of a GEM object
  * @obj: GEM object to read from
@@ -463,10 +473,11 @@ int i915_gem_object_read_from_page(struct drm_i915_gem_object *obj, u64 offset,
 	GEM_BUG_ON(offset >= obj->base.size);
 	GEM_BUG_ON(offset_in_page(offset) > PAGE_SIZE - size);
 	GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj));
+	GEM_BUG_ON(obj->flags & I915_BO_ALLOC_GPU_ONLY);
 
 	if (i915_gem_object_has_struct_page(obj))
 		i915_gem_object_read_from_page_kmap(obj, offset, dst, size);
-	else if (i915_gem_object_has_iomem(obj))
+	else if (i915_gem_object_has_iomem(obj) && object_has_mappable_iomem(obj))
 		i915_gem_object_read_from_page_iomap(obj, offset, dst, size);
 	else
 		return -ENODEV;
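For context, a caller-side sketch of what this change buys us: the CPU read
now fails cleanly with -ENODEV instead of oopsing when the object sits in
non-mappable lmem. Illustrative only, not part of the patch; the helper name
read_clear_color_dw() is hypothetical, and the trailing dst/size parameters
of i915_gem_object_read_from_page() follow the existing upstream signature.

/*
 * Illustrative sketch (not part of the patch): read one dword from a
 * pinned GEM object and surface the new -ENODEV failure mode to the
 * caller rather than assuming the read can always be serviced.
 */
static int read_clear_color_dw(struct drm_i915_gem_object *obj,
			       u64 offset, u32 *out)
{
	int ret;

	/* Pages must already be pinned; read_from_page asserts this. */
	ret = i915_gem_object_read_from_page(obj, offset, out, sizeof(*out));

	/*
	 * -ENODEV now also covers iomem that is not CPU-mappable (e.g.
	 * lmem beyond the small BAR). The remedy is to migrate the object
	 * to mappable memory first, as patch 1 does for DG2_RC_CCS_CC
	 * framebuffers, not to retry the read.
	 */
	return ret;
}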