From patchwork Mon Nov 22 08:22:50 2021
X-Patchwork-Submitter: Thomas Hellström
X-Patchwork-Id: 12631411
From: Thomas Hellström
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: Thomas Hellström, matthew.auld@intel.com
Subject: [PATCH v6 4/6] drm/i915/ttm: Correctly handle waiting for gpu when shrinking
Date: Mon, 22 Nov 2021 09:22:50 +0100
Message-Id: <20211122082252.223689-5-thomas.hellstrom@linux.intel.com>
In-Reply-To: <20211122082252.223689-1-thomas.hellstrom@linux.intel.com>
References: <20211122082252.223689-1-thomas.hellstrom@linux.intel.com>

With async migration, the shrinker may end up wanting to release the
pages of an object while the migration blit is still running, since
the GT migration code doesn't set up VMAs and the shrinker is thus
oblivious to the fact that the GPU is still using the pages.

Add a GPU wait to the shrinker_release_pages() op, and add an argument
to that function indicating whether the shrinker expects it not to wait
for the GPU. In the latter case, the shrinker_release_pages() op returns
-EBUSY if the object is not idle.
Signed-off-by: Thomas Hellström
Reviewed-by: Matthew Auld
---
 drivers/gpu/drm/i915/gem/i915_gem_object_types.h | 1 +
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c     | 1 +
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c          | 7 ++++++-
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 604ed5ad77f5..f9f7e44099fe 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -59,6 +59,7 @@ struct drm_i915_gem_object_ops {
 	int (*truncate)(struct drm_i915_gem_object *obj);
 	void (*writeback)(struct drm_i915_gem_object *obj);
 	int (*shrinker_release_pages)(struct drm_i915_gem_object *obj,
+				      bool no_gpu_wait,
 				      bool should_writeback);
 
 	int (*pread)(struct drm_i915_gem_object *obj,
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
index dde0a5c232f8..8b4b5f3a432a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
@@ -60,6 +60,7 @@ static int try_to_writeback(struct drm_i915_gem_object *obj, unsigned int flags)
 {
 	if (obj->ops->shrinker_release_pages)
 		return obj->ops->shrinker_release_pages(obj,
+							!(flags & I915_SHRINK_ACTIVE),
 							flags & I915_SHRINK_WRITEBACK);
 
 	switch (obj->mm.madv) {
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 350bf1a23db5..e37157b080e4 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -418,6 +418,7 @@ int i915_ttm_purge(struct drm_i915_gem_object *obj)
 }
 
 static int i915_ttm_shrinker_release_pages(struct drm_i915_gem_object *obj,
+					   bool no_wait_gpu,
 					   bool should_writeback)
 {
 	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
@@ -425,7 +426,7 @@ static int i915_ttm_shrinker_release_pages(struct drm_i915_gem_object *obj,
 		container_of(bo->ttm, typeof(*i915_tt), ttm);
 	struct ttm_operation_ctx ctx = {
 		.interruptible = true,
-		.no_wait_gpu = false,
+		.no_wait_gpu = no_wait_gpu,
 	};
 	struct ttm_placement place = {};
 	int ret;
@@ -438,6 +439,10 @@ static int i915_ttm_shrinker_release_pages(struct drm_i915_gem_object *obj,
 	if (!i915_tt->filp)
 		return 0;
 
+	ret = ttm_bo_wait_ctx(bo, &ctx);
+	if (ret)
+		return ret;
+
 	switch (obj->mm.madv) {
 	case I915_MADV_DONTNEED:
 		return i915_ttm_purge(obj);
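
For readers tracing the new control flow, below is a small standalone C
model (not kernel code, and not part of the patch) of how the
no_gpu_wait decision is intended to behave: the shrinker only permits a
GPU wait when it scans with I915_SHRINK_ACTIVE, and the backend refuses
a busy object with -EBUSY when asked not to wait. The flag values and
the release_pages()/object_idle parameters are illustrative stand-ins,
not the real i915/TTM API.

/* Standalone model of the no_gpu_wait decision in this patch.
 * Flag bits and helpers are illustrative only; the real logic lives in
 * i915_gem_shrinker.c and i915_gem_ttm.c as shown in the diff above.
 */
#include <stdio.h>
#include <stdbool.h>
#include <errno.h>

#define I915_SHRINK_ACTIVE    (1u << 0)   /* stand-in flag bit */
#define I915_SHRINK_WRITEBACK (1u << 1)   /* stand-in flag bit */

/* Models i915_ttm_shrinker_release_pages(): refuse to release the pages
 * of a busy object unless the caller allowed a GPU wait. */
static int release_pages(bool object_idle, bool no_gpu_wait)
{
	if (!object_idle && no_gpu_wait)
		return -EBUSY;	/* shrinker skips this object */
	/* otherwise: (wait for idle and) release the pages */
	return 0;
}

int main(void)
{
	unsigned int flags = I915_SHRINK_WRITEBACK;	/* no ACTIVE bit set */
	bool no_gpu_wait = !(flags & I915_SHRINK_ACTIVE);

	/* A busy object under a non-ACTIVE scan is skipped with -EBUSY. */
	printf("busy object, non-ACTIVE scan: %d\n",
	       release_pages(false, no_gpu_wait));

	flags |= I915_SHRINK_ACTIVE;
	no_gpu_wait = !(flags & I915_SHRINK_ACTIVE);

	/* An ACTIVE scan may wait for the GPU, so the release succeeds. */
	printf("busy object, ACTIVE scan: %d\n",
	       release_pages(false, no_gpu_wait));
	return 0;
}

In the patch itself the same decision is delegated to ttm_bo_wait_ctx()
honouring ctx.no_wait_gpu, as shown in the i915_gem_ttm.c hunks above.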