From patchwork Tue Jun 21 10:44:30 2022
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, Daniel Vetter, Kenneth Graunke, dri-devel@lists.freedesktop.org
Date: Tue, 21 Jun 2022 11:44:30 +0100
Message-Id: <20220621104434.190962-9-matthew.auld@intel.com>
In-Reply-To: <20220621104434.190962-1-matthew.auld@intel.com>
References: <20220621104434.190962-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH v2 08/12] drm/i915/uapi: tweak error capture on recoverable contexts

A non-recoverable context must be used if the user wants proper error
capture on discrete platforms. In the future the kernel may want to blit
the contents of some objects when later doing the capture stage. Also
extend to newer integrated platforms.

v2 (Thomas):
  - Also extend to newer integrated platforms, for capture buffer memory
    allocation purposes.
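For context (not part of the patch itself): to get error capture of
EXEC_OBJECT_CAPTURE objects on these platforms, userspace has to mark its
context as non-recoverable up front. A minimal, hypothetical sketch using the
existing I915_CONTEXT_PARAM_RECOVERABLE context param (drm_fd and ctx_id are
assumed to be provided by the caller):

    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    /* Hypothetical helper, not part of this patch: mark the context
     * non-recoverable so that EXEC_OBJECT_CAPTURE objects are still
     * captured on discrete (and newer integrated) platforms.
     */
    static int disable_context_recovery(int drm_fd, __u32 ctx_id)
    {
            struct drm_i915_gem_context_param p = {
                    .ctx_id = ctx_id,
                    .param  = I915_CONTEXT_PARAM_RECOVERABLE,
                    .value  = 0,
            };

            return ioctl(drm_fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &p);
    }

With recovery disabled, the new check in eb_capture_stage() is skipped; a
recoverable context submitting EXEC_OBJECT_CAPTURE objects on these platforms
will instead see the execbuf fail with -EINVAL.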
Testcase: igt@gem_exec_capture@capture-recoverable
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Cc: Lionel Landwerlin
Cc: Tvrtko Ursulin
Cc: Jon Bloomfield
Cc: Daniel Vetter
Cc: Jordan Justen
Cc: Kenneth Graunke
Cc: Akeem G Abodunrin
Reviewed-by: Thomas Hellström
Reported-by: kernel test robot
---
 drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 30fe847c6664..6ac88e2193f1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1951,7 +1951,7 @@ eb_find_first_request_added(struct i915_execbuffer *eb)
 #if IS_ENABLED(CONFIG_DRM_I915_CAPTURE_ERROR)
 
 /* Stage with GFP_KERNEL allocations before we enter the signaling critical path */
-static void eb_capture_stage(struct i915_execbuffer *eb)
+static int eb_capture_stage(struct i915_execbuffer *eb)
 {
 	const unsigned int count = eb->buffer_count;
 	unsigned int i = count, j;
@@ -1964,6 +1964,10 @@ static void eb_capture_stage(struct i915_execbuffer *eb)
 		if (!(flags & EXEC_OBJECT_CAPTURE))
 			continue;
 
+		if (i915_gem_context_is_recoverable(eb->gem_context) &&
+		    (IS_DGFX(eb->i915) || GRAPHICS_VER_FULL(eb->i915) > IP_VER(12, 0)))
+			return -EINVAL;
+
 		for_each_batch_create_order(eb, j) {
 			struct i915_capture_list *capture;
 
@@ -1976,6 +1980,8 @@ static void eb_capture_stage(struct i915_execbuffer *eb)
 			eb->capture_lists[j] = capture;
 		}
 	}
+
+	return 0;
 }
 
 /* Commit once we're in the critical path */
@@ -2017,7 +2023,7 @@ static void eb_capture_list_clear(struct i915_execbuffer *eb)
 
 #else
 
-static void eb_capture_stage(struct i915_execbuffer *eb)
+static int eb_capture_stage(struct i915_execbuffer *eb)
 {
 }
 
@@ -3410,7 +3416,9 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 	}
 
 	ww_acquire_done(&eb.ww.ctx);
-	eb_capture_stage(&eb);
+	err = eb_capture_stage(&eb);
+	if (err)
+		goto err_vma;
 
 	out_fence = eb_requests_create(&eb, in_fence, out_fence_fd);
 	if (IS_ERR(out_fence)) {