From patchwork Thu Nov 24 12:09:41 2022
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 13054897
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: stable@vger.kernel.org, Nirmoy Das, Andrzej Hajda, Chris Wilson
Date: Thu, 24 Nov 2022 12:09:41 +0000
Message-Id: <20221124120943.241543-1-matthew.auld@intel.com>
X-Mailer: git-send-email 2.38.1
Subject: [Intel-gfx] [PATCH v4 1/3] drm/i915/migrate: Account for the reserved_space
List-Id: Intel graphics driver community testing & development

From: Chris Wilson

If the ring is nearly full when calling into emit_pte(), we might
incorrectly trample the reserved_space when constructing the packet to
emit the PTEs. This then triggers the
GEM_BUG_ON(rq->reserved_space > ring->space) when later submitting the
request, since the request itself doesn't have enough space left in the
ring to emit things like workarounds, breadcrumbs, etc.
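To make the accounting concrete, here is a small standalone sketch. It
is purely illustrative and not part of the patch: the struct layouts
and numbers are invented, and the driver's +5 dword header allowance is
dropped for simplicity. With a nearly full ring, clamping the packet
against ring->space alone lets the PTE payload spill into the bytes
reserved for the request's postamble, which is exactly what trips the
GEM_BUG_ON above:

/* Illustrative only: simplified stand-ins for the i915 ring bookkeeping. */
#include <stdio.h>

typedef unsigned int u32;

struct ring { int space; };             /* bytes still free in the ring */
struct request { int reserved_space; }; /* bytes set aside for the postamble */

static int min_int(int a, int b) { return a < b ? a : b; }

int main(void)
{
	struct ring ring = { .space = 200 };
	struct request rq = { .reserved_space = 160 };
	int dword_length = 512;         /* PTE dwords we would like to emit */

	/* Old clamp: ignores the reservation... */
	int old_pkt = min_int(dword_length, ring.space / (int)sizeof(u32));
	/* ...so 50 dwords == 200 bytes, trampling all 160 reserved bytes. */

	/* New clamp: only hand out the space beyond the reservation... */
	int new_pkt = min_int(dword_length,
			      (ring.space - rq.reserved_space) / (int)sizeof(u32));
	/* ...so 10 dwords == 40 bytes, and the reservation stays intact. */

	printf("old pkt = %d dwords, new pkt = %d dwords\n", old_pkt, new_pkt);
	return 0;
}

Compiled as plain C, this prints "old pkt = 50 dwords, new pkt = 10
dwords": the old clamp hands out all 200 free bytes, overlapping the
160 reserved ones, while the new clamp stops at the 40 bytes that are
genuinely available.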
Testcase: igt@i915_selftests@live_emit_pte_full_ring
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/7535
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6889
Fixes: cf586021642d ("drm/i915/gt: Pipelined page migration")
Signed-off-by: Chris Wilson
Signed-off-by: Matthew Auld
Cc: Andrzej Hajda
Cc: Andi Shyti
Cc: Nirmoy Das
Cc: stable@vger.kernel.org # v5.15+
Tested-by: Nirmoy Das
Reviewed-by: Nirmoy Das
---
 drivers/gpu/drm/i915/gt/intel_migrate.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c
index b405a04135ca..48c3b5168558 100644
--- a/drivers/gpu/drm/i915/gt/intel_migrate.c
+++ b/drivers/gpu/drm/i915/gt/intel_migrate.c
@@ -342,6 +342,16 @@ static int emit_no_arbitration(struct i915_request *rq)
 	return 0;
 }
 
+static int max_pte_pkt_size(struct i915_request *rq, int pkt)
+{
+	struct intel_ring *ring = rq->ring;
+
+	pkt = min_t(int, pkt, (ring->space - rq->reserved_space) / sizeof(u32) + 5);
+	pkt = min_t(int, pkt, (ring->size - ring->emit) / sizeof(u32) + 5);
+
+	return pkt;
+}
+
 static int emit_pte(struct i915_request *rq,
 		    struct sgt_dma *it,
 		    enum i915_cache_level cache_level,
@@ -388,8 +398,7 @@ static int emit_pte(struct i915_request *rq,
 		return PTR_ERR(cs);
 
 	/* Pack as many PTE updates as possible into a single MI command */
-	pkt = min_t(int, dword_length, ring->space / sizeof(u32) + 5);
-	pkt = min_t(int, pkt, (ring->size - ring->emit) / sizeof(u32) + 5);
+	pkt = max_pte_pkt_size(rq, dword_length);
 
 	hdr = cs;
 	*cs++ = MI_STORE_DATA_IMM | REG_BIT(21); /* as qword elements */
@@ -422,8 +431,7 @@ static int emit_pte(struct i915_request *rq,
 		}
 	}
 
-	pkt = min_t(int, dword_rem, ring->space / sizeof(u32) + 5);
-	pkt = min_t(int, pkt, (ring->size - ring->emit) / sizeof(u32) + 5);
+	pkt = max_pte_pkt_size(rq, dword_rem);
 
 	hdr = cs;
 	*cs++ = MI_STORE_DATA_IMM | REG_BIT(21);
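
A note on the two clamps in the new helper. The model below is a
simplified standalone restatement for illustration, not the driver
code: the first min_t() keeps the packet from eating into
rq->reserved_space, and the second (as we read it) keeps the contiguous
MI_STORE_DATA_IMM payload from running past the end of the ring buffer,
since a single packet cannot cross the wrap point.

/* Simplified standalone model of max_pte_pkt_size(); illustrative only. */
typedef unsigned int u32;

struct ring { int space, size, emit; };
struct request { int reserved_space; struct ring *ring; };

static int min_int(int a, int b) { return a < b ? a : b; }

static int max_pte_pkt_size_model(const struct request *rq, int pkt)
{
	const struct ring *ring = rq->ring;

	/* Leave rq->reserved_space untouched for the request postamble. */
	pkt = min_int(pkt, (ring->space - rq->reserved_space) / (int)sizeof(u32) + 5);
	/* The packet is contiguous: it must fit before the ring buffer wraps. */
	pkt = min_int(pkt, (ring->size - ring->emit) / (int)sizeof(u32) + 5);

	return pkt;
}

For example, with ring->space = 480, rq->reserved_space = 160,
ring->size = 4096 and ring->emit = 4000, the first clamp would allow
(480 - 160) / 4 + 5 = 85 dwords, but the second only
(4096 - 4000) / 4 + 5 = 29, so a 512-dword request is cut down to 29.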