From patchwork Wed Feb 24 10:02:58 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dave Gordon
X-Patchwork-Id: 8407621
From: Dave Gordon
To: intel-gfx@lists.freedesktop.org
Date: Wed, 24 Feb 2016 10:02:58 +0000
Message-Id: <1456308178-22299-1-git-send-email-david.s.gordon@intel.com>
X-Mailer: git-send-email 1.9.1
Organization: Intel Corporation (UK) Ltd. - Co. Reg.
 #1134945 - Pipers Way, Swindon SN3 1RJ
Cc: Ben Widawsky
Subject: [Intel-gfx] [PATCH v5] drm/i915/execlists: Move WA_TAIL_DWORDS to callee
List-Id: Intel graphics driver community testing & development

From: Chris Wilson

Currently, emit-request starts writing to the ring and reserves space
for a workaround that is emitted later, whilst submitting the request.
It is easier to read if the caller allocates only sufficient space for
its own access (the reader can then quickly verify that the ring begin
allocates exactly the space for the number of dwords emitted) and
closes its access to the ring. During submit, if we need to add the
workaround, we can reacquire ring access, in the assurance that we
reserved space for ourselves when beginning the request.

v3: Reinstated #define for WA_TAIL_DWORDS so we can accommodate GPUs
    that require different amounts of padding; generalised NOOP fill
    [Rodrigo Vivi]; added W/A space to the reserved amount and updated
    code comments [Dave Gordon].

v4: Made the #define for WA_TAIL_DWORDS a function-type macro in case
    we want different values on different platforms (or engines), to
    address a comment by [Rodrigo Vivi]. But the current value is still
    the constant (2) on all platforms, so this doesn't add any overhead.

v5: Eliminated local variable & multiple indirections. Added long essay
    comment about WaIdleLiteRestore.
    [Dave Gordon]

Originally-by: Rodrigo Vivi
Rewritten-by: Chris Wilson
Further-tweaked-by: Dave Gordon
Signed-off-by: Dave Gordon
Cc: Rodrigo Vivi
Cc: Chris Wilson
Cc: Ben Widawsky
---
 drivers/gpu/drm/i915/intel_lrc.c | 111 +++++++++++++++++++++++++--------------
 1 file changed, 72 insertions(+), 39 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 3a03646..ac71b13 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -225,6 +225,17 @@ enum {
 #define GEN8_CTX_ID_SHIFT 32
 #define CTX_RCS_INDIRECT_CTX_OFFSET_DEFAULT	0x17
 
+/*
+ * Reserve space for some NOOPs at the end of each request, to be used by
+ * a workaround for not being allowed to do lite restore with HEAD==TAIL
+ * (WaIdleLiteRestore).
+ *
+ * The number of NOOPs is the same constant on all current platforms that
+ * require this, but in theory could be a platform- or engine-specific
+ * value based on the request.
+ */
+#define WA_TAIL_DWORDS(request)	(2)
+
 static int intel_lr_context_pin(struct intel_context *ctx,
 				struct intel_engine_cs *engine);
 static void lrc_setup_hardware_status_page(struct intel_engine_cs *ring,
@@ -457,21 +468,31 @@ static void execlists_context_unqueue(struct intel_engine_cs *ring)
 
 	if (IS_GEN8(ring->dev) || IS_GEN9(ring->dev)) {
 		/*
-		 * WaIdleLiteRestore: make sure we never cause a lite
-		 * restore with HEAD==TAIL
+		 * WaIdleLiteRestore: lite restore must not have HEAD==TAIL.
+		 *
+		 * If a request has previously been submitted (as req1) and
+		 * is now being /re/submitted (as req0), it may actually have
+		 * completed (with HEAD==TAIL), but we don't know that yet.
+		 *
+		 * Unfortunately the hardware requires that we not submit
+		 * a context that is already idle with HEAD==TAIL; but we
+		 * cannot safely check this because there would always be
+		 * an opportunity for a race, where the context /becomes/
+		 * idle after we check and before resubmission.
+		 *
+		 * So instead we increment the request TAIL here to ensure
+		 * that it is different from the last value seen by the
+		 * hardware, in effect always adding extra work to be done
+		 * even if the context has already completed. That work
+		 * consists of NOOPs added by intel_logical_ring_submit()
+		 * after the end of each request. Advancing TAIL turns
+		 * those NOOPs into part of the current request; if not so
+		 * consumed, they remain in the ringbuffer as a harmless
+		 * prefix to the next request.
 		 */
 		if (req0->elsp_submitted) {
-			/*
-			 * Apply the wa NOOPS to prevent ring:HEAD == req:TAIL
-			 * as we resubmit the request. See gen8_emit_request()
-			 * for where we prepare the padding after the end of the
-			 * request.
-			 */
-			struct intel_ringbuffer *ringbuf;
-
-			ringbuf = req0->ctx->engine[ring->id].ringbuf;
 			req0->tail += 8;
-			req0->tail &= ringbuf->size - 1;
+			req0->tail &= req0->ringbuf->size - 1;
 		}
 	}
@@ -752,33 +773,47 @@ static int logical_ring_wait_for_space(struct drm_i915_gem_request *req,
 }
 
 /*
- * intel_logical_ring_advance_and_submit() - advance the tail and submit the workload
- * @request: Request to advance the logical ringbuffer of.
+ * intel_logical_ring_submit() - submit the workload (to GuC or execlist queue)
+ * @request: Request to submit
  *
- * The tail is updated in our logical ringbuffer struct, not in the actual context. What
- * really happens during submission is that the context and current tail will be placed
- * on a queue waiting for the ELSP to be ready to accept a new context submission. At that
- * point, the tail *inside* the context is updated and the ELSP written to.
+ * The tail is updated in our logical ringbuffer struct, not in the actual
+ * context. What really happens during submission is that the context and
+ * current tail will be placed on a queue waiting for the ELSP to be ready
+ * to accept a new context submission. At that point, the tail *inside* the
+ * context is updated and the ELSP written to by the submitting agent, i.e.
+ * either the driver (in execlist mode) or the GuC (in GuC-submission mode).
  */
 static int
-intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
+intel_logical_ring_submit(struct drm_i915_gem_request *request)
 {
 	struct intel_ringbuffer *ringbuf = request->ringbuf;
 	struct drm_i915_private *dev_priv = request->i915;
 	struct intel_engine_cs *engine = request->ring;
+	int padding = WA_TAIL_DWORDS(request);
 
-	intel_logical_ring_advance(ringbuf);
 	request->tail = ringbuf->tail;
 
 	/*
-	 * Here we add two extra NOOPs as padding to avoid
-	 * lite restore of a context with HEAD==TAIL.
-	 *
-	 * Caller must reserve WA_TAIL_DWORDS for us!
+	 * Fill in a few NOOPs after the end of the request proper,
+	 * as a buffer between requests to be used by a workaround
+	 * for not being allowed to do lite restore with HEAD==TAIL
+	 * (WaIdleLiteRestore). These words may be consumed by the
+	 * submission mechanism if a context is *re*submitted while
+	 * (potentially) still active; otherwise, they will be left
+	 * as a harmless preamble to the next request.
 	 */
-	intel_logical_ring_emit(ringbuf, MI_NOOP);
-	intel_logical_ring_emit(ringbuf, MI_NOOP);
-	intel_logical_ring_advance(ringbuf);
+	if (padding > 0) {
+		/* Safe because we reserved the space earlier */
+		int ret = intel_logical_ring_begin(request, padding);
+		if (WARN_ON(ret != 0))
+			return ret;
+
+		do
+			intel_logical_ring_emit(ringbuf, MI_NOOP);
+		while (--padding);
+
+		intel_logical_ring_advance(ringbuf);
+	}
 
 	if (intel_ring_stopped(engine))
 		return 0;
@@ -907,7 +942,8 @@ int intel_logical_ring_reserve_space(struct drm_i915_gem_request *request)
 	 * adding any commands to it then there might not actually be
 	 * sufficient room for the submission commands.
 	 */
-	intel_ring_reserved_space_reserve(request->ringbuf, MIN_SPACE_FOR_ADD_REQUEST);
+	intel_ring_reserved_space_reserve(request->ringbuf,
+		MIN_SPACE_FOR_ADD_REQUEST + WA_TAIL_DWORDS(request));
 
 	return intel_logical_ring_begin(request, 0);
 }
@@ -1875,13 +1911,6 @@ static void bxt_a_set_seqno(struct intel_engine_cs *ring, u32 seqno)
 	intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
 }
 
-/*
- * Reserve space for 2 NOOPs at the end of each request to be
- * used as a workaround for not being allowed to do lite
- * restore with HEAD==TAIL (WaIdleLiteRestore).
- */
-#define WA_TAIL_DWORDS 2
-
 static inline u32 hws_seqno_address(struct intel_engine_cs *engine)
 {
 	return engine->status_page.gfx_addr + I915_GEM_HWS_INDEX_ADDR;
@@ -1892,7 +1921,7 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
 	struct intel_ringbuffer *ringbuf = request->ringbuf;
 	int ret;
 
-	ret = intel_logical_ring_begin(request, 6 + WA_TAIL_DWORDS);
+	ret = intel_logical_ring_begin(request, 6);
 	if (ret)
 		return ret;
 
@@ -1908,7 +1937,9 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
 	intel_logical_ring_emit(ringbuf, i915_gem_request_get_seqno(request));
 	intel_logical_ring_emit(ringbuf, MI_USER_INTERRUPT);
 	intel_logical_ring_emit(ringbuf, MI_NOOP);
-	return intel_logical_ring_advance_and_submit(request);
+	intel_logical_ring_advance(ringbuf);
+
+	return intel_logical_ring_submit(request);
 }
 
 static int gen8_emit_request_render(struct drm_i915_gem_request *request)
@@ -1916,7 +1947,7 @@ static int gen8_emit_request_render(struct drm_i915_gem_request *request)
 	struct intel_ringbuffer *ringbuf = request->ringbuf;
 	int ret;
 
-	ret = intel_logical_ring_begin(request, 6 + WA_TAIL_DWORDS);
+	ret = intel_logical_ring_begin(request, 6);
 	if (ret)
 		return ret;
 
@@ -1933,7 +1964,9 @@ static int gen8_emit_request_render(struct drm_i915_gem_request *request)
 	intel_logical_ring_emit(ringbuf, 0);
 	intel_logical_ring_emit(ringbuf, i915_gem_request_get_seqno(request));
 	intel_logical_ring_emit(ringbuf, MI_USER_INTERRUPT);
-	return intel_logical_ring_advance_and_submit(request);
+	intel_logical_ring_advance(ringbuf);
+
+	return intel_logical_ring_submit(request);
 }
 
 static int intel_lr_context_render_state_init(struct drm_i915_gem_request *req)