From patchwork Wed Dec 10 15:07:09 2014
X-Patchwork-Id: 5469721
From: Dave Gordon <david.s.gordon@intel.com>
To: intel-gfx@lists.freedesktop.org
Date: Wed, 10 Dec 2014 15:07:09 +0000
Message-Id: <1418224029-20055-3-git-send-email-david.s.gordon@intel.com>
In-Reply-To: <1418224029-20055-1-git-send-email-david.s.gordon@intel.com>
References: <1418224029-20055-1-git-send-email-david.s.gordon@intel.com>
Subject: [Intel-gfx] [PATCH 2/2] drm/i915: Track nested calls to intel(_logical)_ring_{begin, advance}

With the current deferred-submission model, if a problem arises part-way
through the insertion of instructions into the ringbuffer (e.g. because one
of the begin() calls finds there is not enough space), we avoid sending the
incomplete sequence to the h/w; but we currently have no means of undoing
the work done so far, which will lead to undefined behaviour when the next
batch is submitted (although TDR will probably trigger a reset first and
clean up the ring state).

A future idea is to move to an atomic-submission model, where all the space
required for a batch submission is reserved up front and, in the event of
failure partway through, the work can be abandoned without side-effects.
This will be required for the forthcoming GPU scheduler (specifically, for
preemption). To support this, we allow nested begin/advance pairs.
Specifically, the outermost pair defines the total space reservation; inner
pairs can be nested ad lib, but all inner reservations at any level must fit
entirely within the outermost one. Thus, this is permitted:

    begin(128)      - guarantees that up to 128 dwords can now be
                      emitted without waiting for more freespace
        begin(6)
        advance
        begin(10)
        advance
        begin(8)
        advance
        etc, as long as the total is no more than 128 dwords
    advance-and-submit

The execbuffer code will later be enhanced to use this approach. In the
mean time, the traditional single-level begin/advance mechanism remains
fully supported.

This commit changes only the begin/advance checking code, to permit (but
not require) nested begin/advance pairs.

Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang.he@intel.com)
---
 drivers/gpu/drm/i915/intel_ringbuffer.h | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index a10d271..71cb3ef 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -416,8 +416,17 @@ static inline void __intel_ringbuffer_begin(struct intel_ringbuffer *ringbuf,
 	WARN_ON(nbytes <= 0);
 
 	if (ringbuf->rsv_level++) {
-		/* begin() called twice or more without advance() */
-		WARN_ON(1);
+		/*
+		 * A nested reservation; check that it falls entirely
+		 * within the outer block. Don't adjust remaining space.
+		 */
+		WARN_ON(ringbuf->rsv_start < 0);
+		WARN_ON(ringbuf->rsv_start & 7);
+		WARN_ON(ringbuf->tail & 7);
+		WARN_ON(ringbuf->tail > ringbuf->effective_size);
+		WARN_ON(ringbuf->tail > ringbuf->rsv_start + ringbuf->rsv_size);
+		WARN_ON(ringbuf->tail + nbytes > ringbuf->effective_size);
+		WARN_ON(ringbuf->tail + nbytes > ringbuf->rsv_start + ringbuf->rsv_size);
 	} else {
 		/*
 		 * A new reservation; validate and record the start and
@@ -437,7 +446,7 @@ static inline void __intel_ringbuffer_begin(struct intel_ringbuffer *ringbuf,
 static inline void __intel_ringbuffer_check(struct intel_ringbuffer *ringbuf)
 {
 #if 1
-	WARN_ON(ringbuf->rsv_level-- != 1);
+	WARN_ON(ringbuf->rsv_level-- <= 0);
 	WARN_ON(ringbuf->rsv_start < 0 || ringbuf->rsv_size < 0);
 	WARN_ON(ringbuf->tail & 7);
 	WARN_ON(ringbuf->tail > ringbuf->rsv_start + ringbuf->rsv_size);
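
[Illustration only, not part of the patch.] The reservation bookkeeping can
be modelled in a few lines of stand-alone C. The sketch below is not driver
code: the names ringbuf_model, model_begin, model_emit and model_advance are
invented for this note, WARN_ON is replaced by assert(), and only the
rsv_level / rsv_start / rsv_size / tail / effective_size fields mirror the
patch. It just shows how the nesting counter and the containment checks
interact for the begin(128) / begin(6) / advance sequence quoted above.

/* Stand-alone model of the nested-reservation checks; not i915 code. */
#include <assert.h>
#include <stdio.h>

struct ringbuf_model {
	int tail;		/* next free byte in the ring           */
	int effective_size;	/* usable bytes in the ring             */
	int rsv_level;		/* nesting depth of begin() calls       */
	int rsv_start;		/* start of the outermost reservation   */
	int rsv_size;		/* size of the outermost reservation    */
};

static void model_begin(struct ringbuf_model *rb, int nbytes)
{
	assert(nbytes > 0);

	if (rb->rsv_level++) {
		/* Nested reservation: must lie entirely inside the outer one */
		assert(rb->tail + nbytes <= rb->rsv_start + rb->rsv_size);
	} else {
		/* Outermost reservation: record its start and size */
		rb->rsv_start = rb->tail;
		rb->rsv_size = nbytes;
		assert(rb->tail + nbytes <= rb->effective_size);
	}
}

static void model_emit(struct ringbuf_model *rb)
{
	/* Emitting one dword advances the tail by 4 bytes */
	rb->tail += 4;
	assert(rb->tail <= rb->rsv_start + rb->rsv_size);
}

static void model_advance(struct ringbuf_model *rb)
{
	/* Every advance must pair with a begin, and the tail must
	 * still be inside the outermost reserved block. */
	assert(rb->rsv_level-- > 0);
	assert(rb->tail <= rb->rsv_start + rb->rsv_size);
}

int main(void)
{
	struct ringbuf_model rb = { .tail = 0, .effective_size = 4096 };
	int i;

	model_begin(&rb, 128 * 4);	/* outer: up to 128 dwords */
	model_begin(&rb, 6 * 4);	/* inner block of 6 dwords */
	for (i = 0; i < 6; i++)
		model_emit(&rb);
	model_advance(&rb);
	model_begin(&rb, 10 * 4);	/* inner block of 10 dwords */
	for (i = 0; i < 10; i++)
		model_emit(&rb);
	model_advance(&rb);
	model_advance(&rb);		/* close the outer reservation */

	printf("tail = %d bytes, rsv_level = %d\n", rb.tail, rb.rsv_level);
	return 0;
}

Run on its own, this model ends with the tail at 64 bytes (16 dwords emitted
inside a 128-dword outer reservation) and rsv_level back at 0; a begin() that
would overrun the outer reservation, or an advance() without a matching
begin(), trips an assert, loosely corresponding to the WARN_ONs added above.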