From patchwork Wed May 26 06:42:33 2021
From: Matthew Brost <matthew.brost@intel.com>
To: dri-devel@lists.freedesktop.org
Cc: Michal.Wajdeczko@intel.com
Subject: [PATCH 14/18] drm/i915/guc: Start protecting access to CTB descriptors
Date: Tue, 25 May 2021 23:42:33 -0700
Message-Id: <20210526064237.77853-15-matthew.brost@intel.com>
In-Reply-To: <20210526064237.77853-1-matthew.brost@intel.com>
References: <20210526064237.77853-1-matthew.brost@intel.com>

From: Michal Wajdeczko <Michal.Wajdeczko@intel.com>

We want to stop using guc.send_mutex while sending CTB messages, so we
have to start protecting access to the CTB send descriptor. For
completeness, also protect the CTB receive descriptor.

Add a spinlock to struct intel_guc_ct_buffer and start using it.
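As a rough illustration of the locking scheme this introduces, here is a
minimal sketch with simplified names (not the driver code itself;
do_write() is a hypothetical stand-in for the real descriptor write):

/* Sketch only: one spinlock per CT buffer guarding desc and cmds */
struct ctb {
	spinlock_t lock;		/* protects desc and cmds below */
	struct ctb_desc *desc;		/* shared buffer descriptor */
	u32 *cmds;			/* shared commands buffer */
};

static void ctb_init_early(struct ctb *ctb)
{
	spin_lock_init(&ctb->lock);	/* before any send/recv path can run */
}

static int ctb_send(struct ctb *ctb, const u32 *action, u32 len)
{
	unsigned long flags;
	int err;

	/* irqsave variant: the receive side may run from interrupt context */
	spin_lock_irqsave(&ctb->lock, flags);
	err = do_write(ctb, action, len);	/* hypothetical helper */
	spin_unlock_irqrestore(&ctb->lock, flags);

	return err;
}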
Signed-off-by: Michal Wajdeczko <Michal.Wajdeczko@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 14 ++++++++++++--
 drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h |  2 ++
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
index 5b0dece7a7cd..fb875d257536 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
@@ -89,6 +89,8 @@ static void ct_incoming_request_worker_func(struct work_struct *w);
  */
 void intel_guc_ct_init_early(struct intel_guc_ct *ct)
 {
+	spin_lock_init(&ct->ctbs.send.lock);
+	spin_lock_init(&ct->ctbs.recv.lock);
 	spin_lock_init(&ct->requests.lock);
 	INIT_LIST_HEAD(&ct->requests.pending);
 	INIT_LIST_HEAD(&ct->requests.incoming);
@@ -476,17 +478,22 @@ static int ct_send(struct intel_guc_ct *ct,
 	GEM_BUG_ON(len & ~GUC_CT_MSG_LEN_MASK);
 	GEM_BUG_ON(!response_buf && response_buf_size);
 
+	spin_lock_irqsave(&ct->ctbs.send.lock, flags);
+
 	fence = ct_get_next_fence(ct);
 	request.fence = fence;
 	request.status = 0;
 	request.response_len = response_buf_size;
 	request.response_buf = response_buf;
 
-	spin_lock_irqsave(&ct->requests.lock, flags);
+	spin_lock(&ct->requests.lock);
 	list_add_tail(&request.link, &ct->requests.pending);
-	spin_unlock_irqrestore(&ct->requests.lock, flags);
+	spin_unlock(&ct->requests.lock);
 
 	err = ct_write(ct, action, len, fence);
+
+	spin_unlock_irqrestore(&ct->ctbs.send.lock, flags);
+
 	if (unlikely(err))
 		goto unlink;
 
@@ -822,6 +829,7 @@ static int ct_handle_request(struct intel_guc_ct *ct, const u32 *msg)
 void intel_guc_ct_event_handler(struct intel_guc_ct *ct)
 {
 	u32 msg[GUC_CT_MSG_LEN_MASK + 1]; /* one extra dw for the header */
+	unsigned long flags;
 	int err = 0;
 
 	if (unlikely(!ct->enabled)) {
@@ -830,7 +838,9 @@ void intel_guc_ct_event_handler(struct intel_guc_ct *ct)
 	}
 
 	do {
+		spin_lock_irqsave(&ct->ctbs.recv.lock, flags);
 		err = ct_read(ct, msg);
+		spin_unlock_irqrestore(&ct->ctbs.recv.lock, flags);
 		if (err)
 			break;
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
index fc9486779e87..bc52dc479a14 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
@@ -27,11 +27,13 @@ struct intel_guc;
  * record (command transport buffer descriptor) and the actual buffer which
  * holds the commands.
  *
+ * @lock: protects access to the commands buffer and buffer descriptor
  * @desc: pointer to the buffer descriptor
  * @cmds: pointer to the commands buffer
  * @size: size of the commands buffer
  */
 struct intel_guc_ct_buffer {
+	spinlock_t lock;
 	struct guc_ct_buffer_desc *desc;
 	u32 *cmds;
 	u32 size;
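For review context, note the lock nesting that results in ct_send(): the
new ctbs.send.lock becomes the outer lock and is taken with
spin_lock_irqsave(), so the inner requests.lock no longer needs its own
irqsave/irqrestore pair; the plain spin_lock()/spin_unlock() variant is
enough while interrupts are already disabled. Condensed from the hunk
above (not new code, context elided):

	spin_lock_irqsave(&ct->ctbs.send.lock, flags);	/* outer, irqs off */

	fence = ct_get_next_fence(ct);	/* fence allocation now serialized too */
	...
	spin_lock(&ct->requests.lock);	/* inner: plain variant suffices */
	list_add_tail(&request.link, &ct->requests.pending);
	spin_unlock(&ct->requests.lock);

	err = ct_write(ct, action, len, fence);	/* descriptor access under lock */

	spin_unlock_irqrestore(&ct->ctbs.send.lock, flags);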