From patchwork Wed Jan 11 00:56:36 2023
X-Patchwork-Id: 13095819
From: Alan Previn
To: intel-gfx@lists.freedesktop.org
Cc: Juston Li, Daniele Ceraolo Spurio, dri-devel@lists.freedesktop.org,
	Alan Previn
Subject: [PATCH 5/9] drm/i915/pxp: Add GSC-CS backend to send GSC fw messages
Date: Tue, 10 Jan 2023 16:56:36 -0800
Message-Id: <20230111005642.300761-7-alan.previn.teres.alexis@intel.com>
In-Reply-To: <20230111005642.300761-1-alan.previn.teres.alexis@intel.com>
References: <20230111005642.300761-1-alan.previn.teres.alexis@intel.com>

Add GSC engine based method for sending PXP firmware packets
to the GSC firmware for MTL (and future) products. Use the newly
added helpers to populate the GSC-CS memory header and send the
message packet to the FW by dispatching the GSC_HECI_CMD_PKT
instruction on the GSC engine.
Signed-off-by: Alan Previn
---
 drivers/gpu/drm/i915/pxp/intel_pxp_gsccs.c | 92 ++++++++++++++++++++++
 1 file changed, 92 insertions(+)

diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp_gsccs.c b/drivers/gpu/drm/i915/pxp/intel_pxp_gsccs.c
index 97ca187e6fde..84045e18591e 100644
--- a/drivers/gpu/drm/i915/pxp/intel_pxp_gsccs.c
+++ b/drivers/gpu/drm/i915/pxp/intel_pxp_gsccs.c
@@ -6,6 +6,7 @@
 #include "gem/i915_gem_internal.h"
 
 #include "gt/intel_context.h"
+#include "gt/uc/intel_gsc_uc_heci_cmd_submit.h"
 
 #include "i915_drv.h"
 #include "intel_pxp_cmd_interface_43.h"
@@ -39,6 +40,97 @@ static inline struct gsccs_teelink_priv *pxp_to_gsccs_priv(struct intel_pxp *pxp
 	return (struct gsccs_teelink_priv *)pxp->gsccs_priv;
 }
 
+static int gsccs_send_message(struct intel_pxp *pxp,
+			      void *msg_in, size_t msg_in_size,
+			      void *msg_out, size_t msg_out_size_max,
+			      size_t *msg_out_len)
+{
+	struct intel_gt *gt = pxp->ctrl_gt;
+	struct drm_i915_private *i915 = gt->i915;
+	struct gsccs_session_resources *exec = &pxp_to_gsccs_priv(pxp)->arb_exec_res;
+	struct intel_gsc_mtl_header *header = exec->pkt_vaddr;
+	struct intel_gsc_heci_non_priv_pkt pkt;
+	size_t max_msg_size;
+	u32 reply_size;
+	int ret;
+
+	if (!intel_uc_uses_gsc_uc(&gt->uc))
+		return -ENODEV;
+
+	if (!exec->ce)
+		return -ENODEV;
+
+	max_msg_size = PXP43_MAX_HECI_IN_SIZE - sizeof(*header);
+
+	if (msg_in_size > max_msg_size || msg_out_size_max > max_msg_size)
+		return -ENOSPC;
+
+	if (!exec->pkt_vma || !exec->bb_vma)
+		return -ENOENT;
+
+	mutex_lock(&exec->cmd_mutex);
+
+	memset(header, 0, sizeof(*header));
+	intel_gsc_uc_heci_cmd_emit_mtl_header(header, MTL_HECI_CLIENT_PXP, msg_in_size,
+					      exec->host_session_handle, 0);
+
+	memcpy(exec->pkt_vaddr + sizeof(*header), msg_in, msg_in_size);
+
+	pkt.addr_in = i915_vma_offset(exec->pkt_vma);
+	pkt.size_in = header->message_size;
+	pkt.addr_out = pkt.addr_in + PXP43_MAX_HECI_IN_SIZE;
+	pkt.size_out = msg_out_size_max + sizeof(*header);
+	pkt.heci_pkt_vma = exec->pkt_vma;
+	pkt.bb_vma = exec->bb_vma;
+
+	ret = intel_gsc_uc_heci_cmd_submit_nonpriv(&pxp->ctrl_gt->uc.gsc,
+						   exec->ce, &pkt, exec->bb_vaddr, 500);
+	if (ret) {
+		drm_err(&i915->drm, "failed to send gsc PXP msg (%d)\n", ret);
+		goto unlock;
+	}
+
+	/* we keep separate location for reply, so get the response header loc first */
+	header = exec->pkt_vaddr + PXP43_MAX_HECI_IN_SIZE;
+
+	/* Response validity marker, status and busyness */
+	if (header->validity_marker != MTL_HECI_VALIDITY_MARKER) {
+		drm_err(&i915->drm, "gsc PXP reply with invalid validity marker\n");
+		ret = -EINVAL;
+		goto unlock;
+	}
+	if (header->status != 0) {
+		drm_dbg(&i915->drm, "gsc PXP reply status has error = 0x%08x\n",
+			header->status);
+		ret = -EINVAL;
+		goto unlock;
+	}
+	if (header->flags & MTL_GSC_HDR_FLAG_MSG_PENDING) {
+		drm_dbg(&i915->drm, "gsc PXP reply is busy\n");
+		ret = -EAGAIN;
+		goto unlock;
+	}
+
+	reply_size = header->message_size - sizeof(*header);
+	if (reply_size > msg_out_size_max) {
+		drm_warn(&i915->drm, "caller with insufficient PXP reply size %u (%zu)\n",
+			 reply_size, msg_out_size_max);
+		reply_size = msg_out_size_max;
+	} else if (reply_size != msg_out_size_max) {
+		drm_dbg(&i915->drm, "caller unexpected PXP reply size %u (%zu)\n",
+			reply_size, msg_out_size_max);
+	}
+
+	memcpy(msg_out, exec->pkt_vaddr + PXP43_MAX_HECI_IN_SIZE + sizeof(*header),
+	       reply_size);
+	if (msg_out_len)
+		*msg_out_len = reply_size;
+
+unlock:
+	mutex_unlock(&exec->cmd_mutex);
+	return ret;
+}
+
 int intel_pxp_gsccs_create_session(struct intel_pxp *pxp,
 				   int arb_session_id)
 {
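
For readers unfamiliar with the GSC-CS HECI flow, below is a stand-alone,
userspace-only sketch of the buffer discipline that gsccs_send_message()
follows: the request payload is packed behind an MTL header at offset 0 of
the packet buffer, the reply (with its own header) is read back from a fixed
output offset, and the reply header's validity marker, status and pending
flag are checked before the payload is copied out, clamped to the caller's
buffer. The struct layout, constant values and names used here (mtl_header,
HECI_IN_REGION_SIZE, VALIDITY_MARKER, HDR_FLAG_MSG_PENDING) are simplified
stand-ins for the real i915 definitions, not the driver's actual ABI.

/*
 * Stand-alone illustration only (not driver code): the struct layout,
 * constant values and helper names below are simplified stand-ins for the
 * real i915 definitions (struct intel_gsc_mtl_header, PXP43_MAX_HECI_IN_SIZE,
 * MTL_HECI_VALIDITY_MARKER, MTL_GSC_HDR_FLAG_MSG_PENDING).
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define HECI_IN_REGION_SIZE	4096		/* stand-in for PXP43_MAX_HECI_IN_SIZE */
#define VALIDITY_MARKER		0xcbcbcbcbu	/* stand-in for MTL_HECI_VALIDITY_MARKER */
#define HDR_FLAG_MSG_PENDING	(1u << 0)	/* stand-in for MTL_GSC_HDR_FLAG_MSG_PENDING */

struct mtl_header {				/* stand-in for struct intel_gsc_mtl_header */
	uint32_t validity_marker;
	uint32_t status;
	uint32_t flags;
	uint32_t message_size;			/* header + payload, in bytes */
};

/* Pack a message behind a header at the start of the given region. */
static void pack_message(uint8_t *region, const void *payload, uint32_t payload_size)
{
	struct mtl_header *hdr = (struct mtl_header *)region;

	memset(hdr, 0, sizeof(*hdr));
	hdr->validity_marker = VALIDITY_MARKER;
	hdr->message_size = sizeof(*hdr) + payload_size;
	memcpy(region + sizeof(*hdr), payload, payload_size);
}

/* Validate the reply header at the fixed output offset and copy the payload out. */
static int unpack_reply(const uint8_t *pkt, void *msg_out, uint32_t msg_out_max,
			uint32_t *msg_out_len)
{
	const struct mtl_header *hdr =
		(const struct mtl_header *)(pkt + HECI_IN_REGION_SIZE);
	uint32_t reply_size;

	if (hdr->validity_marker != VALIDITY_MARKER)
		return -1;			/* mirrors the driver's -EINVAL path */
	if (hdr->status != 0)
		return -1;			/* firmware-reported error */
	if (hdr->flags & HDR_FLAG_MSG_PENDING)
		return -2;			/* mirrors the driver's -EAGAIN path */

	reply_size = hdr->message_size - sizeof(*hdr);
	if (reply_size > msg_out_max)
		reply_size = msg_out_max;	/* clamp, as the driver warns and clamps */

	memcpy(msg_out, pkt + HECI_IN_REGION_SIZE + sizeof(*hdr), reply_size);
	*msg_out_len = reply_size;
	return 0;
}

int main(void)
{
	/* input region at offset 0, output region at HECI_IN_REGION_SIZE */
	static union {
		uint8_t bytes[2 * HECI_IN_REGION_SIZE];
		struct mtl_header align;
	} buf;
	const char request[] = "request-payload";
	char reply[64];
	uint32_t reply_len;

	pack_message(buf.bytes, request, sizeof(request));

	/* Pretend the firmware answered by writing a reply into the output region. */
	pack_message(buf.bytes + HECI_IN_REGION_SIZE, "reply-payload", sizeof("reply-payload"));

	if (!unpack_reply(buf.bytes, reply, sizeof(reply), &reply_len))
		printf("got %u reply bytes: %s\n", reply_len, reply);
	return 0;
}

Note that gsccs_send_message() additionally serializes callers with
cmd_mutex and submits the packet through intel_gsc_uc_heci_cmd_submit_nonpriv();
the sketch above only covers the memory-layout and reply-validation part.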