From patchwork Mon Feb 26 18:52:43 2024
X-Patchwork-Submitter: Imre Deak
X-Patchwork-Id: 13572697
From: Imre Deak
To: intel-gfx@lists.freedesktop.org
Cc: Ville Syrjälä, Uma Shankar
Subject: [PATCH v3 01/21] drm/dp: Add drm_dp_max_dprx_data_rate()
Date: Mon, 26 Feb 2024 20:52:43 +0200
Message-Id: <20240226185246.1276018-1-imre.deak@intel.com>
In-Reply-To: <20240220211841.448846-2-imre.deak@intel.com>
References: <20240220211841.448846-2-imre.deak@intel.com>

Copy intel_dp_max_data_rate() to DRM core. It will be needed by a
follow-up DP tunnel patch, which checks the maximum rate the DPRX
(sink) supports. Accordingly, use the drm_dp_max_dprx_data_rate() name
for clarity. A later patch in this set switches i915 over to the new
DRM function, replacing intel_dp_max_data_rate().

While at it simplify the function documentation/comments, removing
parts already described by drm_dp_bw_channel_coding_efficiency().

v2: (Ville)
- Remove max_link_rate_kbps.
- Simplify the function documentation.

v3:
- Rebased on latest drm-tip.
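
As an illustrative (hypothetical) usage sketch, a caller converts the
DPRX link parameters to the maximum payload rate like this:

	/* 4-lane HBR3 link: rate of 810000 in 10 kbps units (8.1 Gbps) */
	int max_rate = drm_dp_max_dprx_data_rate(810000, 4);
	/* 8100000 kbps * 4 * 0.8 (8b/10b efficiency) / 8 = 3240000 kB/s */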
Cc: Ville Syrjälä
Reviewed-by: Uma Shankar
Reviewed-by: Ville Syrjälä
Signed-off-by: Imre Deak
---
 drivers/gpu/drm/display/drm_dp_helper.c | 30 +++++++++++++++++++++++++
 include/drm/display/drm_dp_helper.h     |  1 +
 2 files changed, 31 insertions(+)

diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
index 9ac52cf5d4d87..314509d999f14 100644
--- a/drivers/gpu/drm/display/drm_dp_helper.c
+++ b/drivers/gpu/drm/display/drm_dp_helper.c
@@ -4152,3 +4152,33 @@ int drm_dp_bw_channel_coding_efficiency(bool is_uhbr)
 	return 800000;
 }
 EXPORT_SYMBOL(drm_dp_bw_channel_coding_efficiency);
+
+/**
+ * drm_dp_max_dprx_data_rate - Get the max data bandwidth of a DPRX sink
+ * @max_link_rate: max DPRX link rate in 10kbps units
+ * @max_lanes: max DPRX lane count
+ *
+ * Given a link rate and lanes, get the data bandwidth.
+ *
+ * Data bandwidth is the actual payload rate, which depends on the data
+ * bandwidth efficiency and the link rate.
+ *
+ * Note that protocol layers above the DPRX link level considered here can
+ * further limit the maximum data rate. Such layers are the MST topology (with
+ * limits on the link between the source and first branch device as well as on
+ * the whole MST path until the DPRX link) and (Thunderbolt) DP tunnels -
+ * which in turn can encapsulate an MST link with its own limit - with each
+ * SST or MST encapsulated tunnel sharing the BW of a tunnel group.
+ *
+ * Returns the maximum data rate in kBps units.
+ */
+int drm_dp_max_dprx_data_rate(int max_link_rate, int max_lanes)
+{
+	int ch_coding_efficiency =
+		drm_dp_bw_channel_coding_efficiency(drm_dp_is_uhbr_rate(max_link_rate));
+
+	return DIV_ROUND_DOWN_ULL(mul_u32_u32(max_link_rate * 10 * max_lanes,
+					      ch_coding_efficiency),
+				  1000000 * 8);
+}
+EXPORT_SYMBOL(drm_dp_max_dprx_data_rate);
diff --git a/include/drm/display/drm_dp_helper.h b/include/drm/display/drm_dp_helper.h
index 0c1a4021e098e..91fb404dd5310 100644
--- a/include/drm/display/drm_dp_helper.h
+++ b/include/drm/display/drm_dp_helper.h
@@ -813,6 +813,7 @@ int drm_dp_bw_overhead(int lane_count, int hactive,
 		       int dsc_slice_count,
 		       int bpp_x16, unsigned long flags);
 int drm_dp_bw_channel_coding_efficiency(bool is_uhbr);
+int drm_dp_max_dprx_data_rate(int max_link_rate, int max_lanes);
 ssize_t drm_dp_vsc_sdp_pack(const struct drm_dp_vsc_sdp *vsc,
 			    struct dp_sdp *sdp);

From patchwork Mon Feb 26 18:52:44 2024
X-Patchwork-Submitter: Imre Deak
X-Patchwork-Id: 13572699
From: Imre Deak
To: intel-gfx@lists.freedesktop.org
Cc: Ville Syrjälä, Uma Shankar
Subject: [PATCH v3 02/21] drm/dp: Add support for DP tunneling
Date: Mon, 26 Feb 2024 20:52:44 +0200
Message-Id: <20240226185246.1276018-2-imre.deak@intel.com>
In-Reply-To: <20240220211841.448846-3-imre.deak@intel.com>
References: <20240220211841.448846-3-imre.deak@intel.com>

Add support for DisplayPort tunneling. For now this includes support
for the Bandwidth Allocation Mode (BWA), leaving Panel Replay support
for later.

BWA allows displays sharing the same (Thunderbolt) link to be used with
their maximum resolution. At the moment this may not be possible due to
the coarse granularity of partitioning the link BW among the displays
on the link: the BW allocation policy is in a SW/FW/HW component on the
link (on Thunderbolt it's the SW or FW Connection Manager), independent
of the driver. This policy sets the DPRX maximum rate and lane count
DPCD registers the GFX driver will see (0x00000, 0x00001, 0x02200,
0x02201) based on the available link BW.

The granularity of the current BW allocation policy is coarse, based on
the required link rate in the 1.62 Gbps..8.1 Gbps range, and it may
prevent using higher resolutions altogether: the display connected
first will get a share of the link BW which corresponds to its full
DPRX capability (regardless of the actual mode it uses). A subsequent
display connected will only get the remaining BW, which could be well
below its full capability.

BWA solves the above coarse granularity (reducing it to a
250 Mb/s..1 Gb/s range) and first-come/first-served issues by letting
the driver request the BW for each display on a link which reflects the
actual modes the displays use.

This patch adds the DRM core helper functions, while a follow-up change
in the patchset takes them into use in the i915 driver.

v2:
- Fix prepare_to_wait vs. wake-up cond check order in
  allocate_tunnel_bw(). (Ville)
- Move tunnel==NULL checks from callers in drivers to here. (Ville)
- Avoid var inits in declaration blocks that can fail or have
  side-effects.
  (Ville)
- Use u8 for driver and group IDs. (Ville)
- Simplify API removing drm_dp_tunnel_get/put_untracked(). (Ville)
- Reuse str_yes_no() instead of a local yes_no_chr(). (Ville)
- s/drm_dp_tunnel_atomic_clear_state()/free_tunnel_state() and unexport
  the function. (Ville)
- s/clear_tunnel_group_state()/free_group_state() and move kfree() to
  this function. (Ville)
- Add separate group_free_bw() helper and describe what the tunnel
  estimated BW includes. (Ville)
- Improve help text for CONFIG_DRM_DISPLAY_DP_TUNNEL. (Ville)
- Add code comment explaining the purpose of DPCD reg read helpers. (Ville)
- Add code comment describing the tunnel group name prefix format. (Ville)
- Report the allocated BW as undetermined until the first allocation
  request.
- Skip allocation requests matching the previous request.
- Clear any stale BW request status flags before a new request.
- Add missing error return check of drm_dp_tunnel_atomic_get_group_state()
  in drm_dp_tunnel_atomic_set_stream_bw().
- Add drm_dp_tunnel_get_allocated_bw().
- s/drm_dp_tunnel_atomic_get_tunnel_bw/drm_dp_tunnel_atomic_get_required_bw
- Fix return value description in function doc of drm_dp_tunnel_detect().
- Add function documentation to all exported functions.

v3:
- Improve grouping of fields in drm_dp_tunnel_group struct. (Uma)
- Fix validating the BW granularity DPCD reg value. (Uma)
- Document return value of check_and_clear_status_change(). (Uma)
- Fix resetting drm_dp_tunnel_ref::tunnel in drm_dp_tunnel_ref_put().
  (Ville)
- Allow for ALLOCATED_BW to change after a BWA enable/disable sequence.

Cc: Ville Syrjälä
Reviewed-by: Uma Shankar
Reviewed-by: Ville Syrjälä
Signed-off-by: Imre Deak
---
 drivers/gpu/drm/display/Kconfig         |   21 +
 drivers/gpu/drm/display/Makefile        |    2 +
 drivers/gpu/drm/display/drm_dp_tunnel.c | 1949 +++++++++++++++++++++++
 include/drm/display/drm_dp.h            |   60 +
 include/drm/display/drm_dp_tunnel.h     |  249 +++
 5 files changed, 2281 insertions(+)
 create mode 100644 drivers/gpu/drm/display/drm_dp_tunnel.c
 create mode 100644 include/drm/display/drm_dp_tunnel.h

diff --git a/drivers/gpu/drm/display/Kconfig b/drivers/gpu/drm/display/Kconfig
index 09712b88a5b83..c0f56888c3280 100644
--- a/drivers/gpu/drm/display/Kconfig
+++ b/drivers/gpu/drm/display/Kconfig
@@ -17,6 +17,27 @@ config DRM_DISPLAY_DP_HELPER
 	help
 	  DRM display helpers for DisplayPort.
 
+config DRM_DISPLAY_DP_TUNNEL
+	bool
+	select DRM_DISPLAY_DP_HELPER
+	help
+	  Enable support for DisplayPort tunnels. This allows drivers to use
+	  DP tunnel features like the Bandwidth Allocation mode to maximize the
+	  BW utilization for display streams on Thunderbolt links.
+
+config DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE
+	bool "Enable debugging the DP tunnel state"
+	depends on REF_TRACKER
+	depends on DRM_DISPLAY_DP_TUNNEL
+	depends on DEBUG_KERNEL
+	depends on EXPERT
+	help
+	  Enables debugging the DP tunnel manager's state, including the
+	  consistency of all managed tunnels' reference counting and the
+	  state of streams contained in tunnels.
+
+	  If in doubt, say "N".
+
 config DRM_DISPLAY_HDCP_HELPER
 	bool
 	depends on DRM_DISPLAY_HELPER
diff --git a/drivers/gpu/drm/display/Makefile b/drivers/gpu/drm/display/Makefile
index 17ac4a1006a80..7ca61333c6696 100644
--- a/drivers/gpu/drm/display/Makefile
+++ b/drivers/gpu/drm/display/Makefile
@@ -8,6 +8,8 @@ drm_display_helper-$(CONFIG_DRM_DISPLAY_DP_HELPER) += \
 	drm_dp_helper.o \
 	drm_dp_mst_topology.o \
 	drm_dsc_helper.o
+drm_display_helper-$(CONFIG_DRM_DISPLAY_DP_TUNNEL) += \
+	drm_dp_tunnel.o
 drm_display_helper-$(CONFIG_DRM_DISPLAY_HDCP_HELPER) += drm_hdcp_helper.o
 drm_display_helper-$(CONFIG_DRM_DISPLAY_HDMI_HELPER) += \
 	drm_hdmi_helper.o \
diff --git a/drivers/gpu/drm/display/drm_dp_tunnel.c b/drivers/gpu/drm/display/drm_dp_tunnel.c
new file mode 100644
index 0000000000000..c428e7e7d6b56
--- /dev/null
+++ b/drivers/gpu/drm/display/drm_dp_tunnel.c
@@ -0,0 +1,1949 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+#include <linux/ref_tracker.h>
+#include <linux/types.h>
+
+#include <drm/drm_atomic_state_helper.h>
+
+#include <drm/display/drm_dp.h>
+#include <drm/display/drm_dp_helper.h>
+#include <drm/display/drm_dp_tunnel.h>
+#include <drm/drm_atomic.h>
+#include <drm/drm_print.h>
+
+#define to_group(__private_obj) \
+	container_of(__private_obj, struct drm_dp_tunnel_group, base)
+
+#define to_group_state(__private_state) \
+	container_of(__private_state, struct drm_dp_tunnel_group_state, base)
+
+#define is_dp_tunnel_private_obj(__obj) \
+	((__obj)->funcs == &tunnel_group_funcs)
+
+#define for_each_new_group_in_state(__state, __new_group_state, __i) \
+	for ((__i) = 0; \
+	     (__i) < (__state)->num_private_objs; \
+	     (__i)++) \
+		for_each_if ((__state)->private_objs[__i].ptr && \
+			     is_dp_tunnel_private_obj((__state)->private_objs[__i].ptr) && \
+			     ((__new_group_state) = \
+				to_group_state((__state)->private_objs[__i].new_state), 1))
+
+#define for_each_old_group_in_state(__state, __old_group_state, __i) \
+	for ((__i) = 0; \
+	     (__i) < (__state)->num_private_objs; \
+	     (__i)++) \
+		for_each_if ((__state)->private_objs[__i].ptr && \
+			     is_dp_tunnel_private_obj((__state)->private_objs[__i].ptr) && \
+			     ((__old_group_state) = \
+				to_group_state((__state)->private_objs[__i].old_state), 1))
+
+#define for_each_tunnel_in_group(__group, __tunnel) \
+	list_for_each_entry(__tunnel, &(__group)->tunnels, node)
+
+#define for_each_tunnel_state(__group_state, __tunnel_state) \
+	list_for_each_entry(__tunnel_state, &(__group_state)->tunnel_states, node)
+
+#define for_each_tunnel_state_safe(__group_state, __tunnel_state, __tunnel_state_tmp) \
+	list_for_each_entry_safe(__tunnel_state, __tunnel_state_tmp, \
+				 &(__group_state)->tunnel_states, node)
+
+#define kbytes_to_mbits(__kbytes) \
+	DIV_ROUND_UP((__kbytes) * 8, 1000)
+
+#define DPTUN_BW_ARG(__bw) ((__bw) < 0 ? (__bw) : kbytes_to_mbits(__bw))
+
+#define __tun_prn(__tunnel, __level, __type, __fmt, ...) \
+	drm_##__level##__type((__tunnel)->group->mgr->dev, \
+			      "[DPTUN %s][%s] " __fmt, \
+			      drm_dp_tunnel_name(__tunnel), \
+			      (__tunnel)->aux->name, ## \
+			      __VA_ARGS__)
+
+#define tun_dbg(__tunnel, __fmt, ...) \
+	__tun_prn(__tunnel, dbg, _kms, __fmt, ## __VA_ARGS__)
+
+#define tun_dbg_stat(__tunnel, __err, __fmt, ...) do { \
+	if (__err) \
+		__tun_prn(__tunnel, dbg, _kms, __fmt " (Failed, err: %pe)\n", \
+			  ## __VA_ARGS__, ERR_PTR(__err)); \
+	else \
+		__tun_prn(__tunnel, dbg, _kms, __fmt " (Ok)\n", \
+			  ## __VA_ARGS__); \
+} while (0)
+
+#define tun_dbg_atomic(__tunnel, __fmt, ...) \
+	__tun_prn(__tunnel, dbg, _atomic, __fmt, ## __VA_ARGS__)
+
+#define tun_grp_dbg(__group, __fmt, ...)
\ + drm_dbg_kms((__group)->mgr->dev, \ + "[DPTUN %s] " __fmt, \ + drm_dp_tunnel_group_name(__group), ## \ + __VA_ARGS__) + +#define DP_TUNNELING_BASE DP_TUNNELING_OUI + +#define __DPTUN_REG_RANGE(start, size) \ + GENMASK_ULL(start + size - 1, start) + +#define DPTUN_REG_RANGE(addr, size) \ + __DPTUN_REG_RANGE((addr) - DP_TUNNELING_BASE, size) + +#define DPTUN_REG(addr) DPTUN_REG_RANGE(addr, 1) + +#define DPTUN_INFO_REG_MASK ( \ + DPTUN_REG_RANGE(DP_TUNNELING_OUI, DP_TUNNELING_OUI_BYTES) | \ + DPTUN_REG_RANGE(DP_TUNNELING_DEV_ID, DP_TUNNELING_DEV_ID_BYTES) | \ + DPTUN_REG(DP_TUNNELING_HW_REV) | \ + DPTUN_REG(DP_TUNNELING_SW_REV_MAJOR) | \ + DPTUN_REG(DP_TUNNELING_SW_REV_MINOR) | \ + DPTUN_REG(DP_TUNNELING_CAPABILITIES) | \ + DPTUN_REG(DP_IN_ADAPTER_INFO) | \ + DPTUN_REG(DP_USB4_DRIVER_ID) | \ + DPTUN_REG(DP_USB4_DRIVER_BW_CAPABILITY) | \ + DPTUN_REG(DP_IN_ADAPTER_TUNNEL_INFORMATION) | \ + DPTUN_REG(DP_BW_GRANULARITY) | \ + DPTUN_REG(DP_ESTIMATED_BW) | \ + DPTUN_REG(DP_ALLOCATED_BW) | \ + DPTUN_REG(DP_TUNNELING_MAX_LINK_RATE) | \ + DPTUN_REG(DP_TUNNELING_MAX_LANE_COUNT) | \ + DPTUN_REG(DP_DPTX_BW_ALLOCATION_MODE_CONTROL)) + +static const DECLARE_BITMAP(dptun_info_regs, 64) = { + DPTUN_INFO_REG_MASK & -1UL, +#if BITS_PER_LONG == 32 + DPTUN_INFO_REG_MASK >> 32, +#endif +}; + +struct drm_dp_tunnel_regs { + u8 buf[HWEIGHT64(DPTUN_INFO_REG_MASK)]; +}; + +struct drm_dp_tunnel_group; + +struct drm_dp_tunnel { + struct drm_dp_tunnel_group *group; + + struct list_head node; + + struct kref kref; + struct ref_tracker *tracker; + struct drm_dp_aux *aux; + char name[8]; + + int bw_granularity; + int estimated_bw; + int allocated_bw; + + int max_dprx_rate; + u8 max_dprx_lane_count; + + u8 adapter_id; + + bool bw_alloc_supported:1; + bool bw_alloc_enabled:1; + bool has_io_error:1; + bool destroyed:1; +}; + +struct drm_dp_tunnel_group_state; + +struct drm_dp_tunnel_state { + struct drm_dp_tunnel_group_state *group_state; + + struct drm_dp_tunnel_ref tunnel_ref; + + struct list_head node; + + u32 stream_mask; + int *stream_bw; +}; + +struct drm_dp_tunnel_group_state { + struct drm_private_state base; + + struct list_head tunnel_states; +}; + +struct drm_dp_tunnel_group { + struct drm_private_obj base; + struct drm_dp_tunnel_mgr *mgr; + + struct list_head tunnels; + + /* available BW including the allocated_bw of all tunnels in the group */ + int available_bw; + + u8 drv_group_id; + char name[8]; + + bool active:1; +}; + +struct drm_dp_tunnel_mgr { + struct drm_device *dev; + + int group_count; + struct drm_dp_tunnel_group *groups; + wait_queue_head_t bw_req_queue; + +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE + struct ref_tracker_dir ref_tracker; +#endif +}; + +/* + * The following helpers provide a way to read out the tunneling DPCD + * registers with a minimal amount of AUX transfers (1 transfer per contiguous + * range, as permitted by the 16 byte per transfer AUX limit), not accessing + * other registers to avoid any read side-effects. 
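+ *
+ * For example, with two contiguous register ranges marked in
+ * dptun_info_regs, read_tunnel_regs() issues exactly two
+ * drm_dp_dpcd_read() calls, one covering each range, instead of one
+ * read per register.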
+ */ +static int next_reg_area(int *offset) +{ + *offset = find_next_bit(dptun_info_regs, 64, *offset); + + return find_next_zero_bit(dptun_info_regs, 64, *offset + 1) - *offset; +} + +#define tunnel_reg_ptr(__regs, __address) ({ \ + WARN_ON(!test_bit((__address) - DP_TUNNELING_BASE, dptun_info_regs)); \ + &(__regs)->buf[bitmap_weight(dptun_info_regs, (__address) - DP_TUNNELING_BASE)]; \ +}) + +static int read_tunnel_regs(struct drm_dp_aux *aux, struct drm_dp_tunnel_regs *regs) +{ + int offset = 0; + int len; + + while ((len = next_reg_area(&offset))) { + int address = DP_TUNNELING_BASE + offset; + + if (drm_dp_dpcd_read(aux, address, tunnel_reg_ptr(regs, address), len) < 0) + return -EIO; + + offset += len; + } + + return 0; +} + +static u8 tunnel_reg(const struct drm_dp_tunnel_regs *regs, int address) +{ + return *tunnel_reg_ptr(regs, address); +} + +static u8 tunnel_reg_drv_group_id(const struct drm_dp_tunnel_regs *regs) +{ + u8 drv_id = tunnel_reg(regs, DP_USB4_DRIVER_ID) & DP_USB4_DRIVER_ID_MASK; + u8 group_id = tunnel_reg(regs, DP_IN_ADAPTER_TUNNEL_INFORMATION) & DP_GROUP_ID_MASK; + + if (!group_id) + return 0; + + return (drv_id << DP_GROUP_ID_BITS) | group_id; +} + +/* Return granularity in kB/s units */ +static int tunnel_reg_bw_granularity(const struct drm_dp_tunnel_regs *regs) +{ + int gr = tunnel_reg(regs, DP_BW_GRANULARITY) & DP_BW_GRANULARITY_MASK; + + if (gr > 2) + return -1; + + return (250000 << gr) / 8; +} + +static int tunnel_reg_max_dprx_rate(const struct drm_dp_tunnel_regs *regs) +{ + u8 bw_code = tunnel_reg(regs, DP_TUNNELING_MAX_LINK_RATE); + + return drm_dp_bw_code_to_link_rate(bw_code); +} + +static int tunnel_reg_max_dprx_lane_count(const struct drm_dp_tunnel_regs *regs) +{ + return tunnel_reg(regs, DP_TUNNELING_MAX_LANE_COUNT) & + DP_TUNNELING_MAX_LANE_COUNT_MASK; +} + +static bool tunnel_reg_bw_alloc_supported(const struct drm_dp_tunnel_regs *regs) +{ + u8 cap_mask = DP_TUNNELING_SUPPORT | DP_IN_BW_ALLOCATION_MODE_SUPPORT; + + if ((tunnel_reg(regs, DP_TUNNELING_CAPABILITIES) & cap_mask) != cap_mask) + return false; + + return tunnel_reg(regs, DP_USB4_DRIVER_BW_CAPABILITY) & + DP_USB4_DRIVER_BW_ALLOCATION_MODE_SUPPORT; +} + +static bool tunnel_reg_bw_alloc_enabled(const struct drm_dp_tunnel_regs *regs) +{ + return tunnel_reg(regs, DP_DPTX_BW_ALLOCATION_MODE_CONTROL) & + DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE; +} + +static u8 tunnel_group_drv_id(u8 drv_group_id) +{ + return drv_group_id >> DP_GROUP_ID_BITS; +} + +static u8 tunnel_group_id(u8 drv_group_id) +{ + return drv_group_id & DP_GROUP_ID_MASK; +} + +const char *drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel) +{ + return tunnel->name; +} +EXPORT_SYMBOL(drm_dp_tunnel_name); + +static const char *drm_dp_tunnel_group_name(const struct drm_dp_tunnel_group *group) +{ + return group->name; +} + +static struct drm_dp_tunnel_group * +lookup_or_alloc_group(struct drm_dp_tunnel_mgr *mgr, u8 drv_group_id) +{ + struct drm_dp_tunnel_group *group = NULL; + int i; + + for (i = 0; i < mgr->group_count; i++) { + /* + * A tunnel group with 0 group ID shouldn't have more than one + * tunnels. 
+ */ + if (tunnel_group_id(drv_group_id) && + mgr->groups[i].drv_group_id == drv_group_id) + return &mgr->groups[i]; + + if (!group && !mgr->groups[i].active) + group = &mgr->groups[i]; + } + + if (!group) { + drm_dbg_kms(mgr->dev, + "DPTUN: Can't allocate more tunnel groups\n"); + return NULL; + } + + group->drv_group_id = drv_group_id; + group->active = true; + + /* + * The group name format here and elsewhere: Driver-ID:Group-ID:* + * (* standing for all DP-Adapters/tunnels in the group). + */ + snprintf(group->name, sizeof(group->name), "%d:%d:*", + tunnel_group_drv_id(drv_group_id) & ((1 << DP_GROUP_ID_BITS) - 1), + tunnel_group_id(drv_group_id) & ((1 << DP_USB4_DRIVER_ID_BITS) - 1)); + + return group; +} + +static void free_group(struct drm_dp_tunnel_group *group) +{ + struct drm_dp_tunnel_mgr *mgr = group->mgr; + + if (drm_WARN_ON(mgr->dev, !list_empty(&group->tunnels))) + return; + + group->drv_group_id = 0; + group->available_bw = -1; + group->active = false; +} + +static struct drm_dp_tunnel * +tunnel_get(struct drm_dp_tunnel *tunnel) +{ + kref_get(&tunnel->kref); + + return tunnel; +} + +static void free_tunnel(struct kref *kref) +{ + struct drm_dp_tunnel *tunnel = container_of(kref, typeof(*tunnel), kref); + struct drm_dp_tunnel_group *group = tunnel->group; + + list_del(&tunnel->node); + if (list_empty(&group->tunnels)) + free_group(group); + + kfree(tunnel); +} + +static void tunnel_put(struct drm_dp_tunnel *tunnel) +{ + kref_put(&tunnel->kref, free_tunnel); +} + +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE +static void track_tunnel_ref(struct drm_dp_tunnel *tunnel, + struct ref_tracker **tracker) +{ + ref_tracker_alloc(&tunnel->group->mgr->ref_tracker, + tracker, GFP_KERNEL); +} + +static void untrack_tunnel_ref(struct drm_dp_tunnel *tunnel, + struct ref_tracker **tracker) +{ + ref_tracker_free(&tunnel->group->mgr->ref_tracker, + tracker); +} +#else +static void track_tunnel_ref(struct drm_dp_tunnel *tunnel, + struct ref_tracker **tracker) +{ +} + +static void untrack_tunnel_ref(struct drm_dp_tunnel *tunnel, + struct ref_tracker **tracker) +{ +} +#endif + +/** + * drm_dp_tunnel_get - Get a reference for a DP tunnel + * @tunnel: Tunnel object + * @tracker: Debug tracker for the reference + * + * Get a reference for @tunnel, along with a debug tracker to help locating + * the source of a reference leak/double reference put etc. issue. + * + * The reference must be dropped after use calling drm_dp_tunnel_put() + * passing @tunnel and *@tracker returned from here. + * + * Returns @tunnel - as a convenience - along with *@tracker. + */ +struct drm_dp_tunnel * +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, + struct ref_tracker **tracker) +{ + track_tunnel_ref(tunnel, tracker); + + return tunnel_get(tunnel); +} +EXPORT_SYMBOL(drm_dp_tunnel_get); + +/** + * drm_dp_tunnel_put - Put a reference for a DP tunnel + * @tunnel - Tunnel object + * @tracker - Debug tracker for the reference + * + * Put a reference for @tunnel along with its debug *@tracker, which + * was obtained with drm_dp_tunnel_get(). 
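+ *
+ * A minimal usage sketch:
+ *
+ *	struct ref_tracker *tracker;
+ *
+ *	tunnel = drm_dp_tunnel_get(tunnel, &tracker);
+ *	...
+ *	drm_dp_tunnel_put(tunnel, &tracker);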
+ */ +void drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, + struct ref_tracker **tracker) +{ + untrack_tunnel_ref(tunnel, tracker); + + tunnel_put(tunnel); +} +EXPORT_SYMBOL(drm_dp_tunnel_put); + +static bool add_tunnel_to_group(struct drm_dp_tunnel_mgr *mgr, + u8 drv_group_id, + struct drm_dp_tunnel *tunnel) +{ + struct drm_dp_tunnel_group *group; + + group = lookup_or_alloc_group(mgr, drv_group_id); + if (!group) + return false; + + tunnel->group = group; + list_add(&tunnel->node, &group->tunnels); + + return true; +} + +static struct drm_dp_tunnel * +create_tunnel(struct drm_dp_tunnel_mgr *mgr, + struct drm_dp_aux *aux, + const struct drm_dp_tunnel_regs *regs) +{ + u8 drv_group_id = tunnel_reg_drv_group_id(regs); + struct drm_dp_tunnel *tunnel; + + tunnel = kzalloc(sizeof(*tunnel), GFP_KERNEL); + if (!tunnel) + return NULL; + + INIT_LIST_HEAD(&tunnel->node); + + kref_init(&tunnel->kref); + + tunnel->aux = aux; + + tunnel->adapter_id = tunnel_reg(regs, DP_IN_ADAPTER_INFO) & DP_IN_ADAPTER_NUMBER_MASK; + + snprintf(tunnel->name, sizeof(tunnel->name), "%d:%d:%d", + tunnel_group_drv_id(drv_group_id) & ((1 << DP_GROUP_ID_BITS) - 1), + tunnel_group_id(drv_group_id) & ((1 << DP_USB4_DRIVER_ID_BITS) - 1), + tunnel->adapter_id & ((1 << DP_IN_ADAPTER_NUMBER_BITS) - 1)); + + tunnel->bw_granularity = tunnel_reg_bw_granularity(regs); + tunnel->allocated_bw = tunnel_reg(regs, DP_ALLOCATED_BW) * + tunnel->bw_granularity; + /* + * An initial allocated BW of 0 indicates an undefined state: the + * actual allocation is determined by the TBT CM, usually following a + * legacy allocation policy (based on the max DPRX caps). From the + * driver's POV the state becomes defined only after the first + * allocation request. + */ + if (!tunnel->allocated_bw) + tunnel->allocated_bw = -1; + + tunnel->bw_alloc_supported = tunnel_reg_bw_alloc_supported(regs); + tunnel->bw_alloc_enabled = tunnel_reg_bw_alloc_enabled(regs); + + if (!add_tunnel_to_group(mgr, drv_group_id, tunnel)) { + kfree(tunnel); + + return NULL; + } + + track_tunnel_ref(tunnel, &tunnel->tracker); + + return tunnel; +} + +static void destroy_tunnel(struct drm_dp_tunnel *tunnel) +{ + untrack_tunnel_ref(tunnel, &tunnel->tracker); + tunnel_put(tunnel); +} + +/** + * drm_dp_tunnel_set_io_error - Set the IO error flag for a DP tunnel + * @tunnel: Tunnel object + * + * Set the IO error flag for @tunnel. Drivers can call this function upon + * detecting a failure that affects the tunnel functionality, for instance + * after a DP AUX transfer failure on the port @tunnel is connected to. + * + * This disables further management of @tunnel, including any related + * AUX accesses for tunneling DPCD registers, returning error to the + * initiators of these. The driver is supposed to drop this tunnel and - + * optionally - recreate it. 
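+ *
+ * A possible recovery sequence in a driver (sketch; the re-detection
+ * would typically happen on a later hotplug/connector detect):
+ *
+ *	drm_dp_tunnel_set_io_error(tunnel);
+ *	drm_dp_tunnel_destroy(tunnel);
+ *	tunnel = drm_dp_tunnel_detect(mgr, aux);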
+ */ +void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel) +{ + tunnel->has_io_error = true; +} +EXPORT_SYMBOL(drm_dp_tunnel_set_io_error); + +#define SKIP_DPRX_CAPS_CHECK BIT(0) +#define ALLOW_ALLOCATED_BW_CHANGE BIT(1) +static bool tunnel_regs_are_valid(struct drm_dp_tunnel_mgr *mgr, + const struct drm_dp_tunnel_regs *regs, + unsigned int flags) +{ + u8 drv_group_id = tunnel_reg_drv_group_id(regs); + bool check_dprx = !(flags & SKIP_DPRX_CAPS_CHECK); + bool ret = true; + + if (!tunnel_reg_bw_alloc_supported(regs)) { + if (tunnel_group_id(drv_group_id)) { + drm_dbg_kms(mgr->dev, + "DPTUN: A non-zero group ID is only allowed with BWA support\n"); + ret = false; + } + + if (tunnel_reg(regs, DP_ALLOCATED_BW)) { + drm_dbg_kms(mgr->dev, + "DPTUN: BW is allocated without BWA support\n"); + ret = false; + } + + return ret; + } + + if (!tunnel_group_id(drv_group_id)) { + drm_dbg_kms(mgr->dev, + "DPTUN: BWA support requires a non-zero group ID\n"); + ret = false; + } + + if (check_dprx && hweight8(tunnel_reg_max_dprx_lane_count(regs)) != 1) { + drm_dbg_kms(mgr->dev, + "DPTUN: Invalid DPRX lane count: %d\n", + tunnel_reg_max_dprx_lane_count(regs)); + + ret = false; + } + + if (check_dprx && !tunnel_reg_max_dprx_rate(regs)) { + drm_dbg_kms(mgr->dev, + "DPTUN: DPRX rate is 0\n"); + + ret = false; + } + + if (tunnel_reg_bw_granularity(regs) < 0) { + drm_dbg_kms(mgr->dev, + "DPTUN: Invalid BW granularity\n"); + + ret = false; + } + + if (tunnel_reg(regs, DP_ALLOCATED_BW) > tunnel_reg(regs, DP_ESTIMATED_BW)) { + drm_dbg_kms(mgr->dev, + "DPTUN: Allocated BW %d > estimated BW %d Mb/s\n", + DPTUN_BW_ARG(tunnel_reg(regs, DP_ALLOCATED_BW) * + tunnel_reg_bw_granularity(regs)), + DPTUN_BW_ARG(tunnel_reg(regs, DP_ESTIMATED_BW) * + tunnel_reg_bw_granularity(regs))); + + ret = false; + } + + return ret; +} + +static int tunnel_allocated_bw(const struct drm_dp_tunnel *tunnel) +{ + return max(tunnel->allocated_bw, 0); +} + +static bool tunnel_info_changes_are_valid(struct drm_dp_tunnel *tunnel, + const struct drm_dp_tunnel_regs *regs, + unsigned int flags) +{ + u8 new_drv_group_id = tunnel_reg_drv_group_id(regs); + bool ret = true; + + if (tunnel->bw_alloc_supported != tunnel_reg_bw_alloc_supported(regs)) { + tun_dbg(tunnel, + "BW alloc support has changed %s -> %s\n", + str_yes_no(tunnel->bw_alloc_supported), + str_yes_no(tunnel_reg_bw_alloc_supported(regs))); + + ret = false; + } + + if (tunnel->group->drv_group_id != new_drv_group_id) { + tun_dbg(tunnel, + "Driver/group ID has changed %d:%d:* -> %d:%d:*\n", + tunnel_group_drv_id(tunnel->group->drv_group_id), + tunnel_group_id(tunnel->group->drv_group_id), + tunnel_group_drv_id(new_drv_group_id), + tunnel_group_id(new_drv_group_id)); + + ret = false; + } + + if (!tunnel->bw_alloc_supported) + return ret; + + if (tunnel->bw_granularity != tunnel_reg_bw_granularity(regs)) { + tun_dbg(tunnel, + "BW granularity has changed: %d -> %d Mb/s\n", + DPTUN_BW_ARG(tunnel->bw_granularity), + DPTUN_BW_ARG(tunnel_reg_bw_granularity(regs))); + + ret = false; + } + + /* + * On some devices at least the BW alloc mode enabled status is always + * reported as 0, so skip checking that here. 
+ */ + + if (!(flags & ALLOW_ALLOCATED_BW_CHANGE) && + tunnel_allocated_bw(tunnel) != + tunnel_reg(regs, DP_ALLOCATED_BW) * tunnel->bw_granularity) { + tun_dbg(tunnel, + "Allocated BW has changed: %d -> %d Mb/s\n", + DPTUN_BW_ARG(tunnel->allocated_bw), + DPTUN_BW_ARG(tunnel_reg(regs, DP_ALLOCATED_BW) * tunnel->bw_granularity)); + + ret = false; + } + + return ret; +} + +static int +read_and_verify_tunnel_regs(struct drm_dp_tunnel *tunnel, + struct drm_dp_tunnel_regs *regs, + unsigned int flags) +{ + int err; + + err = read_tunnel_regs(tunnel->aux, regs); + if (err < 0) { + drm_dp_tunnel_set_io_error(tunnel); + + return err; + } + + if (!tunnel_regs_are_valid(tunnel->group->mgr, regs, flags)) + return -EINVAL; + + if (!tunnel_info_changes_are_valid(tunnel, regs, flags)) + return -EINVAL; + + return 0; +} + +static bool update_dprx_caps(struct drm_dp_tunnel *tunnel, const struct drm_dp_tunnel_regs *regs) +{ + bool changed = false; + + if (tunnel_reg_max_dprx_rate(regs) != tunnel->max_dprx_rate) { + tunnel->max_dprx_rate = tunnel_reg_max_dprx_rate(regs); + changed = true; + } + + if (tunnel_reg_max_dprx_lane_count(regs) != tunnel->max_dprx_lane_count) { + tunnel->max_dprx_lane_count = tunnel_reg_max_dprx_lane_count(regs); + changed = true; + } + + return changed; +} + +static int dev_id_len(const u8 *dev_id, int max_len) +{ + while (max_len && dev_id[max_len - 1] == '\0') + max_len--; + + return max_len; +} + +static int get_max_dprx_bw(const struct drm_dp_tunnel *tunnel) +{ + int max_dprx_bw = drm_dp_max_dprx_data_rate(tunnel->max_dprx_rate, + tunnel->max_dprx_lane_count); + + /* + * A BW request of roundup(max_dprx_bw, tunnel->bw_granularity) results in + * an allocation of max_dprx_bw. A BW request above this rounded-up + * value will fail. + */ + return min(roundup(max_dprx_bw, tunnel->bw_granularity), + MAX_DP_REQUEST_BW * tunnel->bw_granularity); +} + +static int get_max_tunnel_bw(const struct drm_dp_tunnel *tunnel) +{ + return min(get_max_dprx_bw(tunnel), tunnel->group->available_bw); +} + +/** + * drm_dp_tunnel_detect - Detect DP tunnel on the link + * @mgr: Tunnel manager + * @aux: DP AUX on which the tunnel will be detected + * + * Detect if there is any DP tunnel on the link and add it to the tunnel + * group's tunnel list. + * + * Returns a pointer to a tunnel on success, or an ERR_PTR() error on + * failure. + */ +struct drm_dp_tunnel * +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr, + struct drm_dp_aux *aux) +{ + struct drm_dp_tunnel_regs regs; + struct drm_dp_tunnel *tunnel; + int err; + + err = read_tunnel_regs(aux, ®s); + if (err) + return ERR_PTR(err); + + if (!(tunnel_reg(®s, DP_TUNNELING_CAPABILITIES) & + DP_TUNNELING_SUPPORT)) + return ERR_PTR(-ENODEV); + + /* The DPRX caps are valid only after enabling BW alloc mode. 
*/ + if (!tunnel_regs_are_valid(mgr, ®s, SKIP_DPRX_CAPS_CHECK)) + return ERR_PTR(-EINVAL); + + tunnel = create_tunnel(mgr, aux, ®s); + if (!tunnel) + return ERR_PTR(-ENOMEM); + + tun_dbg(tunnel, + "OUI:%*phD DevID:%*pE Rev-HW:%d.%d SW:%d.%d PR-Sup:%s BWA-Sup:%s BWA-En:%s\n", + DP_TUNNELING_OUI_BYTES, + tunnel_reg_ptr(®s, DP_TUNNELING_OUI), + dev_id_len(tunnel_reg_ptr(®s, DP_TUNNELING_DEV_ID), DP_TUNNELING_DEV_ID_BYTES), + tunnel_reg_ptr(®s, DP_TUNNELING_DEV_ID), + (tunnel_reg(®s, DP_TUNNELING_HW_REV) & DP_TUNNELING_HW_REV_MAJOR_MASK) >> + DP_TUNNELING_HW_REV_MAJOR_SHIFT, + (tunnel_reg(®s, DP_TUNNELING_HW_REV) & DP_TUNNELING_HW_REV_MINOR_MASK) >> + DP_TUNNELING_HW_REV_MINOR_SHIFT, + tunnel_reg(®s, DP_TUNNELING_SW_REV_MAJOR), + tunnel_reg(®s, DP_TUNNELING_SW_REV_MINOR), + str_yes_no(tunnel_reg(®s, DP_TUNNELING_CAPABILITIES) & + DP_PANEL_REPLAY_OPTIMIZATION_SUPPORT), + str_yes_no(tunnel->bw_alloc_supported), + str_yes_no(tunnel->bw_alloc_enabled)); + + return tunnel; +} +EXPORT_SYMBOL(drm_dp_tunnel_detect); + +/** + * drm_dp_tunnel_destroy - Destroy tunnel object + * @tunnel: Tunnel object + * + * Remove the tunnel from the tunnel topology and destroy it. + * + * Returns 0 on success, -ENODEV if the tunnel has been destroyed already. + */ +int drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel) +{ + if (!tunnel) + return 0; + + if (drm_WARN_ON(tunnel->group->mgr->dev, tunnel->destroyed)) + return -ENODEV; + + tun_dbg(tunnel, "destroying\n"); + + tunnel->destroyed = true; + destroy_tunnel(tunnel); + + return 0; +} +EXPORT_SYMBOL(drm_dp_tunnel_destroy); + +static int check_tunnel(const struct drm_dp_tunnel *tunnel) +{ + if (tunnel->destroyed) + return -ENODEV; + + if (tunnel->has_io_error) + return -EIO; + + return 0; +} + +static int group_allocated_bw(struct drm_dp_tunnel_group *group) +{ + struct drm_dp_tunnel *tunnel; + int group_allocated_bw = 0; + + for_each_tunnel_in_group(group, tunnel) { + if (check_tunnel(tunnel) == 0 && + tunnel->bw_alloc_enabled) + group_allocated_bw += tunnel_allocated_bw(tunnel); + } + + return group_allocated_bw; +} + +/* + * The estimated BW reported by the TBT Connection Manager for each tunnel in + * a group includes the BW already allocated for the given tunnel and the + * unallocated BW which is free to be used by any tunnel in the group. 
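+ *
+ * For instance, with tunnels A and B in a group, 2000 kB/s allocated
+ * to A, 1000 kB/s allocated to B and 1000 kB/s unallocated in the
+ * group, the reported estimated BW is 3000 kB/s for A and 2000 kB/s
+ * for B, while the group's total available BW is 4000 kB/s.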
+ */ +static int group_free_bw(const struct drm_dp_tunnel *tunnel) +{ + return tunnel->estimated_bw - tunnel_allocated_bw(tunnel); +} + +static int calc_group_available_bw(const struct drm_dp_tunnel *tunnel) +{ + return group_allocated_bw(tunnel->group) + + group_free_bw(tunnel); +} + +static int update_group_available_bw(struct drm_dp_tunnel *tunnel, + const struct drm_dp_tunnel_regs *regs) +{ + struct drm_dp_tunnel *tunnel_iter; + int group_available_bw; + bool changed; + + tunnel->estimated_bw = tunnel_reg(regs, DP_ESTIMATED_BW) * tunnel->bw_granularity; + + if (calc_group_available_bw(tunnel) == tunnel->group->available_bw) + return 0; + + for_each_tunnel_in_group(tunnel->group, tunnel_iter) { + int err; + + if (tunnel_iter == tunnel) + continue; + + if (check_tunnel(tunnel_iter) != 0 || + !tunnel_iter->bw_alloc_enabled) + continue; + + err = drm_dp_dpcd_probe(tunnel_iter->aux, DP_DPCD_REV); + if (err) { + tun_dbg(tunnel_iter, + "Probe failed, assume disconnected (err %pe)\n", + ERR_PTR(err)); + drm_dp_tunnel_set_io_error(tunnel_iter); + } + } + + group_available_bw = calc_group_available_bw(tunnel); + + tun_dbg(tunnel, "Updated group available BW: %d->%d\n", + DPTUN_BW_ARG(tunnel->group->available_bw), + DPTUN_BW_ARG(group_available_bw)); + + changed = tunnel->group->available_bw != group_available_bw; + + tunnel->group->available_bw = group_available_bw; + + return changed ? 1 : 0; +} + +static int set_bw_alloc_mode(struct drm_dp_tunnel *tunnel, bool enable) +{ + u8 mask = DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE | DP_UNMASK_BW_ALLOCATION_IRQ; + u8 val; + + if (drm_dp_dpcd_readb(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, &val) < 0) + goto out_err; + + if (enable) + val |= mask; + else + val &= ~mask; + + if (drm_dp_dpcd_writeb(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, val) < 0) + goto out_err; + + tunnel->bw_alloc_enabled = enable; + + return 0; + +out_err: + drm_dp_tunnel_set_io_error(tunnel); + + return -EIO; +} + +/** + * drm_dp_tunnel_enable_bw_alloc - Enable DP tunnel BW allocation mode + * @tunnel: Tunnel object + * + * Enable the DP tunnel BW allocation mode on @tunnel if it supports it. + * + * Returns 0 in case of success, negative error code otherwise. + */ +int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel) +{ + struct drm_dp_tunnel_regs regs; + int err; + + err = check_tunnel(tunnel); + if (err) + return err; + + if (!tunnel->bw_alloc_supported) + return -EOPNOTSUPP; + + if (!tunnel_group_id(tunnel->group->drv_group_id)) + return -EINVAL; + + err = set_bw_alloc_mode(tunnel, true); + if (err) + goto out; + + /* + * After a BWA disable/re-enable sequence the allocated BW can either + * stay at its last requested value or, for instance after system + * suspend/resume, TBT CM can reset back the allocation to the amount + * allocated in the legacy/non-BWA mode. Accordingly allow for the + * allocation to change wrt. the last SW state. 
+ */ + err = read_and_verify_tunnel_regs(tunnel, ®s, + ALLOW_ALLOCATED_BW_CHANGE); + if (err) { + set_bw_alloc_mode(tunnel, false); + + goto out; + } + + if (!tunnel->max_dprx_rate) + update_dprx_caps(tunnel, ®s); + + if (tunnel->group->available_bw == -1) { + err = update_group_available_bw(tunnel, ®s); + if (err > 0) + err = 0; + } +out: + tun_dbg_stat(tunnel, err, + "Enabling BW alloc mode: DPRX:%dx%d Group alloc:%d/%d Mb/s", + tunnel->max_dprx_rate / 100, tunnel->max_dprx_lane_count, + DPTUN_BW_ARG(group_allocated_bw(tunnel->group)), + DPTUN_BW_ARG(tunnel->group->available_bw)); + + return err; +} +EXPORT_SYMBOL(drm_dp_tunnel_enable_bw_alloc); + +/** + * drm_dp_tunnel_disable_bw_alloc - Disable DP tunnel BW allocation mode + * @tunnel: Tunnel object + * + * Disable the DP tunnel BW allocation mode on @tunnel. + * + * Returns 0 in case of success, negative error code otherwise. + */ +int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel) +{ + int err; + + err = check_tunnel(tunnel); + if (err) + return err; + + tunnel->allocated_bw = -1; + + err = set_bw_alloc_mode(tunnel, false); + + tun_dbg_stat(tunnel, err, "Disabling BW alloc mode"); + + return err; +} +EXPORT_SYMBOL(drm_dp_tunnel_disable_bw_alloc); + +/** + * drm_dp_tunnel_bw_alloc_is_enabled - Query the BW allocation mode enabled state + * @tunnel: Tunnel object + * + * Query if the BW allocation mode is enabled for @tunnel. + * + * Returns %true if the BW allocation mode is enabled for @tunnel. + */ +bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel) +{ + return tunnel && tunnel->bw_alloc_enabled; +} +EXPORT_SYMBOL(drm_dp_tunnel_bw_alloc_is_enabled); + +static int clear_bw_req_state(struct drm_dp_aux *aux) +{ + u8 bw_req_mask = DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED; + + if (drm_dp_dpcd_writeb(aux, DP_TUNNELING_STATUS, bw_req_mask) < 0) + return -EIO; + + return 0; +} + +static int bw_req_complete(struct drm_dp_aux *aux, bool *status_changed) +{ + u8 bw_req_mask = DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED; + u8 status_change_mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED; + u8 val; + int err; + + if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0) + return -EIO; + + *status_changed = val & status_change_mask; + + val &= bw_req_mask; + + if (!val) + return -EAGAIN; + + err = clear_bw_req_state(aux); + if (err < 0) + return err; + + return val == DP_BW_REQUEST_SUCCEEDED ? 0 : -ENOSPC; +} + +static int allocate_tunnel_bw(struct drm_dp_tunnel *tunnel, int bw) +{ + struct drm_dp_tunnel_mgr *mgr = tunnel->group->mgr; + int request_bw = DIV_ROUND_UP(bw, tunnel->bw_granularity); + DEFINE_WAIT_FUNC(wait, woken_wake_function); + long timeout; + int err; + + if (bw < 0) { + err = -EINVAL; + goto out; + } + + if (request_bw * tunnel->bw_granularity == tunnel->allocated_bw) + return 0; + + /* Atomic check should prevent the following. 
*/ + if (drm_WARN_ON(mgr->dev, request_bw > MAX_DP_REQUEST_BW)) { + err = -EINVAL; + goto out; + } + + err = clear_bw_req_state(tunnel->aux); + if (err) + goto out; + + if (drm_dp_dpcd_writeb(tunnel->aux, DP_REQUEST_BW, request_bw) < 0) { + err = -EIO; + goto out; + } + + timeout = msecs_to_jiffies(3000); + add_wait_queue(&mgr->bw_req_queue, &wait); + + for (;;) { + bool status_changed; + + err = bw_req_complete(tunnel->aux, &status_changed); + if (err != -EAGAIN) + break; + + if (status_changed) { + struct drm_dp_tunnel_regs regs; + + err = read_and_verify_tunnel_regs(tunnel, ®s, + ALLOW_ALLOCATED_BW_CHANGE); + if (err) + break; + } + + if (!timeout) { + err = -ETIMEDOUT; + break; + } + + timeout = wait_woken(&wait, TASK_UNINTERRUPTIBLE, timeout); + }; + + remove_wait_queue(&mgr->bw_req_queue, &wait); + + if (err) + goto out; + + tunnel->allocated_bw = request_bw * tunnel->bw_granularity; + +out: + tun_dbg_stat(tunnel, err, "Allocating %d/%d Mb/s for tunnel: Group alloc:%d/%d Mb/s", + DPTUN_BW_ARG(request_bw * tunnel->bw_granularity), + DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)), + DPTUN_BW_ARG(group_allocated_bw(tunnel->group)), + DPTUN_BW_ARG(tunnel->group->available_bw)); + + if (err == -EIO) + drm_dp_tunnel_set_io_error(tunnel); + + return err; +} + +/** + * drm_dp_tunnel_alloc_bw - Allocate BW for a DP tunnel + * @tunnel: Tunnel object + * @bw: BW in kB/s units + * + * Allocate @bw kB/s for @tunnel. The allocated BW must be freed after use by + * calling this function for the same tunnel setting @bw to 0. + * + * Returns 0 in case of success, a negative error code otherwise. + */ +int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw) +{ + int err; + + err = check_tunnel(tunnel); + if (err) + return err; + + return allocate_tunnel_bw(tunnel, bw); +} +EXPORT_SYMBOL(drm_dp_tunnel_alloc_bw); + +/** + * drm_dp_tunnel_atomic_get_allocated_bw - Get the BW allocated for a DP tunnel + * @tunnel: Tunnel object + * + * Get the current BW allocated for @tunnel. After the tunnel is created / + * resumed and the BW allocation mode is enabled for it, the allocation + * becomes determined only after the first allocation request by the driver + * calling drm_dp_tunnel_alloc_bw(). + * + * Return the BW allocated for the tunnel, or -1 if the allocation is + * undetermined. + */ +int drm_dp_tunnel_get_allocated_bw(struct drm_dp_tunnel *tunnel) +{ + return tunnel->allocated_bw; +} +EXPORT_SYMBOL(drm_dp_tunnel_get_allocated_bw); + +/* + * Return 0 if the status hasn't changed, 1 if the status has changed, a + * negative error code in case of an I/O failure. + */ +static int check_and_clear_status_change(struct drm_dp_tunnel *tunnel) +{ + u8 mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED; + u8 val; + + if (drm_dp_dpcd_readb(tunnel->aux, DP_TUNNELING_STATUS, &val) < 0) + goto out_err; + + val &= mask; + + if (val) { + if (drm_dp_dpcd_writeb(tunnel->aux, DP_TUNNELING_STATUS, val) < 0) + goto out_err; + + return 1; + } + + if (!drm_dp_tunnel_bw_alloc_is_enabled(tunnel)) + return 0; + + /* + * Check for estimated BW changes explicitly to account for lost + * BW change notifications. 
+ */ + if (drm_dp_dpcd_readb(tunnel->aux, DP_ESTIMATED_BW, &val) < 0) + goto out_err; + + if (val * tunnel->bw_granularity != tunnel->estimated_bw) + return 1; + + return 0; + +out_err: + drm_dp_tunnel_set_io_error(tunnel); + + return -EIO; +} + +/** + * drm_dp_tunnel_update_state - Update DP tunnel SW state with the HW state + * @tunnel: Tunnel object + * + * Update the SW state of @tunnel with the HW state. + * + * Returns 0 if the state has not changed, 1 if it has changed and got updated + * successfully and a negative error code otherwise. + */ +int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel) +{ + struct drm_dp_tunnel_regs regs; + bool changed = false; + int ret; + + ret = check_tunnel(tunnel); + if (ret < 0) + return ret; + + ret = check_and_clear_status_change(tunnel); + if (ret < 0) + goto out; + + if (!ret) + return 0; + + ret = read_and_verify_tunnel_regs(tunnel, ®s, 0); + if (ret) + goto out; + + if (update_dprx_caps(tunnel, ®s)) + changed = true; + + ret = update_group_available_bw(tunnel, ®s); + if (ret == 1) + changed = true; + +out: + tun_dbg_stat(tunnel, ret < 0 ? ret : 0, + "State update: Changed:%s DPRX:%dx%d Tunnel alloc:%d/%d Group alloc:%d/%d Mb/s", + str_yes_no(changed), + tunnel->max_dprx_rate / 100, tunnel->max_dprx_lane_count, + DPTUN_BW_ARG(tunnel->allocated_bw), + DPTUN_BW_ARG(get_max_tunnel_bw(tunnel)), + DPTUN_BW_ARG(group_allocated_bw(tunnel->group)), + DPTUN_BW_ARG(tunnel->group->available_bw)); + + if (ret < 0) + return ret; + + if (changed) + return 1; + + return 0; +} +EXPORT_SYMBOL(drm_dp_tunnel_update_state); + +/* + * drm_dp_tunnel_handle_irq - Handle DP tunnel IRQs + * + * Handle any pending DP tunnel IRQs, waking up waiters for a completion + * event. + * + * Returns 1 if the state of the tunnel has changed which requires calling + * drm_dp_tunnel_update_state(), a negative error code in case of a failure, + * 0 otherwise. + */ +int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr, struct drm_dp_aux *aux) +{ + u8 val; + + if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0) + return -EIO; + + if (val & (DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED)) + wake_up_all(&mgr->bw_req_queue); + + if (val & (DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED)) + return 1; + + return 0; +} +EXPORT_SYMBOL(drm_dp_tunnel_handle_irq); + +/** + * drm_dp_tunnel_max_dprx_rate - Query the maximum rate of the tunnel's DPRX + * @tunnel: Tunnel object + * + * The function is used to query the maximum link rate of the DPRX connected + * to @tunnel. Note that this rate will not be limited by the BW limit of the + * tunnel, as opposed to the standard and extended DP_MAX_LINK_RATE DPCD + * registers. + * + * Returns the maximum link rate in 10 kbit/s units. + */ +int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel) +{ + return tunnel->max_dprx_rate; +} +EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_rate); + +/** + * drm_dp_tunnel_max_dprx_lane_count - Query the maximum lane count of the tunnel's DPRX + * @tunnel: Tunnel object + * + * The function is used to query the maximum lane count of the DPRX connected + * to @tunnel. Note that this lane count will not be limited by the BW limit of + * the tunnel, as opposed to the standard and extended DP_MAX_LANE_COUNT DPCD + * registers. + * + * Returns the maximum lane count. 
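+ *
+ * A sketch of deriving the maximum usable tunnel BW from these values
+ * (ignoring the rounding to the tunnel's BW granularity):
+ *
+ *	max_bw = min(drm_dp_max_dprx_data_rate(
+ *			drm_dp_tunnel_max_dprx_rate(tunnel),
+ *			drm_dp_tunnel_max_dprx_lane_count(tunnel)),
+ *		     drm_dp_tunnel_available_bw(tunnel));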
+ */ +int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel) +{ + return tunnel->max_dprx_lane_count; +} +EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_lane_count); + +/** + * drm_dp_tunnel_available_bw - Query the estimated total available BW of the tunnel + * @tunnel: Tunnel object + * + * This function is used to query the estimated total available BW of the + * tunnel. This includes the currently allocated and free BW for all the + * tunnels in @tunnel's group. The available BW is valid only after the BW + * allocation mode has been enabled for the tunnel and its state got updated + * calling drm_dp_tunnel_update_state(). + * + * Returns the @tunnel group's estimated total available bandwidth in kB/s + * units, or -1 if the available BW isn't valid (the BW allocation mode is + * not enabled or the tunnel's state hasn't been updated). + */ +int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel) +{ + return tunnel->group->available_bw; +} +EXPORT_SYMBOL(drm_dp_tunnel_available_bw); + +static struct drm_dp_tunnel_group_state * +drm_dp_tunnel_atomic_get_group_state(struct drm_atomic_state *state, + const struct drm_dp_tunnel *tunnel) +{ + return (struct drm_dp_tunnel_group_state *) + drm_atomic_get_private_obj_state(state, + &tunnel->group->base); +} + +static struct drm_dp_tunnel_state * +add_tunnel_state(struct drm_dp_tunnel_group_state *group_state, + struct drm_dp_tunnel *tunnel) +{ + struct drm_dp_tunnel_state *tunnel_state; + + tun_dbg_atomic(tunnel, + "Adding state for tunnel %p to group state %p\n", + tunnel, group_state); + + tunnel_state = kzalloc(sizeof(*tunnel_state), GFP_KERNEL); + if (!tunnel_state) + return NULL; + + tunnel_state->group_state = group_state; + + drm_dp_tunnel_ref_get(tunnel, &tunnel_state->tunnel_ref); + + INIT_LIST_HEAD(&tunnel_state->node); + list_add(&tunnel_state->node, &group_state->tunnel_states); + + return tunnel_state; +} + +static void free_tunnel_state(struct drm_dp_tunnel_state *tunnel_state) +{ + tun_dbg_atomic(tunnel_state->tunnel_ref.tunnel, + "Freeing state for tunnel %p\n", + tunnel_state->tunnel_ref.tunnel); + + list_del(&tunnel_state->node); + + kfree(tunnel_state->stream_bw); + drm_dp_tunnel_ref_put(&tunnel_state->tunnel_ref); + + kfree(tunnel_state); +} + +static void free_group_state(struct drm_dp_tunnel_group_state *group_state) +{ + struct drm_dp_tunnel_state *tunnel_state; + struct drm_dp_tunnel_state *tunnel_state_tmp; + + for_each_tunnel_state_safe(group_state, tunnel_state, tunnel_state_tmp) + free_tunnel_state(tunnel_state); + + kfree(group_state); +} + +static struct drm_dp_tunnel_state * +get_tunnel_state(struct drm_dp_tunnel_group_state *group_state, + const struct drm_dp_tunnel *tunnel) +{ + struct drm_dp_tunnel_state *tunnel_state; + + for_each_tunnel_state(group_state, tunnel_state) + if (tunnel_state->tunnel_ref.tunnel == tunnel) + return tunnel_state; + + return NULL; +} + +static struct drm_dp_tunnel_state * +get_or_add_tunnel_state(struct drm_dp_tunnel_group_state *group_state, + struct drm_dp_tunnel *tunnel) +{ + struct drm_dp_tunnel_state *tunnel_state; + + tunnel_state = get_tunnel_state(group_state, tunnel); + if (tunnel_state) + return tunnel_state; + + return add_tunnel_state(group_state, tunnel); +} + +static struct drm_private_state * +tunnel_group_duplicate_state(struct drm_private_obj *obj) +{ + struct drm_dp_tunnel_group_state *group_state; + struct drm_dp_tunnel_state *tunnel_state; + + group_state = kzalloc(sizeof(*group_state), GFP_KERNEL); + if (!group_state) + return NULL; + + 
INIT_LIST_HEAD(&group_state->tunnel_states); + + __drm_atomic_helper_private_obj_duplicate_state(obj, &group_state->base); + + for_each_tunnel_state(to_group_state(obj->state), tunnel_state) { + struct drm_dp_tunnel_state *new_tunnel_state; + + new_tunnel_state = get_or_add_tunnel_state(group_state, + tunnel_state->tunnel_ref.tunnel); + if (!new_tunnel_state) + goto out_free_state; + + new_tunnel_state->stream_mask = tunnel_state->stream_mask; + new_tunnel_state->stream_bw = kmemdup(tunnel_state->stream_bw, + sizeof(*tunnel_state->stream_bw) * + hweight32(tunnel_state->stream_mask), + GFP_KERNEL); + + if (!new_tunnel_state->stream_bw) + goto out_free_state; + } + + return &group_state->base; + +out_free_state: + free_group_state(group_state); + + return NULL; +} + +static void tunnel_group_destroy_state(struct drm_private_obj *obj, struct drm_private_state *state) +{ + free_group_state(to_group_state(state)); +} + +static const struct drm_private_state_funcs tunnel_group_funcs = { + .atomic_duplicate_state = tunnel_group_duplicate_state, + .atomic_destroy_state = tunnel_group_destroy_state, +}; + +/** + * drm_dp_tunnel_atomic_get_state - get/allocate the new atomic state for a tunnel + * @state: Atomic state + * @tunnel: Tunnel to get the state for + * + * Get the new atomic state for @tunnel, duplicating it from the old tunnel + * state if not yet allocated. + * + * Return the state or an ERR_PTR() error on failure. + */ +struct drm_dp_tunnel_state * +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state, + struct drm_dp_tunnel *tunnel) +{ + struct drm_dp_tunnel_group_state *group_state; + struct drm_dp_tunnel_state *tunnel_state; + + group_state = drm_dp_tunnel_atomic_get_group_state(state, tunnel); + if (IS_ERR(group_state)) + return ERR_CAST(group_state); + + tunnel_state = get_or_add_tunnel_state(group_state, tunnel); + if (!tunnel_state) + return ERR_PTR(-ENOMEM); + + return tunnel_state; +} +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_state); + +/** + * drm_dp_tunnel_atomic_get_old_state - get the old atomic state for a tunnel + * @state: Atomic state + * @tunnel: Tunnel to get the state for + * + * Get the old atomic state for @tunnel. + * + * Return the old state or NULL if the tunnel's atomic state is not in @state. + */ +struct drm_dp_tunnel_state * +drm_dp_tunnel_atomic_get_old_state(struct drm_atomic_state *state, + const struct drm_dp_tunnel *tunnel) +{ + struct drm_dp_tunnel_group_state *old_group_state; + int i; + + for_each_old_group_in_state(state, old_group_state, i) + if (to_group(old_group_state->base.obj) == tunnel->group) + return get_tunnel_state(old_group_state, tunnel); + + return NULL; +} +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_old_state); + +/** + * drm_dp_tunnel_atomic_get_new_state - get the new atomic state for a tunnel + * @state: Atomic state + * @tunnel: Tunnel to get the state for + * + * Get the new atomic state for @tunnel. + * + * Return the new state or NULL if the tunnel's atomic state is not in @state. 
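+ *
+ * An illustrative use from a driver's atomic commit phase:
+ *
+ *	tunnel_state = drm_dp_tunnel_atomic_get_new_state(state, tunnel);
+ *	if (tunnel_state)
+ *		required_bw = drm_dp_tunnel_atomic_get_required_bw(tunnel_state);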
+ */ +struct drm_dp_tunnel_state * +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state, + const struct drm_dp_tunnel *tunnel) +{ + struct drm_dp_tunnel_group_state *new_group_state; + int i; + + for_each_new_group_in_state(state, new_group_state, i) + if (to_group(new_group_state->base.obj) == tunnel->group) + return get_tunnel_state(new_group_state, tunnel); + + return NULL; +} +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_new_state); + +static bool init_group(struct drm_dp_tunnel_mgr *mgr, struct drm_dp_tunnel_group *group) +{ + struct drm_dp_tunnel_group_state *group_state; + + group_state = kzalloc(sizeof(*group_state), GFP_KERNEL); + if (!group_state) + return false; + + INIT_LIST_HEAD(&group_state->tunnel_states); + + group->mgr = mgr; + group->available_bw = -1; + INIT_LIST_HEAD(&group->tunnels); + + drm_atomic_private_obj_init(mgr->dev, &group->base, &group_state->base, + &tunnel_group_funcs); + + return true; +} + +static void cleanup_group(struct drm_dp_tunnel_group *group) +{ + drm_atomic_private_obj_fini(&group->base); +} + +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE +static void check_unique_stream_ids(const struct drm_dp_tunnel_group_state *group_state) +{ + const struct drm_dp_tunnel_state *tunnel_state; + u32 stream_mask = 0; + + for_each_tunnel_state(group_state, tunnel_state) { + drm_WARN(to_group(group_state->base.obj)->mgr->dev, + tunnel_state->stream_mask & stream_mask, + "[DPTUN %s]: conflicting stream IDs %x (IDs in other tunnels %x)\n", + tunnel_state->tunnel_ref.tunnel->name, + tunnel_state->stream_mask, + stream_mask); + + stream_mask |= tunnel_state->stream_mask; + } +} +#else +static void check_unique_stream_ids(const struct drm_dp_tunnel_group_state *group_state) +{ +} +#endif + +static int stream_id_to_idx(u32 stream_mask, u8 stream_id) +{ + return hweight32(stream_mask & (BIT(stream_id) - 1)); +} + +static int resize_bw_array(struct drm_dp_tunnel_state *tunnel_state, + unsigned long old_mask, unsigned long new_mask) +{ + unsigned long move_mask = old_mask & new_mask; + int *new_bws = NULL; + int id; + + WARN_ON(!new_mask); + + if (old_mask == new_mask) + return 0; + + new_bws = kcalloc(hweight32(new_mask), sizeof(*new_bws), GFP_KERNEL); + if (!new_bws) + return -ENOMEM; + + for_each_set_bit(id, &move_mask, BITS_PER_TYPE(move_mask)) + new_bws[stream_id_to_idx(new_mask, id)] = + tunnel_state->stream_bw[stream_id_to_idx(old_mask, id)]; + + kfree(tunnel_state->stream_bw); + tunnel_state->stream_bw = new_bws; + tunnel_state->stream_mask = new_mask; + + return 0; +} + +static int set_stream_bw(struct drm_dp_tunnel_state *tunnel_state, + u8 stream_id, int bw) +{ + int err; + + err = resize_bw_array(tunnel_state, + tunnel_state->stream_mask, + tunnel_state->stream_mask | BIT(stream_id)); + if (err) + return err; + + tunnel_state->stream_bw[stream_id_to_idx(tunnel_state->stream_mask, stream_id)] = bw; + + return 0; +} + +static int clear_stream_bw(struct drm_dp_tunnel_state *tunnel_state, + u8 stream_id) +{ + if (!(tunnel_state->stream_mask & ~BIT(stream_id))) { + free_tunnel_state(tunnel_state); + return 0; + } + + return resize_bw_array(tunnel_state, + tunnel_state->stream_mask, + tunnel_state->stream_mask & ~BIT(stream_id)); +} + +/** + * drm_dp_tunnel_atomic_set_stream_bw - Set the BW for a DP tunnel stream + * @state: Atomic state + * @tunnel: DP tunnel containing the stream + * @stream_id: Stream ID + * @bw: BW of the stream + * + * Set a DP tunnel stream's required BW in the atomic state. 
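+ *
+ * An illustrative call site, e.g. from a driver's atomic check phase,
+ * using the stream's CRTC index as the stream ID (one possible driver
+ * convention):
+ *
+ *	err = drm_dp_tunnel_atomic_set_stream_bw(state, tunnel,
+ *						 crtc_idx, stream_bw);
+ *	if (err)
+ *		return err;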
+ * + * Returns 0 in case of success, a negative error code otherwise. + */ +int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state, + struct drm_dp_tunnel *tunnel, + u8 stream_id, int bw) +{ + struct drm_dp_tunnel_group_state *new_group_state; + struct drm_dp_tunnel_state *tunnel_state; + int err; + + if (drm_WARN_ON(tunnel->group->mgr->dev, + stream_id > BITS_PER_TYPE(tunnel_state->stream_mask))) + return -EINVAL; + + tun_dbg(tunnel, + "Setting %d Mb/s for stream %d\n", + DPTUN_BW_ARG(bw), stream_id); + + new_group_state = drm_dp_tunnel_atomic_get_group_state(state, tunnel); + if (IS_ERR(new_group_state)) + return PTR_ERR(new_group_state); + + if (bw == 0) { + tunnel_state = get_tunnel_state(new_group_state, tunnel); + if (!tunnel_state) + return 0; + + return clear_stream_bw(tunnel_state, stream_id); + } + + tunnel_state = get_or_add_tunnel_state(new_group_state, tunnel); + if (drm_WARN_ON(state->dev, !tunnel_state)) + return -EINVAL; + + err = set_stream_bw(tunnel_state, stream_id, bw); + if (err) + return err; + + check_unique_stream_ids(new_group_state); + + return 0; +} +EXPORT_SYMBOL(drm_dp_tunnel_atomic_set_stream_bw); + +/** + * drm_dp_tunnel_atomic_get_required_bw - Get the BW required by a DP tunnel + * @tunnel_state: Atomic state of the queried tunnel + * + * Calculate the BW required by a tunnel adding up the required BW of all + * the streams in the tunnel. + * + * Return the total BW required by the tunnel. + */ +int drm_dp_tunnel_atomic_get_required_bw(const struct drm_dp_tunnel_state *tunnel_state) +{ + int tunnel_bw = 0; + int i; + + if (!tunnel_state || !tunnel_state->stream_mask) + return 0; + + for (i = 0; i < hweight32(tunnel_state->stream_mask); i++) + tunnel_bw += tunnel_state->stream_bw[i]; + + return tunnel_bw; +} +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_required_bw); + +/** + * drm_dp_tunnel_atomic_get_group_streams_in_state - Get mask of stream IDs in a group + * @state: Atomic state + * @tunnel: Tunnel object + * @stream_mask: Mask of streams in @tunnel's group + * + * Get the mask of all the stream IDs in the tunnel group of @tunnel. + * + * Return 0 in case of success - with the stream IDs in @stream_mask - or a + * negative error code in case of failure. + */ +int drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state, + const struct drm_dp_tunnel *tunnel, + u32 *stream_mask) +{ + struct drm_dp_tunnel_group_state *group_state; + struct drm_dp_tunnel_state *tunnel_state; + + group_state = drm_dp_tunnel_atomic_get_group_state(state, tunnel); + if (IS_ERR(group_state)) + return PTR_ERR(group_state); + + *stream_mask = 0; + for_each_tunnel_state(group_state, tunnel_state) + *stream_mask |= tunnel_state->stream_mask; + + return 0; +} +EXPORT_SYMBOL(drm_dp_tunnel_atomic_get_group_streams_in_state); + +static int +drm_dp_tunnel_atomic_check_group_bw(struct drm_dp_tunnel_group_state *new_group_state, + u32 *failed_stream_mask) +{ + struct drm_dp_tunnel_group *group = to_group(new_group_state->base.obj); + struct drm_dp_tunnel_state *new_tunnel_state; + u32 group_stream_mask = 0; + int group_bw = 0; + + for_each_tunnel_state(new_group_state, new_tunnel_state) { + struct drm_dp_tunnel *tunnel = new_tunnel_state->tunnel_ref.tunnel; + int max_dprx_bw = get_max_dprx_bw(tunnel); + int tunnel_bw = drm_dp_tunnel_atomic_get_required_bw(new_tunnel_state); + + tun_dbg(tunnel, + "%sRequired %d/%d Mb/s total for tunnel.\n", + tunnel_bw > max_dprx_bw ? 
"Not enough BW: " : "", + DPTUN_BW_ARG(tunnel_bw), + DPTUN_BW_ARG(max_dprx_bw)); + + if (tunnel_bw > max_dprx_bw) { + *failed_stream_mask = new_tunnel_state->stream_mask; + return -ENOSPC; + } + + group_bw += min(roundup(tunnel_bw, tunnel->bw_granularity), + max_dprx_bw); + group_stream_mask |= new_tunnel_state->stream_mask; + } + + tun_grp_dbg(group, + "%sRequired %d/%d Mb/s total for tunnel group.\n", + group_bw > group->available_bw ? "Not enough BW: " : "", + DPTUN_BW_ARG(group_bw), + DPTUN_BW_ARG(group->available_bw)); + + if (group_bw > group->available_bw) { + *failed_stream_mask = group_stream_mask; + return -ENOSPC; + } + + return 0; +} + +/** + * drm_dp_tunnel_atomic_check_stream_bws - Check BW limit for all streams in state + * @state: Atomic state + * @failed_stream_mask: Mask of stream IDs with a BW limit failure + * + * Check the required BW of each DP tunnel in @state against both the DPRX BW + * limit of the tunnel and the BW limit of the tunnel group. Return a mask of + * stream IDs in @failed_stream_mask once a check fails. The mask will contain + * either all the streams in a tunnel (in case a DPRX BW limit check failed) or + * all the streams in a tunnel group (in case a group BW limit check failed). + * + * Return 0 if all the BW limit checks passed, -ENOSPC in case a BW limit + * check failed - with @failed_stream_mask containing the streams failing the + * check - or a negative error code otherwise. + */ +int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state, + u32 *failed_stream_mask) +{ + struct drm_dp_tunnel_group_state *new_group_state; + int i; + + for_each_new_group_in_state(state, new_group_state, i) { + int ret; + + ret = drm_dp_tunnel_atomic_check_group_bw(new_group_state, + failed_stream_mask); + if (ret) + return ret; + } + + return 0; +} +EXPORT_SYMBOL(drm_dp_tunnel_atomic_check_stream_bws); + +static void destroy_mgr(struct drm_dp_tunnel_mgr *mgr) +{ + int i; + + for (i = 0; i < mgr->group_count; i++) { + cleanup_group(&mgr->groups[i]); + drm_WARN_ON(mgr->dev, !list_empty(&mgr->groups[i].tunnels)); + } + +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE + ref_tracker_dir_exit(&mgr->ref_tracker); +#endif + + kfree(mgr->groups); + kfree(mgr); +} + +/** + * drm_dp_tunnel_mgr_create - Create a DP tunnel manager + * @dev: DRM device object + * + * Creates a DP tunnel manager for @dev. + * + * Returns a pointer to the tunnel manager if created successfully or NULL in + * case of an error. + */ +struct drm_dp_tunnel_mgr * +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count) +{ + struct drm_dp_tunnel_mgr *mgr; + int i; + + mgr = kzalloc(sizeof(*mgr), GFP_KERNEL); + if (!mgr) + return NULL; + + mgr->dev = dev; + init_waitqueue_head(&mgr->bw_req_queue); + + mgr->groups = kcalloc(max_group_count, sizeof(*mgr->groups), GFP_KERNEL); + if (!mgr->groups) { + kfree(mgr); + + return NULL; + } + +#ifdef CONFIG_DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE + ref_tracker_dir_init(&mgr->ref_tracker, 16, "dptun"); +#endif + + for (i = 0; i < max_group_count; i++) { + if (!init_group(mgr, &mgr->groups[i])) { + destroy_mgr(mgr); + + return NULL; + } + + mgr->group_count++; + } + + return mgr; +} +EXPORT_SYMBOL(drm_dp_tunnel_mgr_create); + +/** + * drm_dp_tunnel_mgr_destroy - Destroy DP tunnel manager + * @mgr: Tunnel manager object + * + * Destroy the tunnel manager. 
+ */ +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr) +{ + destroy_mgr(mgr); +} +EXPORT_SYMBOL(drm_dp_tunnel_mgr_destroy); diff --git a/include/drm/display/drm_dp.h b/include/drm/display/drm_dp.h index 281afff6ee4e5..8bfd5d007be8d 100644 --- a/include/drm/display/drm_dp.h +++ b/include/drm/display/drm_dp.h @@ -1382,6 +1382,66 @@ #define DP_HDCP_2_2_REG_STREAM_TYPE_OFFSET 0x69494 #define DP_HDCP_2_2_REG_DBG_OFFSET 0x69518 +/* DP-tunneling */ +#define DP_TUNNELING_OUI 0xe0000 +#define DP_TUNNELING_OUI_BYTES 3 + +#define DP_TUNNELING_DEV_ID 0xe0003 +#define DP_TUNNELING_DEV_ID_BYTES 6 + +#define DP_TUNNELING_HW_REV 0xe0009 +#define DP_TUNNELING_HW_REV_MAJOR_SHIFT 4 +#define DP_TUNNELING_HW_REV_MAJOR_MASK (0xf << DP_TUNNELING_HW_REV_MAJOR_SHIFT) +#define DP_TUNNELING_HW_REV_MINOR_SHIFT 0 +#define DP_TUNNELING_HW_REV_MINOR_MASK (0xf << DP_TUNNELING_HW_REV_MINOR_SHIFT) + +#define DP_TUNNELING_SW_REV_MAJOR 0xe000a +#define DP_TUNNELING_SW_REV_MINOR 0xe000b + +#define DP_TUNNELING_CAPABILITIES 0xe000d +#define DP_IN_BW_ALLOCATION_MODE_SUPPORT (1 << 7) +#define DP_PANEL_REPLAY_OPTIMIZATION_SUPPORT (1 << 6) +#define DP_TUNNELING_SUPPORT (1 << 0) + +#define DP_IN_ADAPTER_INFO 0xe000e +#define DP_IN_ADAPTER_NUMBER_BITS 7 +#define DP_IN_ADAPTER_NUMBER_MASK ((1 << DP_IN_ADAPTER_NUMBER_BITS) - 1) + +#define DP_USB4_DRIVER_ID 0xe000f +#define DP_USB4_DRIVER_ID_BITS 4 +#define DP_USB4_DRIVER_ID_MASK ((1 << DP_USB4_DRIVER_ID_BITS) - 1) + +#define DP_USB4_DRIVER_BW_CAPABILITY 0xe0020 +#define DP_USB4_DRIVER_BW_ALLOCATION_MODE_SUPPORT (1 << 7) + +#define DP_IN_ADAPTER_TUNNEL_INFORMATION 0xe0021 +#define DP_GROUP_ID_BITS 3 +#define DP_GROUP_ID_MASK ((1 << DP_GROUP_ID_BITS) - 1) + +#define DP_BW_GRANULARITY 0xe0022 +#define DP_BW_GRANULARITY_MASK 0x3 + +#define DP_ESTIMATED_BW 0xe0023 +#define DP_ALLOCATED_BW 0xe0024 + +#define DP_TUNNELING_STATUS 0xe0025 +#define DP_BW_ALLOCATION_CAPABILITY_CHANGED (1 << 3) +#define DP_ESTIMATED_BW_CHANGED (1 << 2) +#define DP_BW_REQUEST_SUCCEEDED (1 << 1) +#define DP_BW_REQUEST_FAILED (1 << 0) + +#define DP_TUNNELING_MAX_LINK_RATE 0xe0028 + +#define DP_TUNNELING_MAX_LANE_COUNT 0xe0029 +#define DP_TUNNELING_MAX_LANE_COUNT_MASK 0x1f + +#define DP_DPTX_BW_ALLOCATION_MODE_CONTROL 0xe0030 +#define DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE (1 << 7) +#define DP_UNMASK_BW_ALLOCATION_IRQ (1 << 6) + +#define DP_REQUEST_BW 0xe0031 +#define MAX_DP_REQUEST_BW 255 + /* LTTPR: Link Training (LT)-tunable PHY Repeaters */ #define DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV 0xf0000 /* 1.3 */ #define DP_MAX_LINK_RATE_PHY_REPEATER 0xf0001 /* 1.4a */ diff --git a/include/drm/display/drm_dp_tunnel.h b/include/drm/display/drm_dp_tunnel.h new file mode 100644 index 0000000000000..69267d2f7980f --- /dev/null +++ b/include/drm/display/drm_dp_tunnel.h @@ -0,0 +1,249 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2023 Intel Corporation + */ + +#ifndef __DRM_DP_TUNNEL_H__ +#define __DRM_DP_TUNNEL_H__ + +#include +#include +#include + +struct drm_dp_aux; + +struct drm_device; + +struct drm_atomic_state; +struct drm_dp_tunnel_mgr; +struct drm_dp_tunnel_state; + +struct ref_tracker; + +struct drm_dp_tunnel_ref { + struct drm_dp_tunnel *tunnel; + struct ref_tracker *tracker; +}; + +#ifdef CONFIG_DRM_DISPLAY_DP_TUNNEL + +struct drm_dp_tunnel * +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker); + +void +drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker); + +static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel 
*tunnel, + struct drm_dp_tunnel_ref *tunnel_ref) +{ + tunnel_ref->tunnel = drm_dp_tunnel_get(tunnel, &tunnel_ref->tracker); +} + +static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref *tunnel_ref) +{ + drm_dp_tunnel_put(tunnel_ref->tunnel, &tunnel_ref->tracker); + tunnel_ref->tunnel = NULL; +} + +struct drm_dp_tunnel * +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr, + struct drm_dp_aux *aux); +int drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel); + +int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel); +int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel); +bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel); +int drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw); +int drm_dp_tunnel_get_allocated_bw(struct drm_dp_tunnel *tunnel); +int drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel); + +void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel); + +int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr, + struct drm_dp_aux *aux); + +int drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel); +int drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel); +int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel); + +const char *drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel); + +struct drm_dp_tunnel_state * +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state, + struct drm_dp_tunnel *tunnel); + +struct drm_dp_tunnel_state * +drm_dp_tunnel_atomic_get_old_state(struct drm_atomic_state *state, + const struct drm_dp_tunnel *tunnel); + +struct drm_dp_tunnel_state * +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state, + const struct drm_dp_tunnel *tunnel); + +int drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state, + struct drm_dp_tunnel *tunnel, + u8 stream_id, int bw); +int drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state, + const struct drm_dp_tunnel *tunnel, + u32 *stream_mask); + +int drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state, + u32 *failed_stream_mask); + +int drm_dp_tunnel_atomic_get_required_bw(const struct drm_dp_tunnel_state *tunnel_state); + +struct drm_dp_tunnel_mgr * +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count); +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr); + +#else + +static inline struct drm_dp_tunnel * +drm_dp_tunnel_get(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker) +{ + return NULL; +} + +static inline void +drm_dp_tunnel_put(struct drm_dp_tunnel *tunnel, struct ref_tracker **tracker) {} + +static inline void drm_dp_tunnel_ref_get(struct drm_dp_tunnel *tunnel, + struct drm_dp_tunnel_ref *tunnel_ref) {} + +static inline void drm_dp_tunnel_ref_put(struct drm_dp_tunnel_ref *tunnel_ref) {} + +static inline struct drm_dp_tunnel * +drm_dp_tunnel_detect(struct drm_dp_tunnel_mgr *mgr, + struct drm_dp_aux *aux) +{ + return ERR_PTR(-EOPNOTSUPP); +} + +static inline int +drm_dp_tunnel_destroy(struct drm_dp_tunnel *tunnel) +{ + return 0; +} + +static inline int drm_dp_tunnel_enable_bw_alloc(struct drm_dp_tunnel *tunnel) +{ + return -EOPNOTSUPP; +} + +static inline int drm_dp_tunnel_disable_bw_alloc(struct drm_dp_tunnel *tunnel) +{ + return -EOPNOTSUPP; +} + +static inline bool drm_dp_tunnel_bw_alloc_is_enabled(const struct drm_dp_tunnel *tunnel) +{ + return false; +} + +static inline int +drm_dp_tunnel_alloc_bw(struct drm_dp_tunnel *tunnel, int bw) +{ + return -EOPNOTSUPP; +} + +static inline int 
+drm_dp_tunnel_get_allocated_bw(struct drm_dp_tunnel *tunnel) +{ + return -1; +} + +static inline int +drm_dp_tunnel_update_state(struct drm_dp_tunnel *tunnel) +{ + return -EOPNOTSUPP; +} + +static inline void drm_dp_tunnel_set_io_error(struct drm_dp_tunnel *tunnel) {} + +static inline int +drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr, + struct drm_dp_aux *aux) +{ + return -EOPNOTSUPP; +} + +static inline int +drm_dp_tunnel_max_dprx_rate(const struct drm_dp_tunnel *tunnel) +{ + return 0; +} + +static inline int +drm_dp_tunnel_max_dprx_lane_count(const struct drm_dp_tunnel *tunnel) +{ + return 0; +} + +static inline int +drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel) +{ + return -1; +} + +static inline const char * +drm_dp_tunnel_name(const struct drm_dp_tunnel *tunnel) +{ + return NULL; +} + +static inline struct drm_dp_tunnel_state * +drm_dp_tunnel_atomic_get_state(struct drm_atomic_state *state, + struct drm_dp_tunnel *tunnel) +{ + return ERR_PTR(-EOPNOTSUPP); +} + +static inline struct drm_dp_tunnel_state * +drm_dp_tunnel_atomic_get_new_state(struct drm_atomic_state *state, + const struct drm_dp_tunnel *tunnel) +{ + return ERR_PTR(-EOPNOTSUPP); +} + +static inline int +drm_dp_tunnel_atomic_set_stream_bw(struct drm_atomic_state *state, + struct drm_dp_tunnel *tunnel, + u8 stream_id, int bw) +{ + return -EOPNOTSUPP; +} + +static inline int +drm_dp_tunnel_atomic_get_group_streams_in_state(struct drm_atomic_state *state, + const struct drm_dp_tunnel *tunnel, + u32 *stream_mask) +{ + return -EOPNOTSUPP; +} + +static inline int +drm_dp_tunnel_atomic_check_stream_bws(struct drm_atomic_state *state, + u32 *failed_stream_mask) +{ + return -EOPNOTSUPP; +} + +static inline int +drm_dp_tunnel_atomic_get_required_bw(const struct drm_dp_tunnel_state *tunnel_state) +{ + return 0; +} + +static inline struct drm_dp_tunnel_mgr * +drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count) +{ + return ERR_PTR(-EOPNOTSUPP); +} + +static inline +void drm_dp_tunnel_mgr_destroy(struct drm_dp_tunnel_mgr *mgr) {} + + +#endif /* CONFIG_DRM_DISPLAY_DP_TUNNEL */ + +#endif /* __DRM_DP_TUNNEL_H__ */ From patchwork Mon Feb 26 18:52:45 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Imre Deak X-Patchwork-Id: 13572698 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5AE1FC5478C for ; Mon, 26 Feb 2024 18:52:34 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id AD79F10EE23; Mon, 26 Feb 2024 18:52:33 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=intel.com header.i=@intel.com header.b="NKsSV4JU"; dkim-atps=neutral Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.19]) by gabe.freedesktop.org (Postfix) with ESMTPS id BAEA310E63E for ; Mon, 26 Feb 2024 18:52:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1708973548; x=1740509548; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=dkhNooBSbe08WkniK1xNaYqak7r0bOVstdSsUTbRs2I=; 
From: Imre Deak
To: intel-gfx@lists.freedesktop.org
Cc: Ville Syrjälä, Uma Shankar
Subject: [PATCH v3 11/21] drm/i915/dp: Sync instead of try-sync commits when getting active pipes
Date: Mon, 26 Feb 2024 20:52:45 +0200
Message-Id: <20240226185246.1276018-3-imre.deak@intel.com>
In-Reply-To: <20240220211841.448846-12-imre.deak@intel.com>
References: <20240220211841.448846-12-imre.deak@intel.com>

Sync instead of only try-sync non-blocking commits when getting the
active pipes through a given DP port.

Atm, intel_dp_get_active_pipes() will only try to sync a given pipe and,
if that would block, ignore the pipe. This was supposed to avoid link
retraining in case a pending modeset would do that anyway; however, that
could also incorrectly ignore for instance fastset pipes (which don't
retrain the link). The TC port reset path needs to handle all pipes,
even if waiting for a pending commit would block.

To account for the above cases, sync all the pipes unconditionally.
This also prepares for a follow-up change enabling the DP tunnel BW
allocation mode, which needs to ensure that all active pipes are synced
and returned from intel_dp_get_active_pipes().

v2:
- Add a separate function to try-sync the pipes. (Ville)
v3:
- Just sync the pipes unconditionally in intel_dp_get_active_pipes().
  (Ville)

Cc: Ville Syrjälä
Reviewed-by: Uma Shankar (v2)
Reviewed-by: Ville Syrjälä
Signed-off-by: Imre Deak
---
 drivers/gpu/drm/i915/display/intel_dp.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index ebf5a6a344a40..b49dc3836b29b 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -4980,9 +4980,10 @@ int intel_dp_get_active_pipes(struct intel_dp *intel_dp,
 		if (!crtc_state->hw.active)
 			continue;
 
-		if (conn_state->commit &&
-		    !try_wait_for_completion(&conn_state->commit->hw_done))
-			continue;
+		if (conn_state->commit)
+			drm_WARN_ON(&i915->drm,
+				    !wait_for_completion_timeout(&conn_state->commit->hw_done,
+								 msecs_to_jiffies(5000)));
 
 		*pipe_mask |= BIT(crtc->pipe);
 	}

From patchwork Mon Feb 26 18:52:46 2024
X-Patchwork-Submitter: Imre Deak
X-Patchwork-Id: 13572700
From: Imre Deak
To: intel-gfx@lists.freedesktop.org
Cc: Ville Syrjälä, Uma Shankar
Subject: [PATCH v3 12/21] drm/i915/dp: Add support for DP tunnel BW allocation
Date: Mon, 26 Feb 2024 20:52:46 +0200
Message-Id: <20240226185246.1276018-4-imre.deak@intel.com>
In-Reply-To: <20240220211841.448846-13-imre.deak@intel.com>
References: <20240220211841.448846-13-imre.deak@intel.com>
Add support to enable the DP tunnel BW allocation mode. Follow-up
patches will call the required helpers added here to prepare for a
modeset on a link with DP tunnels, the last change in the patchset
actually enabling BWA.

With BWA enabled, the driver will expose the full mode list a display
supports, regardless of any BW limitation on a shared (Thunderbolt)
link. Such BW limits will only be checked during a modeset, when the
driver has full knowledge of each display's BW requirement.

If the link BW changes in a way that may also change a connector's mode
list, userspace will get a hotplug notification for all the connectors
sharing the same link (so it can adjust the mode used for a display).

The BW limitation can change at any point, asynchronously to modesets
on a given connector, so a modeset can fail even though the atomic
check for it passed. In such scenarios userspace will get a bad link
notification and in response is supposed to retry the modeset.

v2:
- Fix old vs. new connector state in intel_dp_tunnel_atomic_check_state().
  (Ville)
- Fix propagating the error from intel_dp_tunnel_atomic_compute_stream_bw().
  (Ville)
- Move tunnel==NULL checks from driver to DRM core helpers. (Ville)
- Simplify return flow from intel_dp_tunnel_detect(). (Ville)
- s/dp_tunnel_state/inherited_dp_tunnels (Ville)
- Simplify struct intel_dp_tunnel_inherited_state. (Ville)
- Unconstify object pointers (vs. states) where possible. (Ville)
- Init crtc_state while declaring it in check_group_state(). (Ville)
- Join obj->base.id, obj->name arg lines in debug prints to reduce LOC.
  (Ville)
- Add/rework intel_dp_tunnel_atomic_alloc_bw() to prepare for moving the
  BW allocation from encoder hooks up to intel_atomic_commit_tail()
  later in the patchset.
- Disable BW alloc mode during system suspend.
- Allocate the required BW for all tunnels during system resume.
- Add intel_dp_tunnel_atomic_clear_stream_bw() instead of the open-coded
  sequence in a follow-up patch.
- Add function documentation to all exported functions.
- Add CONFIG_USB4 dependency to CONFIG_DRM_I915_DP_TUNNEL.

v3:
- Rebase on intel_dp_get_active_pipes() change in previous patch.

Cc: Ville Syrjälä
Reviewed-by: Ville Syrjälä
Reviewed-by: Uma Shankar
Signed-off-by: Imre Deak
---
 drivers/gpu/drm/i915/Kconfig                  |  14 +
 drivers/gpu/drm/i915/Kconfig.debug            |   1 +
 drivers/gpu/drm/i915/Makefile                 |   3 +
 drivers/gpu/drm/i915/display/intel_atomic.c   |   2 +
 .../gpu/drm/i915/display/intel_display_core.h |   1 +
 .../drm/i915/display/intel_display_types.h    |   9 +
 .../gpu/drm/i915/display/intel_dp_tunnel.c    | 811 ++++++++++++++++++
 .../gpu/drm/i915/display/intel_dp_tunnel.h    | 133 +++
 8 files changed, 974 insertions(+)
 create mode 100644 drivers/gpu/drm/i915/display/intel_dp_tunnel.c
 create mode 100644 drivers/gpu/drm/i915/display/intel_dp_tunnel.h

diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig
index 3089029abba48..5932024f8f954 100644
--- a/drivers/gpu/drm/i915/Kconfig
+++ b/drivers/gpu/drm/i915/Kconfig
@@ -155,6 +155,20 @@ config DRM_I915_PXP
 	  protected session and manage the status of the alive software session,
 	  as well as its life cycle.
+config DRM_I915_DP_TUNNEL + bool "Enable DP tunnel support" + depends on DRM_I915 + depends on USB4 + select DRM_DISPLAY_DP_TUNNEL + default y + help + Choose this option to detect DP tunnels and enable the Bandwidth + Allocation mode for such tunnels. This allows using the maximum + resolution allowed by the link BW on all displays sharing the + link BW, for instance on a Thunderbolt link. + + If in doubt, say "Y". + menu "drm/i915 Debugging" depends on DRM_I915 depends on EXPERT diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug index 5b7162076850c..bc18e2d9ea05d 100644 --- a/drivers/gpu/drm/i915/Kconfig.debug +++ b/drivers/gpu/drm/i915/Kconfig.debug @@ -28,6 +28,7 @@ config DRM_I915_DEBUG select STACKDEPOT select STACKTRACE select DRM_DP_AUX_CHARDEV + select DRM_DISPLAY_DEBUG_DP_TUNNEL_STATE if DRM_I915_DP_TUNNEL select X86_MSR # used by igt/pm_rpm select DRM_VGEM # used by igt/prime_vgem (dmabuf interop checks) select DRM_DEBUG_MM if DRM=y diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile index c13f14edb5088..3ef6ed41e62b4 100644 --- a/drivers/gpu/drm/i915/Makefile +++ b/drivers/gpu/drm/i915/Makefile @@ -369,6 +369,9 @@ i915-y += \ display/vlv_dsi.o \ display/vlv_dsi_pll.o +i915-$(CONFIG_DRM_I915_DP_TUNNEL) += \ + display/intel_dp_tunnel.o + i915-y += \ i915_perf.o diff --git a/drivers/gpu/drm/i915/display/intel_atomic.c b/drivers/gpu/drm/i915/display/intel_atomic.c index ec0d5168b5035..96ab37e158995 100644 --- a/drivers/gpu/drm/i915/display/intel_atomic.c +++ b/drivers/gpu/drm/i915/display/intel_atomic.c @@ -29,6 +29,7 @@ * See intel_atomic_plane.c for the plane-specific atomic functionality. */ +#include #include #include #include @@ -38,6 +39,7 @@ #include "intel_atomic.h" #include "intel_cdclk.h" #include "intel_display_types.h" +#include "intel_dp_tunnel.h" #include "intel_global_state.h" #include "intel_hdcp.h" #include "intel_psr.h" diff --git a/drivers/gpu/drm/i915/display/intel_display_core.h b/drivers/gpu/drm/i915/display/intel_display_core.h index fdeaac994e17b..2167dbee5eea7 100644 --- a/drivers/gpu/drm/i915/display/intel_display_core.h +++ b/drivers/gpu/drm/i915/display/intel_display_core.h @@ -524,6 +524,7 @@ struct intel_display { } wq; /* Grouping using named structs. Keep sorted. */ + struct drm_dp_tunnel_mgr *dp_tunnel_mgr; struct intel_audio audio; struct intel_dpll dpll; struct intel_fbc *fbc[I915_MAX_FBCS]; diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h index 8ce986fadd9aa..860e867586f48 100644 --- a/drivers/gpu/drm/i915/display/intel_display_types.h +++ b/drivers/gpu/drm/i915/display/intel_display_types.h @@ -33,6 +33,7 @@ #include #include +#include #include #include #include @@ -682,6 +683,8 @@ struct intel_atomic_state { struct intel_shared_dpll_state shared_dpll[I915_NUM_PLLS]; + struct intel_dp_tunnel_inherited_state *inherited_dp_tunnels; + /* * Current watermarks can't be trusted during hardware readout, so * don't bother calculating intermediate watermarks. @@ -1379,6 +1382,9 @@ struct intel_crtc_state { struct drm_dsc_config config; } dsc; + /* DP tunnel used for BW allocation. 
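+	 * A reference is held on the tunnel for as long as the CRTC's stream
+	 * BW is accounted for in the tunnel's atomic state.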
*/ + struct drm_dp_tunnel_ref dp_tunnel_ref; + /* HSW+ linetime watermarks */ u16 linetime; u16 ips_linetime; @@ -1789,6 +1795,9 @@ struct intel_dp { /* connector directly attached - won't be use for modeset in mst world */ struct intel_connector *attached_connector; + struct drm_dp_tunnel *tunnel; + bool tunnel_suspended:1; + /* mst connector list */ struct intel_dp_mst_encoder *mst_encoders[I915_MAX_PIPES]; struct drm_dp_mst_topology_mgr mst_mgr; diff --git a/drivers/gpu/drm/i915/display/intel_dp_tunnel.c b/drivers/gpu/drm/i915/display/intel_dp_tunnel.c new file mode 100644 index 0000000000000..2f144aefb89f4 --- /dev/null +++ b/drivers/gpu/drm/i915/display/intel_dp_tunnel.c @@ -0,0 +1,811 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2023 Intel Corporation + */ + +#include "i915_drv.h" + +#include + +#include "intel_atomic.h" +#include "intel_display_limits.h" +#include "intel_display_types.h" +#include "intel_dp.h" +#include "intel_dp_link_training.h" +#include "intel_dp_mst.h" +#include "intel_dp_tunnel.h" +#include "intel_link_bw.h" + +struct intel_dp_tunnel_inherited_state { + struct drm_dp_tunnel_ref ref[I915_MAX_PIPES]; +}; + +/** + * intel_dp_tunnel_disconnect - Disconnect a DP tunnel from a port + * @intel_dp: DP port object the tunnel is connected to + * + * Disconnect a DP tunnel from @intel_dp, destroying any related state. This + * should be called after detecting a sink-disconnect event from the port. + */ +void intel_dp_tunnel_disconnect(struct intel_dp *intel_dp) +{ + drm_dp_tunnel_destroy(intel_dp->tunnel); + intel_dp->tunnel = NULL; +} + +/** + * intel_dp_tunnel_destroy - Destroy a DP tunnel + * @intel_dp: DP port object the tunnel is connected to + * + * Destroy a DP tunnel connected to @intel_dp, after disabling the BW + * allocation mode on the tunnel. This should be called while destroying the + * port. + */ +void intel_dp_tunnel_destroy(struct intel_dp *intel_dp) +{ + if (intel_dp_tunnel_bw_alloc_is_enabled(intel_dp)) + drm_dp_tunnel_disable_bw_alloc(intel_dp->tunnel); + + intel_dp_tunnel_disconnect(intel_dp); +} + +static int kbytes_to_mbits(int kbytes) +{ + return DIV_ROUND_UP(kbytes * 8, 1000); +} + +static int get_current_link_bw(struct intel_dp *intel_dp, + bool *below_dprx_bw) +{ + int rate = intel_dp_max_common_rate(intel_dp); + int lane_count = intel_dp_max_common_lane_count(intel_dp); + int bw; + + bw = intel_dp_max_link_data_rate(intel_dp, rate, lane_count); + *below_dprx_bw = bw < drm_dp_max_dprx_data_rate(rate, lane_count); + + return bw; +} + +static int update_tunnel_state(struct intel_dp *intel_dp) +{ + struct drm_i915_private *i915 = dp_to_i915(intel_dp); + struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; + bool old_bw_below_dprx; + bool new_bw_below_dprx; + int old_bw; + int new_bw; + int ret; + + old_bw = get_current_link_bw(intel_dp, &old_bw_below_dprx); + + ret = drm_dp_tunnel_update_state(intel_dp->tunnel); + if (ret < 0) { + drm_dbg_kms(&i915->drm, + "[DPTUN %s][ENCODER:%d:%s] State update failed (err %pe)\n", + drm_dp_tunnel_name(intel_dp->tunnel), + encoder->base.base.id, encoder->base.name, + ERR_PTR(ret)); + + return ret; + } + + if (ret == 0 || + !drm_dp_tunnel_bw_alloc_is_enabled(intel_dp->tunnel)) + return 0; + + intel_dp_update_sink_caps(intel_dp); + + new_bw = get_current_link_bw(intel_dp, &new_bw_below_dprx); + + /* Suppress the notification if the mode list can't change due to bw. 
*/ + if (old_bw_below_dprx == new_bw_below_dprx && + !new_bw_below_dprx) + return 0; + + drm_dbg_kms(&i915->drm, + "[DPTUN %s][ENCODER:%d:%s] Notify users about BW change: %d -> %d\n", + drm_dp_tunnel_name(intel_dp->tunnel), + encoder->base.base.id, encoder->base.name, + kbytes_to_mbits(old_bw), kbytes_to_mbits(new_bw)); + + return 1; +} + +/* + * Allocate the BW for a tunnel on a DP connector/port if the connector/port + * was already active when detecting the tunnel. The allocated BW must be + * freed by the next atomic modeset, storing the BW in the + * intel_atomic_state::inherited_dp_tunnels, and calling + * intel_dp_tunnel_atomic_free_bw(). + */ +static int allocate_initial_tunnel_bw_for_pipes(struct intel_dp *intel_dp, u8 pipe_mask) +{ + struct drm_i915_private *i915 = dp_to_i915(intel_dp); + struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; + struct intel_crtc *crtc; + int tunnel_bw = 0; + int err; + + for_each_intel_crtc_in_pipe_mask(&i915->drm, crtc, pipe_mask) { + const struct intel_crtc_state *crtc_state = + to_intel_crtc_state(crtc->base.state); + int stream_bw = intel_dp_config_required_rate(crtc_state); + + tunnel_bw += stream_bw; + + drm_dbg_kms(&i915->drm, + "[DPTUN %s][ENCODER:%d:%s][CRTC:%d:%s] Initial BW for stream %d: %d/%d Mb/s\n", + drm_dp_tunnel_name(intel_dp->tunnel), + encoder->base.base.id, encoder->base.name, + crtc->base.base.id, crtc->base.name, + crtc->pipe, + kbytes_to_mbits(stream_bw), kbytes_to_mbits(tunnel_bw)); + } + + err = drm_dp_tunnel_alloc_bw(intel_dp->tunnel, tunnel_bw); + if (err) { + drm_dbg_kms(&i915->drm, + "[DPTUN %s][ENCODER:%d:%s] Initial BW allocation failed (err %pe)\n", + drm_dp_tunnel_name(intel_dp->tunnel), + encoder->base.base.id, encoder->base.name, + ERR_PTR(err)); + + return err; + } + + return update_tunnel_state(intel_dp); +} + +static int allocate_initial_tunnel_bw(struct intel_dp *intel_dp, + struct drm_modeset_acquire_ctx *ctx) +{ + u8 pipe_mask; + int err; + + err = intel_dp_get_active_pipes(intel_dp, ctx, &pipe_mask); + if (err) + return err; + + return allocate_initial_tunnel_bw_for_pipes(intel_dp, pipe_mask); +} + +static int detect_new_tunnel(struct intel_dp *intel_dp, struct drm_modeset_acquire_ctx *ctx) +{ + struct drm_i915_private *i915 = dp_to_i915(intel_dp); + struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; + struct drm_dp_tunnel *tunnel; + int ret; + + tunnel = drm_dp_tunnel_detect(i915->display.dp_tunnel_mgr, + &intel_dp->aux); + if (IS_ERR(tunnel)) + return PTR_ERR(tunnel); + + intel_dp->tunnel = tunnel; + + ret = drm_dp_tunnel_enable_bw_alloc(intel_dp->tunnel); + if (ret) { + if (ret == -EOPNOTSUPP) + return 0; + + drm_dbg_kms(&i915->drm, + "[DPTUN %s][ENCODER:%d:%s] Failed to enable BW allocation mode (ret %pe)\n", + drm_dp_tunnel_name(intel_dp->tunnel), + encoder->base.base.id, encoder->base.name, + ERR_PTR(ret)); + + /* Keep the tunnel with BWA disabled */ + return 0; + } + + ret = allocate_initial_tunnel_bw(intel_dp, ctx); + if (ret < 0) + intel_dp_tunnel_destroy(intel_dp); + + return ret; +} + +/** + * intel_dp_tunnel_detect - Detect a DP tunnel on a port + * @intel_dp: DP port object + * @ctx: lock context acquired by the connector detection handler + * + * Detect a DP tunnel on the @intel_dp port, enabling the BW allocation mode + * on it if supported and allocating the BW required on an already active port. + * The BW allocated this way must be freed by the next atomic modeset calling + * intel_dp_tunnel_atomic_free_bw(). 
+ * + * If @intel_dp has already a tunnel detected on it, update the tunnel's state + * wrt. its support for BW allocation mode and the available BW via the + * tunnel. If the tunnel's state change requires this - for instance the + * tunnel's group ID has changed - the tunnel will be dropped and recreated. + * + * Return 0 in case of success - after any tunnel detected and added to + * @intel_dp - 1 in case the BW on an already existing tunnel has changed in a + * way that requires notifying user space. + */ +int intel_dp_tunnel_detect(struct intel_dp *intel_dp, struct drm_modeset_acquire_ctx *ctx) +{ + int ret; + + if (intel_dp_is_edp(intel_dp)) + return 0; + + if (intel_dp->tunnel) { + ret = update_tunnel_state(intel_dp); + if (ret >= 0) + return ret; + + /* Try to recreate the tunnel after an update error. */ + intel_dp_tunnel_destroy(intel_dp); + } + + return detect_new_tunnel(intel_dp, ctx); +} + +/** + * intel_dp_tunnel_bw_alloc_is_enabled - Query the BW allocation support on a tunnel + * @intel_dp: DP port object + * + * Query whether a DP tunnel is connected on @intel_dp and the tunnel supports + * the BW allocation mode. + * + * Returns %true if the BW allocation mode is supported on @intel_dp. + */ +bool intel_dp_tunnel_bw_alloc_is_enabled(struct intel_dp *intel_dp) +{ + return drm_dp_tunnel_bw_alloc_is_enabled(intel_dp->tunnel); +} + +/** + * intel_dp_tunnel_suspend - Suspend a DP tunnel connected on a port + * @intel_dp: DP port object + * + * Suspend a DP tunnel on @intel_dp with BW allocation mode enabled on it. + */ +void intel_dp_tunnel_suspend(struct intel_dp *intel_dp) +{ + struct drm_i915_private *i915 = dp_to_i915(intel_dp); + struct intel_connector *connector = intel_dp->attached_connector; + struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; + + if (!intel_dp_tunnel_bw_alloc_is_enabled(intel_dp)) + return; + + drm_dbg_kms(&i915->drm, "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s] Suspend\n", + drm_dp_tunnel_name(intel_dp->tunnel), + connector->base.base.id, connector->base.name, + encoder->base.base.id, encoder->base.name); + + drm_dp_tunnel_disable_bw_alloc(intel_dp->tunnel); + + intel_dp->tunnel_suspended = true; +} + +/** + * intel_dp_tunnel_resume - Resume a DP tunnel connected on a port + * @intel_dp: DP port object + * @crtc_state: CRTC state + * @dpcd_updated: the DPCD DPRX capabilities got updated during resume + * + * Resume a DP tunnel on @intel_dp with BW allocation mode enabled on it. + */ +void intel_dp_tunnel_resume(struct intel_dp *intel_dp, + const struct intel_crtc_state *crtc_state, + bool dpcd_updated) +{ + struct drm_i915_private *i915 = dp_to_i915(intel_dp); + struct intel_connector *connector = intel_dp->attached_connector; + struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; + u8 dpcd[DP_RECEIVER_CAP_SIZE]; + u8 pipe_mask; + int err = 0; + + if (!intel_dp->tunnel_suspended) + return; + + intel_dp->tunnel_suspended = false; + + drm_dbg_kms(&i915->drm, "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s] Resume\n", + drm_dp_tunnel_name(intel_dp->tunnel), + connector->base.base.id, connector->base.name, + encoder->base.base.id, encoder->base.name); + + /* + * The TBT Connection Manager requires the GFX driver to read out + * the sink's DPRX caps to be able to service any BW requests later. + * During resume overriding the caps in @intel_dp cached before + * suspend must be avoided, so do here only a dummy read, unless the + * capabilities were updated already during resume. 
+ */ + if (!dpcd_updated) { + err = intel_dp_read_dprx_caps(intel_dp, dpcd); + + if (err) { + drm_dp_tunnel_set_io_error(intel_dp->tunnel); + goto out_err; + } + } + + err = drm_dp_tunnel_enable_bw_alloc(intel_dp->tunnel); + if (err) + goto out_err; + + pipe_mask = 0; + if (crtc_state) { + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); + + /* TODO: Add support for MST */ + pipe_mask |= BIT(crtc->pipe); + } + + err = allocate_initial_tunnel_bw_for_pipes(intel_dp, pipe_mask); + if (err) + goto out_err; + + return; + +out_err: + drm_dbg_kms(&i915->drm, + "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s] Tunnel can't be resumed, will drop and redect it (err %pe)\n", + drm_dp_tunnel_name(intel_dp->tunnel), + connector->base.base.id, connector->base.name, + encoder->base.base.id, encoder->base.name, + ERR_PTR(err)); +} + +static struct drm_dp_tunnel * +get_inherited_tunnel(struct intel_atomic_state *state, struct intel_crtc *crtc) +{ + if (!state->inherited_dp_tunnels) + return NULL; + + return state->inherited_dp_tunnels->ref[crtc->pipe].tunnel; +} + +static int +add_inherited_tunnel(struct intel_atomic_state *state, + struct drm_dp_tunnel *tunnel, + struct intel_crtc *crtc) +{ + struct drm_i915_private *i915 = to_i915(state->base.dev); + struct drm_dp_tunnel *old_tunnel; + + old_tunnel = get_inherited_tunnel(state, crtc); + if (old_tunnel) { + drm_WARN_ON(&i915->drm, old_tunnel != tunnel); + return 0; + } + + if (!state->inherited_dp_tunnels) { + state->inherited_dp_tunnels = kzalloc(sizeof(*state->inherited_dp_tunnels), + GFP_KERNEL); + if (!state->inherited_dp_tunnels) + return -ENOMEM; + } + + drm_dp_tunnel_ref_get(tunnel, &state->inherited_dp_tunnels->ref[crtc->pipe]); + + return 0; +} + +static int check_inherited_tunnel_state(struct intel_atomic_state *state, + struct intel_dp *intel_dp, + const struct intel_digital_connector_state *old_conn_state) +{ + struct drm_i915_private *i915 = dp_to_i915(intel_dp); + struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; + struct intel_connector *connector = + to_intel_connector(old_conn_state->base.connector); + struct intel_crtc *old_crtc; + const struct intel_crtc_state *old_crtc_state; + + /* + * If a BWA tunnel gets detected only after the corresponding + * connector got enabled already without a BWA tunnel, or a different + * BWA tunnel (which was removed meanwhile) the old CRTC state won't + * contain the state of the current tunnel. This tunnel still has a + * reserved BW, which needs to be released, add the state for such + * inherited tunnels separately only to this atomic state. + */ + if (!intel_dp_tunnel_bw_alloc_is_enabled(intel_dp)) + return 0; + + if (!old_conn_state->base.crtc) + return 0; + + old_crtc = to_intel_crtc(old_conn_state->base.crtc); + old_crtc_state = intel_atomic_get_old_crtc_state(state, old_crtc); + + if (!old_crtc_state->hw.active || + old_crtc_state->dp_tunnel_ref.tunnel == intel_dp->tunnel) + return 0; + + drm_dbg_kms(&i915->drm, + "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s][CRTC:%d:%s] Adding state for inherited tunnel %p\n", + drm_dp_tunnel_name(intel_dp->tunnel), + connector->base.base.id, connector->base.name, + encoder->base.base.id, encoder->base.name, + old_crtc->base.base.id, old_crtc->base.name, + intel_dp->tunnel); + + return add_inherited_tunnel(state, intel_dp->tunnel, old_crtc); +} + +/** + * intel_dp_tunnel_atomic_cleanup_inherited_state - Free any inherited DP tunnel state + * @state: Atomic state + * + * Free the inherited DP tunnel state in @state. 
+ */ +void intel_dp_tunnel_atomic_cleanup_inherited_state(struct intel_atomic_state *state) +{ + enum pipe pipe; + + if (!state->inherited_dp_tunnels) + return; + + for_each_pipe(to_i915(state->base.dev), pipe) + if (state->inherited_dp_tunnels->ref[pipe].tunnel) + drm_dp_tunnel_ref_put(&state->inherited_dp_tunnels->ref[pipe]); + + kfree(state->inherited_dp_tunnels); + state->inherited_dp_tunnels = NULL; +} + +static int intel_dp_tunnel_atomic_add_group_state(struct intel_atomic_state *state, + struct drm_dp_tunnel *tunnel) +{ + struct drm_i915_private *i915 = to_i915(state->base.dev); + u32 pipe_mask; + int err; + + err = drm_dp_tunnel_atomic_get_group_streams_in_state(&state->base, + tunnel, &pipe_mask); + if (err) + return err; + + drm_WARN_ON(&i915->drm, pipe_mask & ~((1 << I915_MAX_PIPES) - 1)); + + return intel_modeset_pipes_in_mask_early(state, "DPTUN", pipe_mask); +} + +/** + * intel_dp_tunnel_atomic_add_state_for_crtc - Add CRTC specific DP tunnel state + * @state: Atomic state + * @crtc: CRTC to add the tunnel state for + * + * Add the DP tunnel state for @crtc if the CRTC (aka DP tunnel stream) is enabled + * via a DP tunnel. + * + * Return 0 in case of success, a negative error code otherwise. + */ +int intel_dp_tunnel_atomic_add_state_for_crtc(struct intel_atomic_state *state, + struct intel_crtc *crtc) +{ + const struct intel_crtc_state *new_crtc_state = + intel_atomic_get_new_crtc_state(state, crtc); + const struct drm_dp_tunnel_state *tunnel_state; + struct drm_dp_tunnel *tunnel = new_crtc_state->dp_tunnel_ref.tunnel; + + if (!tunnel) + return 0; + + tunnel_state = drm_dp_tunnel_atomic_get_state(&state->base, tunnel); + if (IS_ERR(tunnel_state)) + return PTR_ERR(tunnel_state); + + return 0; +} + +static int check_group_state(struct intel_atomic_state *state, + struct intel_dp *intel_dp, + struct intel_connector *connector, + struct intel_crtc *crtc) +{ + struct drm_i915_private *i915 = to_i915(state->base.dev); + struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; + const struct intel_crtc_state *crtc_state = + intel_atomic_get_new_crtc_state(state, crtc); + + if (!crtc_state->dp_tunnel_ref.tunnel) + return 0; + + drm_dbg_kms(&i915->drm, + "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s][CRTC:%d:%s] Adding group state for tunnel %p\n", + drm_dp_tunnel_name(intel_dp->tunnel), + connector->base.base.id, connector->base.name, + encoder->base.base.id, encoder->base.name, + crtc->base.base.id, crtc->base.name, + crtc_state->dp_tunnel_ref.tunnel); + + return intel_dp_tunnel_atomic_add_group_state(state, crtc_state->dp_tunnel_ref.tunnel); +} + +/** + * intel_dp_tunnel_atomic_check_state - Check a connector's DP tunnel specific state + * @state: Atomic state + * @intel_dp: DP port object + * @connector: connector using @intel_dp + * + * Check and add the DP tunnel atomic state for @intel_dp/@connector to + * @state, if there is a DP tunnel detected on @intel_dp with BW allocation + * mode enabled on it, or if @intel_dp/@connector was previously enabled via a + * DP tunnel. + * + * Returns 0 in case of success, or a negative error code otherwise. 
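+ *
+ * A minimal sketch of the intended call site (illustrative), from the
+ * connector's atomic check hook:
+ *
+ *	err = intel_dp_tunnel_atomic_check_state(state, intel_dp, connector);
+ *	if (err)
+ *		return err;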
+ */ +int intel_dp_tunnel_atomic_check_state(struct intel_atomic_state *state, + struct intel_dp *intel_dp, + struct intel_connector *connector) +{ + const struct intel_digital_connector_state *old_conn_state = + intel_atomic_get_old_connector_state(state, connector); + const struct intel_digital_connector_state *new_conn_state = + intel_atomic_get_new_connector_state(state, connector); + int err; + + if (old_conn_state->base.crtc) { + err = check_group_state(state, intel_dp, connector, + to_intel_crtc(old_conn_state->base.crtc)); + if (err) + return err; + } + + if (new_conn_state->base.crtc && + new_conn_state->base.crtc != old_conn_state->base.crtc) { + err = check_group_state(state, intel_dp, connector, + to_intel_crtc(new_conn_state->base.crtc)); + if (err) + return err; + } + + return check_inherited_tunnel_state(state, intel_dp, old_conn_state); +} + +/** + * intel_dp_tunnel_atomic_compute_stream_bw - Compute the BW required by a DP tunnel stream + * @state: Atomic state + * @intel_dp: DP object + * @connector: connector using @intel_dp + * @crtc_state: state of CRTC of the given DP tunnel stream + * + * Compute the required BW of CRTC (aka DP tunnel stream), storing this BW to + * the DP tunnel state containing the stream in @state. Before re-calculating a + * BW requirement in the crtc_state state the old BW requirement computed by this + * function must be cleared by calling intel_dp_tunnel_atomic_clear_stream_bw(). + * + * Returns 0 in case of success, a negative error code otherwise. + */ +int intel_dp_tunnel_atomic_compute_stream_bw(struct intel_atomic_state *state, + struct intel_dp *intel_dp, + const struct intel_connector *connector, + struct intel_crtc_state *crtc_state) +{ + struct drm_i915_private *i915 = to_i915(state->base.dev); + struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); + int required_rate = intel_dp_config_required_rate(crtc_state); + int ret; + + if (!intel_dp_tunnel_bw_alloc_is_enabled(intel_dp)) + return 0; + + drm_dbg_kms(&i915->drm, + "[DPTUN %s][CONNECTOR:%d:%s][ENCODER:%d:%s][CRTC:%d:%s] Stream %d required BW %d Mb/s\n", + drm_dp_tunnel_name(intel_dp->tunnel), + connector->base.base.id, connector->base.name, + encoder->base.base.id, encoder->base.name, + crtc->base.base.id, crtc->base.name, + crtc->pipe, + kbytes_to_mbits(required_rate)); + + ret = drm_dp_tunnel_atomic_set_stream_bw(&state->base, intel_dp->tunnel, + crtc->pipe, required_rate); + if (ret < 0) + return ret; + + drm_dp_tunnel_ref_get(intel_dp->tunnel, + &crtc_state->dp_tunnel_ref); + + return 0; +} + +/** + * intel_dp_tunnel_atomic_clear_stream_bw - Clear any DP tunnel stream BW requirement + * @state: Atomic state + * @crtc_state: state of CRTC of the given DP tunnel stream + * + * Clear any DP tunnel stream BW requirement set by + * intel_dp_tunnel_atomic_compute_stream_bw(). + */ +void intel_dp_tunnel_atomic_clear_stream_bw(struct intel_atomic_state *state, + struct intel_crtc_state *crtc_state) +{ + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); + + if (!crtc_state->dp_tunnel_ref.tunnel) + return; + + drm_dp_tunnel_atomic_set_stream_bw(&state->base, + crtc_state->dp_tunnel_ref.tunnel, + crtc->pipe, 0); + drm_dp_tunnel_ref_put(&crtc_state->dp_tunnel_ref); +} + +/** + * intel_dp_tunnel_atomic_check_link - Check the DP tunnel atomic state + * @state: intel atomic state + * @limits: link BW limits + * + * Check the link configuration for all DP tunnels in @state. 
If the
+ * configuration is invalid, @limits will be updated if possible to
+ * reduce the total BW, after which the configuration for all CRTCs in
+ * @state must be recomputed with the updated @limits.
+ *
+ * Returns:
+ *   - 0 if the configuration is valid
+ *   - %-EAGAIN, if the configuration is invalid and @limits got updated
+ *     with fallback values with which the configuration of all CRTCs in
+ *     @state must be recomputed
+ *   - Other negative error, if the configuration is invalid without a
+ *     fallback possibility, or the check failed for another reason
+ */
+int intel_dp_tunnel_atomic_check_link(struct intel_atomic_state *state,
+				      struct intel_link_bw_limits *limits)
+{
+	u32 failed_stream_mask;
+	int err;
+
+	err = drm_dp_tunnel_atomic_check_stream_bws(&state->base,
+						    &failed_stream_mask);
+	if (err != -ENOSPC)
+		return err;
+
+	err = intel_link_bw_reduce_bpp(state, limits,
+				       failed_stream_mask, "DP tunnel link BW");
+
+	return err ? : -EAGAIN;
+}
+
+static void atomic_decrease_bw(struct intel_atomic_state *state)
+{
+	struct intel_crtc *crtc;
+	const struct intel_crtc_state *old_crtc_state;
+	const struct intel_crtc_state *new_crtc_state;
+	int i;
+
+	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
+		const struct drm_dp_tunnel_state *new_tunnel_state;
+		struct drm_dp_tunnel *tunnel;
+		int old_bw;
+		int new_bw;
+
+		if (!intel_crtc_needs_modeset(new_crtc_state))
+			continue;
+
+		tunnel = get_inherited_tunnel(state, crtc);
+		if (!tunnel)
+			tunnel = old_crtc_state->dp_tunnel_ref.tunnel;
+
+		if (!tunnel)
+			continue;
+
+		old_bw = drm_dp_tunnel_get_allocated_bw(tunnel);
+
+		new_tunnel_state = drm_dp_tunnel_atomic_get_new_state(&state->base, tunnel);
+		new_bw = drm_dp_tunnel_atomic_get_required_bw(new_tunnel_state);
+
+		if (new_bw >= old_bw)
+			continue;
+
+		drm_dp_tunnel_alloc_bw(tunnel, new_bw);
+	}
+}
+
+static void queue_retry_work(struct intel_atomic_state *state,
+			     struct drm_dp_tunnel *tunnel,
+			     const struct intel_crtc_state *crtc_state)
+{
+	struct drm_i915_private *i915 = to_i915(state->base.dev);
+	struct intel_encoder *encoder;
+
+	encoder = intel_get_crtc_new_encoder(state, crtc_state);
+
+	if (!intel_digital_port_connected(encoder))
+		return;
+
+	drm_dbg_kms(&i915->drm,
+		    "[DPTUN %s][ENCODER:%d:%s] BW allocation failed on a connected sink\n",
+		    drm_dp_tunnel_name(tunnel),
+		    encoder->base.base.id,
+		    encoder->base.name);
+
+	intel_dp_queue_modeset_retry_for_link(state, encoder, crtc_state);
+}
+
+static void atomic_increase_bw(struct intel_atomic_state *state)
+{
+	struct intel_crtc *crtc;
+	const struct intel_crtc_state *crtc_state;
+	int i;
+
+	for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
+		struct drm_dp_tunnel_state *tunnel_state;
+		struct drm_dp_tunnel *tunnel = crtc_state->dp_tunnel_ref.tunnel;
+		int bw;
+
+		if (!intel_crtc_needs_modeset(crtc_state))
+			continue;
+
+		if (!tunnel)
+			continue;
+
+		tunnel_state = drm_dp_tunnel_atomic_get_new_state(&state->base, tunnel);
+
+		bw = drm_dp_tunnel_atomic_get_required_bw(tunnel_state);
+
+		if (drm_dp_tunnel_alloc_bw(tunnel, bw) != 0)
+			queue_retry_work(state, tunnel, crtc_state);
+	}
+}
+
+/**
+ * intel_dp_tunnel_atomic_alloc_bw - Allocate the BW for all modeset tunnels
+ * @state: Atomic state
+ *
+ * Allocate the required BW for all tunnels in @state.
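+ *
+ * BW is first reduced for tunnels that need less than their current
+ * allocation and only then increased for the rest, so that BW freed up
+ * within a tunnel group can be reused by the tunnels requiring more.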
+ */ +void intel_dp_tunnel_atomic_alloc_bw(struct intel_atomic_state *state) +{ + atomic_decrease_bw(state); + atomic_increase_bw(state); +} + +/** + * intel_dp_tunnel_mgr_init - Initialize the DP tunnel manager + * @i915: i915 device object + * + * Initialize the DP tunnel manager. The tunnel manager will support the + * detection/management of DP tunnels on all DP connectors, so the function + * must be called after all these connectors have been registered already. + * + * Return 0 in case of success, a negative error code otherwise. + */ +int intel_dp_tunnel_mgr_init(struct drm_i915_private *i915) +{ + struct drm_dp_tunnel_mgr *tunnel_mgr; + struct drm_connector_list_iter connector_list_iter; + struct intel_connector *connector; + int dp_connectors = 0; + + drm_connector_list_iter_begin(&i915->drm, &connector_list_iter); + for_each_intel_connector_iter(connector, &connector_list_iter) { + if (connector->base.connector_type != DRM_MODE_CONNECTOR_DisplayPort) + continue; + + dp_connectors++; + } + drm_connector_list_iter_end(&connector_list_iter); + + tunnel_mgr = drm_dp_tunnel_mgr_create(&i915->drm, dp_connectors); + if (IS_ERR(tunnel_mgr)) + return PTR_ERR(tunnel_mgr); + + i915->display.dp_tunnel_mgr = tunnel_mgr; + + return 0; +} + +/** + * intel_dp_tunnel_mgr_cleanup - Clean up the DP tunnel manager state + * @i915: i915 device object + * + * Clean up the DP tunnel manager state. + */ +void intel_dp_tunnel_mgr_cleanup(struct drm_i915_private *i915) +{ + drm_dp_tunnel_mgr_destroy(i915->display.dp_tunnel_mgr); + i915->display.dp_tunnel_mgr = NULL; +} diff --git a/drivers/gpu/drm/i915/display/intel_dp_tunnel.h b/drivers/gpu/drm/i915/display/intel_dp_tunnel.h new file mode 100644 index 0000000000000..08b2cba84af2b --- /dev/null +++ b/drivers/gpu/drm/i915/display/intel_dp_tunnel.h @@ -0,0 +1,133 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2023 Intel Corporation + */ + +#ifndef __INTEL_DP_TUNNEL_H__ +#define __INTEL_DP_TUNNEL_H__ + +#include +#include + +struct drm_i915_private; +struct drm_connector_state; +struct drm_modeset_acquire_ctx; + +struct intel_atomic_state; +struct intel_connector; +struct intel_crtc; +struct intel_crtc_state; +struct intel_dp; +struct intel_encoder; +struct intel_link_bw_limits; + +#if defined(CONFIG_DRM_I915_DP_TUNNEL) && defined(I915) + +int intel_dp_tunnel_detect(struct intel_dp *intel_dp, struct drm_modeset_acquire_ctx *ctx); +void intel_dp_tunnel_disconnect(struct intel_dp *intel_dp); +void intel_dp_tunnel_destroy(struct intel_dp *intel_dp); +void intel_dp_tunnel_resume(struct intel_dp *intel_dp, + const struct intel_crtc_state *crtc_state, + bool dpcd_updated); +void intel_dp_tunnel_suspend(struct intel_dp *intel_dp); + +bool intel_dp_tunnel_bw_alloc_is_enabled(struct intel_dp *intel_dp); + +void +intel_dp_tunnel_atomic_cleanup_inherited_state(struct intel_atomic_state *state); + +int intel_dp_tunnel_atomic_compute_stream_bw(struct intel_atomic_state *state, + struct intel_dp *intel_dp, + const struct intel_connector *connector, + struct intel_crtc_state *crtc_state); +void intel_dp_tunnel_atomic_clear_stream_bw(struct intel_atomic_state *state, + struct intel_crtc_state *crtc_state); + +int intel_dp_tunnel_atomic_add_state_for_crtc(struct intel_atomic_state *state, + struct intel_crtc *crtc); +int intel_dp_tunnel_atomic_check_link(struct intel_atomic_state *state, + struct intel_link_bw_limits *limits); +int intel_dp_tunnel_atomic_check_state(struct intel_atomic_state *state, + struct intel_dp *intel_dp, + struct intel_connector 
*connector);
+
+void intel_dp_tunnel_atomic_alloc_bw(struct intel_atomic_state *state);
+
+int intel_dp_tunnel_mgr_init(struct drm_i915_private *i915);
+void intel_dp_tunnel_mgr_cleanup(struct drm_i915_private *i915);
+
+#else
+
+static inline int
+intel_dp_tunnel_detect(struct intel_dp *intel_dp, struct drm_modeset_acquire_ctx *ctx)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline void intel_dp_tunnel_disconnect(struct intel_dp *intel_dp) {}
+static inline void intel_dp_tunnel_destroy(struct intel_dp *intel_dp) {}
+static inline void intel_dp_tunnel_resume(struct intel_dp *intel_dp,
+					  const struct intel_crtc_state *crtc_state,
+					  bool dpcd_updated) {}
+static inline void intel_dp_tunnel_suspend(struct intel_dp *intel_dp) {}
+
+static inline bool intel_dp_tunnel_bw_alloc_is_enabled(struct intel_dp *intel_dp)
+{
+	return false;
+}
+
+static inline void
+intel_dp_tunnel_atomic_cleanup_inherited_state(struct intel_atomic_state *state) {}
+
+static inline int
+intel_dp_tunnel_atomic_compute_stream_bw(struct intel_atomic_state *state,
+					 struct intel_dp *intel_dp,
+					 const struct intel_connector *connector,
+					 struct intel_crtc_state *crtc_state)
+{
+	return 0;
+}
+
+static inline void
+intel_dp_tunnel_atomic_clear_stream_bw(struct intel_atomic_state *state,
+				       struct intel_crtc_state *crtc_state) {}
+
+static inline int
+intel_dp_tunnel_atomic_add_state_for_crtc(struct intel_atomic_state *state,
+					  struct intel_crtc *crtc)
+{
+	return 0;
+}
+
+static inline int
+intel_dp_tunnel_atomic_check_link(struct intel_atomic_state *state,
+				  struct intel_link_bw_limits *limits)
+{
+	return 0;
+}
+
+static inline int
+intel_dp_tunnel_atomic_check_state(struct intel_atomic_state *state,
+				   struct intel_dp *intel_dp,
+				   struct intel_connector *connector)
+{
+	return 0;
+}
+
+static inline void
+intel_dp_tunnel_atomic_alloc_bw(struct intel_atomic_state *state) {}
+
+static inline int
+intel_dp_tunnel_mgr_init(struct drm_i915_private *i915)
+{
+	return 0;
+}
+
+static inline void intel_dp_tunnel_mgr_cleanup(struct drm_i915_private *i915) {}
+
+#endif /* CONFIG_DRM_I915_DP_TUNNEL */
+
+#endif /* __INTEL_DP_TUNNEL_H__ */
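For reference, the call flow the series establishes can be condensed into
the following driver-side sketch (illustrative only; error handling,
locking and MST specifics are omitted, and the surrounding variables
(i915, intel_dp, ctx, state, connector, crtc_state, limits) are assumed
to exist at the respective call sites):

	/* Driver init: create the tunnel manager (one group per DP connector). */
	err = intel_dp_tunnel_mgr_init(i915);

	/* Connector detect path: detect a tunnel and enable BW allocation mode. */
	ret = intel_dp_tunnel_detect(intel_dp, ctx);
	if (ret == 1) {
		/* Tunnel BW changed: send a hotplug event to userspace. */
	}

	/* Atomic check: account each stream's (CRTC's) BW requirement ... */
	err = intel_dp_tunnel_atomic_compute_stream_bw(state, intel_dp,
						       connector, crtc_state);

	/* ... and check it against the per-tunnel and per-group limits. */
	err = intel_dp_tunnel_atomic_check_link(state, &limits);
	if (err == -EAGAIN) {
		/* @limits got reduced: recompute all CRTC states. */
	}

	/* Commit: adjust the allocations to the new configuration. */
	intel_dp_tunnel_atomic_alloc_bw(state);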