From patchwork Fri Mar 7 04:34:49 2025
X-Patchwork-Submitter: Dmitry Baryshkov
X-Patchwork-Id: 14005930
From: Dmitry Baryshkov
Date: Fri, 07 Mar 2025 06:34:49 +0200
Subject: [PATCH RFC v3 7/7] drm/display: dp-tunnel: use new DPCD access helpers
Message-Id: <20250307-drm-rework-dpcd-access-v3-7-9044a3a868ee@linaro.org>
References: <20250307-drm-rework-dpcd-access-v3-0-9044a3a868ee@linaro.org>
In-Reply-To: <20250307-drm-rework-dpcd-access-v3-0-9044a3a868ee@linaro.org>
To: Lyude Paul, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
    David Airlie, Simona Vetter, Rob Clark, Abhinav Kumar, Sean Paul,
    Marijn Suijten, Jani Nikula, Alex Deucher, Christian König,
    Andrzej Hajda, Neil Armstrong, Robert Foss, Laurent Pinchart,
    Jonas Karlman, Jernej Skrabec, Xinliang Liu, Tian Tao, Xinwei Kong,
    Sumit Semwal, Yongqin Liu, John Stultz
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
    intel-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org,
    amd-gfx@lists.freedesktop.org, Jani Nikula
Switch drm_dp_tunnel.c to use the new set of DPCD read / write helpers.

Reviewed-by: Lyude Paul
Acked-by: Jani Nikula
Signed-off-by: Dmitry Baryshkov
---
 drivers/gpu/drm/display/drm_dp_tunnel.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/display/drm_dp_tunnel.c b/drivers/gpu/drm/display/drm_dp_tunnel.c
index 90fe07a89260e21e78f2db7f57a90602be921a11..076edf1610480275c62395334ab0536befa42f15 100644
--- a/drivers/gpu/drm/display/drm_dp_tunnel.c
+++ b/drivers/gpu/drm/display/drm_dp_tunnel.c
@@ -222,7 +222,7 @@ static int read_tunnel_regs(struct drm_dp_aux *aux, struct drm_dp_tunnel_regs *r
 	while ((len = next_reg_area(&offset))) {
 		int address = DP_TUNNELING_BASE + offset;
 
-		if (drm_dp_dpcd_read(aux, address, tunnel_reg_ptr(regs, address), len) < 0)
+		if (drm_dp_dpcd_read_data(aux, address, tunnel_reg_ptr(regs, address), len) < 0)
 			return -EIO;
 
 		offset += len;
@@ -913,7 +913,7 @@ static int set_bw_alloc_mode(struct drm_dp_tunnel *tunnel, bool enable)
 	u8 mask = DP_DISPLAY_DRIVER_BW_ALLOCATION_MODE_ENABLE | DP_UNMASK_BW_ALLOCATION_IRQ;
 	u8 val;
 
-	if (drm_dp_dpcd_readb(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, &val) < 0)
+	if (drm_dp_dpcd_read_byte(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, &val) < 0)
 		goto out_err;
 
 	if (enable)
@@ -921,7 +921,7 @@ static int set_bw_alloc_mode(struct drm_dp_tunnel *tunnel, bool enable)
 	else
 		val &= ~mask;
 
-	if (drm_dp_dpcd_writeb(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, val) < 0)
+	if (drm_dp_dpcd_write_byte(tunnel->aux, DP_DPTX_BW_ALLOCATION_MODE_CONTROL, val) < 0)
 		goto out_err;
 
 	tunnel->bw_alloc_enabled = enable;
@@ -1039,7 +1039,7 @@ static int clear_bw_req_state(struct drm_dp_aux *aux)
 {
 	u8 bw_req_mask = DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED;
 
-	if (drm_dp_dpcd_writeb(aux, DP_TUNNELING_STATUS, bw_req_mask) < 0)
+	if (drm_dp_dpcd_write_byte(aux, DP_TUNNELING_STATUS, bw_req_mask) < 0)
 		return -EIO;
 
 	return 0;
@@ -1052,7 +1052,7 @@ static int bw_req_complete(struct drm_dp_aux *aux, bool *status_changed)
 	u8 val;
 	int err;
 
-	if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
+	if (drm_dp_dpcd_read_byte(aux, DP_TUNNELING_STATUS, &val) < 0)
 		return -EIO;
 
 	*status_changed = val & status_change_mask;
@@ -1095,7 +1095,7 @@ static int allocate_tunnel_bw(struct drm_dp_tunnel *tunnel, int bw)
 	if (err)
 		goto out;
 
-	if (drm_dp_dpcd_writeb(tunnel->aux, DP_REQUEST_BW, request_bw) < 0) {
+	if (drm_dp_dpcd_write_byte(tunnel->aux, DP_REQUEST_BW, request_bw) < 0) {
 		err = -EIO;
 		goto out;
 	}
@@ -1196,13 +1196,13 @@ static int check_and_clear_status_change(struct drm_dp_tunnel *tunnel)
 	u8 mask = DP_BW_ALLOCATION_CAPABILITY_CHANGED | DP_ESTIMATED_BW_CHANGED;
 	u8 val;
 
-	if (drm_dp_dpcd_readb(tunnel->aux, DP_TUNNELING_STATUS, &val) < 0)
+	if (drm_dp_dpcd_read_byte(tunnel->aux, DP_TUNNELING_STATUS, &val) < 0)
 		goto out_err;
 
 	val &= mask;
 
 	if (val) {
-		if (drm_dp_dpcd_writeb(tunnel->aux, DP_TUNNELING_STATUS, val) < 0)
+		if (drm_dp_dpcd_write_byte(tunnel->aux, DP_TUNNELING_STATUS, val) < 0)
 			goto out_err;
 
 		return 1;
@@ -1215,7 +1215,7 @@ static int check_and_clear_status_change(struct drm_dp_tunnel *tunnel)
 	 * Check for estimated BW changes explicitly to account for lost
 	 * BW change notifications.
 	 */
-	if (drm_dp_dpcd_readb(tunnel->aux, DP_ESTIMATED_BW, &val) < 0)
+	if (drm_dp_dpcd_read_byte(tunnel->aux, DP_ESTIMATED_BW, &val) < 0)
 		goto out_err;
 
 	if (val * tunnel->bw_granularity != tunnel->estimated_bw)
@@ -1300,7 +1300,7 @@ int drm_dp_tunnel_handle_irq(struct drm_dp_tunnel_mgr *mgr, struct drm_dp_aux *a
 {
 	u8 val;
 
-	if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, &val) < 0)
+	if (drm_dp_dpcd_read_byte(aux, DP_TUNNELING_STATUS, &val) < 0)
 		return -EIO;
 
 	if (val & (DP_BW_REQUEST_SUCCEEDED | DP_BW_REQUEST_FAILED))
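
For readers not following the rest of the series, the sketch below shows the conversion pattern used throughout this patch. It is illustrative only and not part of the change: example_read_status() is a hypothetical caller, and the assumption that the new drm_dp_dpcd_read_byte() / drm_dp_dpcd_write_byte() wrappers return 0 on success and a negative error code on failure (while the legacy readb/writeb variants return the number of bytes transferred) is my reading of the series, not something spelled out in this patch.

/*
 * Illustrative sketch, not part of the patch. example_read_status() is a
 * hypothetical caller; it assumes the new helper returns 0 on success or a
 * negative error code, whereas the legacy drm_dp_dpcd_readb() returns the
 * number of bytes read (1) on success.
 */
#include <drm/display/drm_dp.h>
#include <drm/display/drm_dp_helper.h>

static int example_read_status(struct drm_dp_aux *aux, u8 *status)
{
	int err;

	/* Old style: success is indicated by a positive byte count. */
	if (drm_dp_dpcd_readb(aux, DP_TUNNELING_STATUS, status) < 0)
		return -EIO;

	/* New style: 0 or a negative errno, usable directly as a return value. */
	err = drm_dp_dpcd_read_byte(aux, DP_TUNNELING_STATUS, status);
	if (err < 0)
		return err;

	return 0;
}

Because every call site in drm_dp_tunnel.c already checks for a negative return value, the diff above only swaps the helper names; the surrounding error handling is unchanged.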