From patchwork Tue May 14 05:03:55 2019
X-Patchwork-Submitter: Sowjanya Komatineni
X-Patchwork-Id: 10942021
From: Sowjanya Komatineni
Subject: [PATCH V5 4/4] spi: tegra114: add support for TX and RX trimmers
Date: Mon, 13 May 2019 22:03:55 -0700
Message-ID: <1557810235-16401-5-git-send-email-skomatineni@nvidia.com>
In-Reply-To: <1557810235-16401-1-git-send-email-skomatineni@nvidia.com>
References: <1557810235-16401-1-git-send-email-skomatineni@nvidia.com>
X-Mailing-List: linux-spi@vger.kernel.org

The Tegra SPI master controller has programmable trimmers to adjust the
data with respect to the clock. These trimmers are programmed through the
TX_CLK_TAP_DELAY and RX_CLK_TAP_DELAY fields of the COMMAND2 register.
The TX trimmer adjusts the outgoing data with respect to the outgoing
clock, and the RX trimmer adjusts the loopback clock with respect to the
incoming data from the slave device. The optimal trimmer values depend on
the trace lengths of the platform design to each slave on the SPI bus and
are determined from platform validation across PVT (process, voltage and
temperature).

This patch adds support for configuring the TX and RX clock delay
trimmers through device tree properties.

Signed-off-by: Sowjanya Komatineni
---
An illustrative device tree fragment (not part of this patch) is appended
after the diff below.

 drivers/spi/spi-tegra114.c | 67 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 65 insertions(+), 2 deletions(-)

diff --git a/drivers/spi/spi-tegra114.c b/drivers/spi/spi-tegra114.c
index e59ff7c1cee6..253a7f182fc9 100644
--- a/drivers/spi/spi-tegra114.c
+++ b/drivers/spi/spi-tegra114.c
@@ -169,6 +169,11 @@ struct tegra_spi_soc_data {
 	bool has_intr_mask_reg;
 };
 
+struct tegra_spi_client_data {
+	int tx_clk_tap_delay;
+	int rx_clk_tap_delay;
+};
+
 struct tegra_spi_data {
 	struct device				*dev;
 	struct spi_master			*master;
@@ -208,8 +213,10 @@ struct tegra_spi_data {
 	u32					command1_reg;
 	u32					dma_control_reg;
 	u32					def_command1_reg;
+	u32					def_command2_reg;
 	u32					spi_cs_timing1;
 	u32					spi_cs_timing2;
+	u8					last_used_cs;
 
 	struct completion			xfer_completion;
 	struct spi_transfer			*curr_xfer;
@@ -770,10 +777,12 @@ static u32 tegra_spi_setup_transfer_one(struct spi_device *spi,
 					bool is_single_xfer)
 {
 	struct tegra_spi_data *tspi = spi_master_get_devdata(spi->master);
+	struct tegra_spi_client_data *cdata = spi->controller_data;
 	u32 speed = t->speed_hz;
 	u8 bits_per_word = t->bits_per_word;
-	u32 command1;
+	u32 command1, command2;
 	int req_mode;
+	u32 tx_tap = 0, rx_tap = 0;
 
 	if (speed != tspi->cur_speed) {
 		clk_set_rate(tspi->clk, speed);
@@ -836,7 +845,18 @@ static u32 tegra_spi_setup_transfer_one(struct spi_device *spi,
 			command1 &= ~SPI_CS_SW_VAL;
 		}
 
-		tegra_spi_writel(tspi, 0, SPI_COMMAND2);
+		if (tspi->last_used_cs != spi->chip_select) {
+			if (cdata && cdata->tx_clk_tap_delay)
+				tx_tap = cdata->tx_clk_tap_delay;
+			if (cdata && cdata->rx_clk_tap_delay)
+				rx_tap = cdata->rx_clk_tap_delay;
+			command2 = SPI_TX_TAP_DELAY(tx_tap) |
+				   SPI_RX_TAP_DELAY(rx_tap);
+			if (command2 != tspi->def_command2_reg)
+				tegra_spi_writel(tspi, command2, SPI_COMMAND2);
+			tspi->last_used_cs = spi->chip_select;
+		}
+
 	} else {
 		command1 = tspi->command1_reg;
 		command1 &= ~SPI_BIT_LENGTH(~0);
@@ -892,9 +912,42 @@ static int tegra_spi_start_transfer_one(struct spi_device *spi,
 	return ret;
 }
 
+static struct tegra_spi_client_data
+	*tegra_spi_parse_cdata_dt(struct spi_device *spi)
+{
+	struct tegra_spi_client_data *cdata;
+	struct device_node *slave_np;
+
+	slave_np = spi->dev.of_node;
+	if (!slave_np) {
+		dev_dbg(&spi->dev, "device node not found\n");
+		return NULL;
+	}
+
+	cdata = kzalloc(sizeof(*cdata), GFP_KERNEL);
+	if (!cdata)
+		return NULL;
+
+	of_property_read_u32(slave_np, "nvidia,tx-clk-tap-delay",
+			     &cdata->tx_clk_tap_delay);
+	of_property_read_u32(slave_np, "nvidia,rx-clk-tap-delay",
+			     &cdata->rx_clk_tap_delay);
+	return cdata;
+}
+
+static void tegra_spi_cleanup(struct spi_device *spi)
+{
+	struct tegra_spi_client_data *cdata = spi->controller_data;
+
+	spi->controller_data = NULL;
+	if (spi->dev.of_node)
+		kfree(cdata);
+}
+
 static int tegra_spi_setup(struct spi_device *spi)
 {
 	struct tegra_spi_data *tspi = spi_master_get_devdata(spi->master);
+	struct tegra_spi_client_data *cdata = spi->controller_data;
 	u32 val;
 	unsigned long flags;
 	int ret;
@@ -905,6 +958,11 @@ static int tegra_spi_setup(struct spi_device *spi)
 		spi->mode & SPI_CPHA ? "" : "~",
 		spi->max_speed_hz);
 
+	if (!cdata) {
+		cdata = tegra_spi_parse_cdata_dt(spi);
+		spi->controller_data = cdata;
+	}
+
 	ret = pm_runtime_get_sync(tspi->dev);
 	if (ret < 0) {
 		dev_err(tspi->dev, "pm runtime failed, e = %d\n", ret);
@@ -1034,6 +1092,7 @@ static int tegra_spi_transfer_one_message(struct spi_master *master,
 			reset_control_assert(tspi->rst);
 			udelay(2);
 			reset_control_deassert(tspi->rst);
+			tspi->last_used_cs = master->num_chipselect + 1;
 			goto complete_xfer;
 		}
 
@@ -1351,6 +1410,8 @@ static int tegra_spi_probe(struct platform_device *pdev)
 	tegra_spi_writel(tspi, tspi->def_command1_reg, SPI_COMMAND1);
 	tspi->spi_cs_timing1 = tegra_spi_readl(tspi, SPI_CS_TIMING1);
 	tspi->spi_cs_timing2 = tegra_spi_readl(tspi, SPI_CS_TIMING2);
+	tspi->def_command2_reg = tegra_spi_readl(tspi, SPI_COMMAND2);
+	tspi->last_used_cs = master->num_chipselect + 1;
 	pm_runtime_put(&pdev->dev);
 	ret = request_threaded_irq(tspi->irq, tegra_spi_isr,
 				   tegra_spi_isr_thread, IRQF_ONESHOT,
@@ -1423,6 +1484,8 @@ static int tegra_spi_resume(struct device *dev)
 		return ret;
 	}
 	tegra_spi_writel(tspi, tspi->command1_reg, SPI_COMMAND1);
+	tegra_spi_writel(tspi, tspi->def_command2_reg, SPI_COMMAND2);
+	tspi->last_used_cs = master->num_chipselect + 1;
 	pm_runtime_put(dev);
 
 	return spi_master_resume(master);
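
Illustrative usage (not part of the patch): the trimmer values are
per-slave, so the properties read by this patch live in the slave node of
the SPI controller. The fragment below is only a sketch; the unit address,
node name, compatible string and tap-delay values are made-up placeholders,
while the two nvidia,*-clk-tap-delay property names match what
tegra_spi_parse_cdata_dt() reads.

	spi@7000d400 {
		status = "okay";

		slave@0 {
			/* placeholder compatible and values, for illustration only */
			compatible = "vendor,device";
			reg = <0>;
			spi-max-frequency = <25000000>;
			/* per-slave TX/RX trimmer settings from platform validation */
			nvidia,tx-clk-tap-delay = <0x10>;
			nvidia,rx-clk-tap-delay = <0x10>;
		};
	};

With such a node, tegra_spi_setup() allocates the client data via
tegra_spi_parse_cdata_dt(), and tegra_spi_setup_transfer_one() reprograms
COMMAND2 whenever the active chip select changes. Slaves that omit the
properties fall back to tap delays of 0.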