From patchwork Mon Apr 18 14:51:45 2016
X-Patchwork-Submitter: Thierry Reding
X-Patchwork-Id: 8872761
X-Patchwork-Delegate: bhelgaas@google.com
From: Thierry Reding
To: Bjorn Helgaas
Cc: Stephen Warren, Alexandre Courbot, linux-tegra@vger.kernel.org,
	linux-pci@vger.kernel.org, devicetree@vger.kernel.org
Subject: [PATCH v5 2/2] PCI: tegra: Support per-lane PHYs
Date: Mon, 18 Apr 2016 16:51:45 +0200
Message-Id: <1460991105-22861-2-git-send-email-thierry.reding@gmail.com>
X-Mailer: git-send-email 2.8.0
In-Reply-To: <1460991105-22861-1-git-send-email-thierry.reding@gmail.com>
References: <1460991105-22861-1-git-send-email-thierry.reding@gmail.com>

From: Thierry Reding

The current XUSB pad controller bindings are insufficient to describe
PHY devices attached to USB controllers. New bindings have been created
to overcome these restrictions. As a side effect, each root port is now
assigned a set of PHY devices, one for each lane associated with the
root port. This has the benefit of allowing fine-grained power
management control for each lane.

Signed-off-by: Thierry Reding
Acked-by: Bjorn Helgaas
---
Changes in v5:
- fix power off sequence

Changes in v4:
- propagate failure from PHY power on
- refactor PHY power off sequence

Changes in v3:
- cache result of check for new PHY bindings usage (Stephen Warren)

Changes in v2:
- rework commit message to more accurately describe this change

 drivers/pci/host/pci-tegra.c | 243 ++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 226 insertions(+), 17 deletions(-)
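As a rough illustration of what the code below expects from the device
tree (a sketch, not an excerpt from the binding patch in this series):
the legacy path looks up a single PHY named "pcie" on the controller
node, while the new per-lane path looks up "pcie-0", "pcie-1", ... in
each root port node, one entry per lane. The node names and the
&pcie_phy/&pcie_lane* labels are placeholders for illustration only:

	/* legacy binding: one "pcie" PHY on the controller node */
	pcie-controller {
		phys = <&pcie_phy>;		/* placeholder phandle */
		phy-names = "pcie";
	};

	/* new binding: one PHY per lane in each root port node */
	pcie-controller {
		pci@1,0 {
			nvidia,num-lanes = <2>;
			/* placeholder phandles, one per lane */
			phys = <&pcie_lane0>, <&pcie_lane1>;
			phy-names = "pcie-0", "pcie-1";
		};
	};

The driver picks between the two by checking the controller node for a
"phys" property (see tegra_pcie_phys_get()) and builds the per-lane
consumer names with kasprintf("%s-%u", "pcie", i).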
diff --git a/drivers/pci/host/pci-tegra.c b/drivers/pci/host/pci-tegra.c
index 68d1f41b3cbf..12d581130645 100644
--- a/drivers/pci/host/pci-tegra.c
+++ b/drivers/pci/host/pci-tegra.c
@@ -295,6 +295,7 @@ struct tegra_pcie {
 	struct reset_control *afi_rst;
 	struct reset_control *pcie_xrst;
 
+	bool legacy_phy;
 	struct phy *phy;
 
 	struct tegra_msi msi;
@@ -311,11 +312,14 @@ struct tegra_pcie {
 
 struct tegra_pcie_port {
 	struct tegra_pcie *pcie;
+	struct device_node *np;
 	struct list_head list;
 	struct resource regs;
 	void __iomem *base;
 	unsigned int index;
 	unsigned int lanes;
+
+	struct phy **phys;
 };
 
 struct tegra_pcie_bus {
@@ -860,6 +864,128 @@ static int tegra_pcie_phy_enable(struct tegra_pcie *pcie)
 	return 0;
 }
 
+static int tegra_pcie_phy_disable(struct tegra_pcie *pcie)
+{
+	const struct tegra_pcie_soc_data *soc = pcie->soc_data;
+	u32 value;
+
+	/* disable TX/RX data */
+	value = pads_readl(pcie, PADS_CTL);
+	value &= ~(PADS_CTL_TX_DATA_EN_1L | PADS_CTL_RX_DATA_EN_1L);
+	pads_writel(pcie, value, PADS_CTL);
+
+	/* override IDDQ */
+	value = pads_readl(pcie, PADS_CTL);
+	value |= PADS_CTL_IDDQ_1L;
+	pads_writel(pcie, value, PADS_CTL);
+
+	/* reset PLL */
+	value = pads_readl(pcie, soc->pads_pll_ctl);
+	value &= ~PADS_PLL_CTL_RST_B4SM;
+	pads_writel(pcie, value, soc->pads_pll_ctl);
+
+	usleep_range(20, 100);
+
+	return 0;
+}
+
+static int tegra_pcie_port_phy_power_on(struct tegra_pcie_port *port)
+{
+	struct device *dev = port->pcie->dev;
+	unsigned int i;
+	int err;
+
+	for (i = 0; i < port->lanes; i++) {
+		err = phy_power_on(port->phys[i]);
+		if (err < 0) {
+			dev_err(dev, "failed to power on PHY#%u: %d\n", i,
+				err);
+			return err;
+		}
+	}
+
+	return 0;
+}
+
+static int tegra_pcie_port_phy_power_off(struct tegra_pcie_port *port)
+{
+	struct device *dev = port->pcie->dev;
+	unsigned int i;
+	int err;
+
+	for (i = 0; i < port->lanes; i++) {
+		err = phy_power_off(port->phys[i]);
+		if (err < 0) {
+			dev_err(dev, "failed to power off PHY#%u: %d\n", i,
+				err);
+			return err;
+		}
+	}
+
+	return 0;
+}
+
+static int tegra_pcie_phy_power_on(struct tegra_pcie *pcie)
+{
+	struct tegra_pcie_port *port;
+	int err;
+
+	if (pcie->legacy_phy) {
+		if (pcie->phy)
+			err = phy_power_on(pcie->phy);
+		else
+			err = tegra_pcie_phy_enable(pcie);
+
+		if (err < 0)
+			dev_err(pcie->dev, "failed to power on PHY: %d\n", err);
+
+		return err;
+	}
+
+	list_for_each_entry(port, &pcie->ports, list) {
+		err = tegra_pcie_port_phy_power_on(port);
+		if (err < 0) {
+			dev_err(pcie->dev,
+				"failed to power on PCIe port %u PHY: %d\n",
+				port->index, err);
+			return err;
+		}
+	}
+
+	return 0;
+}
+
+static int tegra_pcie_phy_power_off(struct tegra_pcie *pcie)
+{
+	struct tegra_pcie_port *port;
+	int err;
+
+	if (pcie->legacy_phy) {
+		if (pcie->phy)
+			err = phy_power_off(pcie->phy);
+		else
+			err = tegra_pcie_phy_disable(pcie);
+
+		if (err < 0)
+			dev_err(pcie->dev, "failed to power off PHY: %d\n",
+				err);
+
+		return err;
+	}
+
+	list_for_each_entry(port, &pcie->ports, list) {
+		err = tegra_pcie_port_phy_power_off(port);
+		if (err < 0) {
+			dev_err(pcie->dev,
+				"failed to power off PCIe port %u PHY: %d\n",
+				port->index, err);
+			return err;
+		}
+	}
+
+	return 0;
+}
+
 static int tegra_pcie_enable_controller(struct tegra_pcie *pcie)
 {
 	const struct tegra_pcie_soc_data *soc = pcie->soc_data;
@@ -899,13 +1025,9 @@ static int tegra_pcie_enable_controller(struct tegra_pcie *pcie)
 		afi_writel(pcie, value, AFI_FUSE);
 	}
 
-	if (!pcie->phy)
-		err = tegra_pcie_phy_enable(pcie);
-	else
-		err = phy_power_on(pcie->phy);
-
+	err = tegra_pcie_phy_power_on(pcie);
 	if (err < 0) {
-		dev_err(pcie->dev, "failed to power on PHY: %d\n", err);
+		dev_err(pcie->dev, "failed to power on PHY(s): %d\n", err);
 		return err;
 	}
 
@@ -942,9 +1064,9 @@ static void tegra_pcie_power_off(struct tegra_pcie *pcie)
 
 	/* TODO: disable and unprepare clocks? */
 
-	err = phy_power_off(pcie->phy);
+	err = tegra_pcie_phy_power_off(pcie);
 	if (err < 0)
-		dev_warn(pcie->dev, "failed to power off PHY: %d\n", err);
+		dev_err(pcie->dev, "failed to power off PHY(s): %d\n", err);
 
 	reset_control_assert(pcie->pcie_xrst);
 	reset_control_assert(pcie->afi_rst);
@@ -1049,6 +1171,99 @@ static int tegra_pcie_resets_get(struct tegra_pcie *pcie)
 	return 0;
 }
 
+static int tegra_pcie_phys_get_legacy(struct tegra_pcie *pcie)
+{
+	int err;
+
+	pcie->phy = devm_phy_optional_get(pcie->dev, "pcie");
+	if (IS_ERR(pcie->phy)) {
+		err = PTR_ERR(pcie->phy);
+		dev_err(pcie->dev, "failed to get PHY: %d\n", err);
+		return err;
+	}
+
+	err = phy_init(pcie->phy);
+	if (err < 0) {
+		dev_err(pcie->dev, "failed to initialize PHY: %d\n", err);
+		return err;
+	}
+
+	pcie->legacy_phy = true;
+
+	return 0;
+}
+
+static struct phy *devm_of_phy_optional_get_index(struct device *dev,
+						  struct device_node *np,
+						  const char *consumer,
+						  unsigned int index)
+{
+	struct phy *phy;
+	char *name;
+
+	name = kasprintf(GFP_KERNEL, "%s-%u", consumer, index);
+	if (!name)
+		return ERR_PTR(-ENOMEM);
+
+	phy = devm_of_phy_get(dev, np, name);
+	kfree(name);
+
+	if (IS_ERR(phy) && PTR_ERR(phy) == -ENODEV)
+		phy = NULL;
+
+	return phy;
+}
+
+static int tegra_pcie_port_get_phys(struct tegra_pcie_port *port)
+{
+	struct device *dev = port->pcie->dev;
+	struct phy *phy;
+	unsigned int i;
+	int err;
+
+	port->phys = devm_kcalloc(dev, sizeof(phy), port->lanes, GFP_KERNEL);
+	if (!port->phys)
+		return -ENOMEM;
+
+	for (i = 0; i < port->lanes; i++) {
+		phy = devm_of_phy_optional_get_index(dev, port->np, "pcie", i);
+		if (IS_ERR(phy)) {
+			dev_err(dev, "failed to get PHY#%u: %ld\n", i,
+				PTR_ERR(phy));
+			return PTR_ERR(phy);
+		}
+
+		err = phy_init(phy);
+		if (err < 0) {
+			dev_err(dev, "failed to initialize PHY#%u: %d\n", i,
+				err);
+			return err;
+		}
+
+		port->phys[i] = phy;
+	}
+
+	return 0;
+}
+
+static int tegra_pcie_phys_get(struct tegra_pcie *pcie)
+{
+	struct tegra_pcie_port *port;
+	int err;
+
+	if (of_get_property(pcie->dev->of_node, "phys", NULL) != NULL)
+		return tegra_pcie_phys_get_legacy(pcie);
+
+	list_for_each_entry(port, &pcie->ports, list) {
+		err = tegra_pcie_port_get_phys(port);
+		if (err < 0) {
+			return err;
+		}
+	}
+
+	return 0;
+}
+
 static int tegra_pcie_get_resources(struct tegra_pcie *pcie)
 {
 	struct platform_device *pdev = to_platform_device(pcie->dev);
@@ -1067,16 +1282,9 @@ static int tegra_pcie_get_resources(struct tegra_pcie *pcie)
 		return err;
 	}
 
-	pcie->phy = devm_phy_optional_get(pcie->dev, "pcie");
-	if (IS_ERR(pcie->phy)) {
-		err = PTR_ERR(pcie->phy);
-		dev_err(&pdev->dev, "failed to get PHY: %d\n", err);
-		return err;
-	}
-
-	err = phy_init(pcie->phy);
+	err = tegra_pcie_phys_get(pcie);
 	if (err < 0) {
-		dev_err(&pdev->dev, "failed to initialize PHY: %d\n", err);
+		dev_err(&pdev->dev, "failed to get PHYs: %d\n", err);
 		return err;
 	}
 
@@ -1752,6 +1960,7 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
 		rp->index = index;
 		rp->lanes = value;
 		rp->pcie = pcie;
+		rp->np = port;
 
 		rp->base = devm_ioremap_resource(pcie->dev, &rp->regs);
 		if (IS_ERR(rp->base))