Message ID | 1457452094-5409-1-git-send-email-thierry.reding@gmail.com (mailing list archive)
---|---
State | New, archived
Delegated to: | Bjorn Helgaas
On 03/08/2016 08:48 AM, Thierry Reding wrote:
> From: Thierry Reding <treding@nvidia.com>
>
> Changes to the pad controller device tree binding have required that
> each lane be associated with a separate PHY.

I still don't think this has anything to do with DT bindings. Rather, the
definition of a PHY (in HW and the Linux PHY subsystem) is a single lane.
That fact then requires drivers to support a PHY per lane rather than a
single multi-lane PHY, and equally means the DT bindings must be written
according to the correct definition of a PHY.

Still, I suppose the commit description is fine as is.

> Update the PCI host bridge
> device tree binding to allow each root port to define the list of PHYs
> required to drive the lanes associated with it.

> diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt

> +Required properties for Tegra124 and later:
> +- phys: Must contain an phandle to a PHY for each entry in phy-names.
> +- phy-names: Must include an entry for each active lane. Note that the number
> +  of entries does not have to (though usually will) be equal to the specified
> +  number of lanes in the nvidia,num-lanes property. Entries are of the form
> +  "pcie-N": where N ranges from 0 to the value specified in nvidia,num-lanes.

When would the number of PHYs not equal the number of lanes? I thought the
whole point of this patch was to switch to per-lane PHYs? Perhaps I'm just
misremembering some exception, so there may be no need to change this.

> Example:
>
> SoC DTSI:
> @@ -169,6 +179,9 @@ SoC DTSI:
> 		ranges;
>
> 		nvidia,num-lanes = <2>;
> +
> +		phys = <&{/padctl@0,7009f000/pads/pcie/pcie-4}>;
> +		phy-names = "pcie-0";
> 	};

The example shows a Tegra20 PCIe controller, yet includes
Tegra124-or-greater properties. That seems a bit odd. Should the changes
to the example be dropped, or does "Required properties for Tegra124 and
later" mean "Required for T124+, optional for earlier chips"?

Conceptually this change is fine by me though.

--
To unsubscribe from this list: send the line "unsubscribe linux-pci" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Tue, Mar 08, 2016 at 04:48:13PM +0100, Thierry Reding wrote:
> From: Thierry Reding <treding@nvidia.com>
>
> Changes to the pad controller device tree binding have required that
> each lane be associated with a separate PHY. Update the PCI host bridge
> device tree binding to allow each root port to define the list of PHYs
> required to drive the lanes associated with it.
>
> Signed-off-by: Thierry Reding <treding@nvidia.com>
> ---
>  .../devicetree/bindings/pci/nvidia,tegra20-pcie.txt | 18 +++++++++++++++++-
>  1 file changed, 17 insertions(+), 1 deletion(-)

Acked-by: Rob Herring <robh@kernel.org>
On Wed, Mar 16, 2016 at 10:51:58AM -0600, Stephen Warren wrote:
> On 03/08/2016 08:48 AM, Thierry Reding wrote:
> > From: Thierry Reding <treding@nvidia.com>
> >
> > Changes to the pad controller device tree binding have required that
> > each lane be associated with a separate PHY.
>
> I still don't think this has anything to do with DT bindings. Rather, the
> definition of a PHY (in HW and the Linux PHY subsystem) is a single lane.
> That fact then requires drivers to support a PHY per lane rather than a
> single multi-lane PHY, and equally means the DT bindings must be written
> according to the correct definition of a PHY.
>
> Still, I suppose the commit description is fine as is.

I've reworded the commit message to give a more accurate rationale for
the change. I'll be posting a v5 soon.

> > Update the PCI host bridge
> > device tree binding to allow each root port to define the list of PHYs
> > required to drive the lanes associated with it.
>
> > diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
>
> > +Required properties for Tegra124 and later:
> > +- phys: Must contain an phandle to a PHY for each entry in phy-names.
> > +- phy-names: Must include an entry for each active lane. Note that the number
> > +  of entries does not have to (though usually will) be equal to the specified
> > +  number of lanes in the nvidia,num-lanes property. Entries are of the form
> > +  "pcie-N": where N ranges from 0 to the value specified in nvidia,num-lanes.
>
> When would the number of PHYs not equal the number of lanes? I thought the
> whole point of this patch was to switch to per-lane PHYs? Perhaps I'm just
> misremembering some exception, so there may be no need to change this.

This is useful to support the case where we want to connect a x1 or x2
device to a root port that is configured to drive more lanes. It's a
rather unusual configuration, but it would be possible for example to
have an onboard x1 ethernet card, but the board layout is such that it
runs in x1/x2 mode, with the ethernet card connected to the x2 port.

> > Example:
> >
> > SoC DTSI:
> > @@ -169,6 +179,9 @@ SoC DTSI:
> > 		ranges;
> >
> > 		nvidia,num-lanes = <2>;
> > +
> > +		phys = <&{/padctl@0,7009f000/pads/pcie/pcie-4}>;
> > +		phy-names = "pcie-0";
> > 	};
>
> The example shows a Tegra20 PCIe controller, yet includes
> Tegra124-or-greater properties. That seems a bit odd. Should the changes to
> the example be dropped, or does "Required properties for Tegra124 and later"
> mean "Required for T124+, optional for earlier chips"?

I've annotated these properties with "for Tegra124 and later", hopefully
that clarifies that these properties are only valid on Tegra124 and
later chips. The reason is that earlier chips don't use them (Tegra114
didn't support PCIe; Tegra30 and Tegra20 did, but had the PHY registers
within the PCI host bridge I/O memory).

Thierry
On 04/13/2016 10:22 AM, Thierry Reding wrote:
> On Wed, Mar 16, 2016 at 10:51:58AM -0600, Stephen Warren wrote:
>> On 03/08/2016 08:48 AM, Thierry Reding wrote:
>>> From: Thierry Reding <treding@nvidia.com>
>>>
>>> Changes to the pad controller device tree binding have required that
>>> each lane be associated with a separate PHY.
>>
>> I still don't think this has anything to do with DT bindings. Rather, the
>> definition of a PHY (in HW and the Linux PHY subsystem) is a single lane.
>> That fact then requires drivers to support a PHY per lane rather than a
>> single multi-lane PHY, and equally means the DT bindings must be written
>> according to the correct definition of a PHY.
>>
>> Still, I suppose the commit description is fine as is.
>
> I've reworded the commit message to give a more accurate rationale for
> the change. I'll be posting a v5 soon.
>
>>> Update the PCI host bridge
>>> device tree binding to allow each root port to define the list of PHYs
>>> required to drive the lanes associated with it.
>>
>>> diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
>>
>>> +Required properties for Tegra124 and later:
>>> +- phys: Must contain an phandle to a PHY for each entry in phy-names.
>>> +- phy-names: Must include an entry for each active lane. Note that the number
>>> +  of entries does not have to (though usually will) be equal to the specified
>>> +  number of lanes in the nvidia,num-lanes property. Entries are of the form
>>> +  "pcie-N": where N ranges from 0 to the value specified in nvidia,num-lanes.
>>
>> When would the number of PHYs not equal the number of lanes? I thought the
>> whole point of this patch was to switch to per-lane PHYs? Perhaps I'm just
>> misremembering some exception, so there may be no need to change this.
>
> This is useful to support the case where we want to connect a x1 or x2
> device to a root port that is configured to drive more lanes. It's a
> rather unusual configuration, but it would be possible for example to
> have an onboard x1 ethernet card, but the board layout is such that it
> runs in x1/x2 mode, with the ethernet card connected to the x2 port.

Does the controller HW actually work correctly in such a mode?

Obviously a fully initialized x4 controller has to correctly handle being
attached solely to a x1 device. However, that's a different case to simply
not initializing 3 of the 4 PHYs. It's plausible the controller handles
this just fine, or that it hangs up or otherwise misbehaves if some of the
PHYs aren't enabled and hence it can't even detect whether something is
attached to them or not.

Either way, adding your explanation into the binding would be useful to
highlight the reason for the special case.
On Wed, Apr 13, 2016 at 11:04:56AM -0600, Stephen Warren wrote:
> On 04/13/2016 10:22 AM, Thierry Reding wrote:
> > On Wed, Mar 16, 2016 at 10:51:58AM -0600, Stephen Warren wrote:
> > > On 03/08/2016 08:48 AM, Thierry Reding wrote:
> > > > From: Thierry Reding <treding@nvidia.com>
> > > >
> > > > Changes to the pad controller device tree binding have required that
> > > > each lane be associated with a separate PHY.
> > >
> > > I still don't think this has anything to do with DT bindings. Rather, the
> > > definition of a PHY (in HW and the Linux PHY subsystem) is a single lane.
> > > That fact then requires drivers to support a PHY per lane rather than a
> > > single multi-lane PHY, and equally means the DT bindings must be written
> > > according to the correct definition of a PHY.
> > >
> > > Still, I suppose the commit description is fine as is.
> >
> > I've reworded the commit message to give a more accurate rationale for
> > the change. I'll be posting a v5 soon.
> >
> > > > Update the PCI host bridge
> > > > device tree binding to allow each root port to define the list of PHYs
> > > > required to drive the lanes associated with it.
> > >
> > > > diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
> > >
> > > > +Required properties for Tegra124 and later:
> > > > +- phys: Must contain an phandle to a PHY for each entry in phy-names.
> > > > +- phy-names: Must include an entry for each active lane. Note that the number
> > > > +  of entries does not have to (though usually will) be equal to the specified
> > > > +  number of lanes in the nvidia,num-lanes property. Entries are of the form
> > > > +  "pcie-N": where N ranges from 0 to the value specified in nvidia,num-lanes.
> > >
> > > When would the number of PHYs not equal the number of lanes? I thought the
> > > whole point of this patch was to switch to per-lane PHYs? Perhaps I'm just
> > > misremembering some exception, so there may be no need to change this.
> >
> > This is useful to support the case where we want to connect a x1 or x2
> > device to a root port that is configured to drive more lanes. It's a
> > rather unusual configuration, but it would be possible for example to
> > have an onboard x1 ethernet card, but the board layout is such that it
> > runs in x1/x2 mode, with the ethernet card connected to the x2 port.
>
> Does the controller HW actually work correctly in such a mode?

I think it does, and up until a few minutes ago I was even sure that I
had tested it once. But looking at the various boards that I have I
don't think I actually have test equipment that's wired the proper way
to test this.

> Obviously a fully initialized x4 controller has to correctly handle being
> attached solely to a x1 device. However, that's a different case to simply
> not initializing 3 of the 4 PHYs. It's plausible the controller handles this
> just fine, or that it hangs up or otherwise misbehaves if some of the PHYs
> aren't enabled and hence it can't even detect whether something is attached
> to them or not. Either way, adding your explanation into the binding would
> be useful to highlight the reason for the special case.

Perhaps for now it would be better to make the binding stricter. The
wording could be relaxed if we ever determine that it still works
correctly with a number of PHYs smaller than the number of lanes.

Thierry
On Thu, Apr 14, 2016 at 05:29:05PM +0200, Thierry Reding wrote:
> On Wed, Apr 13, 2016 at 11:04:56AM -0600, Stephen Warren wrote:
> > On 04/13/2016 10:22 AM, Thierry Reding wrote:
> > > On Wed, Mar 16, 2016 at 10:51:58AM -0600, Stephen Warren wrote:
> > > > On 03/08/2016 08:48 AM, Thierry Reding wrote:
> > > > > From: Thierry Reding <treding@nvidia.com>
> > > > >
> > > > > Changes to the pad controller device tree binding have required that
> > > > > each lane be associated with a separate PHY.
> > > >
> > > > I still don't think this has anything to do with DT bindings. Rather, the
> > > > definition of a PHY (in HW and the Linux PHY subsystem) is a single lane.
> > > > That fact then requires drivers to support a PHY per lane rather than a
> > > > single multi-lane PHY, and equally means the DT bindings must be written
> > > > according to the correct definition of a PHY.
> > > >
> > > > Still, I suppose the commit description is fine as is.
> > >
> > > I've reworded the commit message to give a more accurate rationale for
> > > the change. I'll be posting a v5 soon.
> > >
> > > > > Update the PCI host bridge
> > > > > device tree binding to allow each root port to define the list of PHYs
> > > > > required to drive the lanes associated with it.
> > > >
> > > > > diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
> > > >
> > > > > +Required properties for Tegra124 and later:
> > > > > +- phys: Must contain an phandle to a PHY for each entry in phy-names.
> > > > > +- phy-names: Must include an entry for each active lane. Note that the number
> > > > > +  of entries does not have to (though usually will) be equal to the specified
> > > > > +  number of lanes in the nvidia,num-lanes property. Entries are of the form
> > > > > +  "pcie-N": where N ranges from 0 to the value specified in nvidia,num-lanes.
> > > >
> > > > When would the number of PHYs not equal the number of lanes? I thought the
> > > > whole point of this patch was to switch to per-lane PHYs? Perhaps I'm just
> > > > misremembering some exception, so there may be no need to change this.
> > >
> > > This is useful to support the case where we want to connect a x1 or x2
> > > device to a root port that is configured to drive more lanes. It's a
> > > rather unusual configuration, but it would be possible for example to
> > > have an onboard x1 ethernet card, but the board layout is such that it
> > > runs in x1/x2 mode, with the ethernet card connected to the x2 port.
> >
> > Does the controller HW actually work correctly in such a mode?
>
> I think it does, and up until a few minutes ago I was even sure that I
> had tested it once. But looking at the various boards that I have I
> don't think I actually have test equipment that's wired the proper way
> to test this.
>
> > Obviously a fully initialized x4 controller has to correctly handle being
> > attached solely to a x1 device. However, that's a different case to simply
> > not initializing 3 of the 4 PHYs. It's plausible the controller handles this
> > just fine, or that it hangs up or otherwise misbehaves if some of the PHYs
> > aren't enabled and hence it can't even detect whether something is attached
> > to them or not. Either way, adding your explanation into the binding would
> > be useful to highlight the reason for the special case.
>
> Perhaps for now it would be better to make the binding stricter. The
> wording could be relaxed if we ever determine that it still works
> correctly with a number of PHYs smaller than the number of lanes.

Going over the patches again I realized that Jetson TK1 is actually one
such case. The PCI host bridge controller is configured to run root port
0 using two lanes and root port 1 using one lane. However, only one lane
is connected to each port: root port 0 goes to the miniPCIe slot, which
takes a single lane (PEX4), while root port 1 goes to the onboard NIC,
which also takes a single lane (PEX2). x1 + x1 is an unsupported
configuration for the root complex, though, which is why it is
configured as x2 + x1.

I've tested that both the onboard NIC and a miniPCIe card work correctly
with this setup, so I think the wording in the binding, as well as the
example, are correct. I've left the original wording in place, but added
a couple more examples so that we have one per SoC generation, which
will hopefully clarify which properties should and shouldn't be used.

Thierry
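The Jetson TK1 wiring described above might be sketched as follows. This is an illustration based purely on the description in this mail, not copied from the actual board DTS; the root-port node names and the PEX-to-padctl lane mapping (pcie-4 for PEX4, pcie-2 for PEX2) are assumptions.

```dts
/* Root complex configured x2 + x1 (since x1 + x1 is unsupported), but
 * each root port only has one lane wired up, hence one PHY each. */
pci@1,0 {	/* root port 0: miniPCIe slot, single lane on PEX4 */
	nvidia,num-lanes = <2>;
	phys = <&{/padctl@0,7009f000/pads/pcie/pcie-4}>;
	phy-names = "pcie-0";
};

pci@2,0 {	/* root port 1: onboard NIC, single lane on PEX2 */
	nvidia,num-lanes = <1>;
	phys = <&{/padctl@0,7009f000/pads/pcie/pcie-2}>;
	phy-names = "pcie-0";
};
```

Root port 0 is the case the binding's "number of PHYs may be less than nvidia,num-lanes" wording covers: two lanes configured, one PHY listed.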
diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
index 75321ae23c08..033fe4b5afac 100644
--- a/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
+++ b/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
@@ -60,11 +60,14 @@ Required properties:
   - afi
   - pcie_x
 
-Required properties on Tegra124 and later:
+Required properties on Tegra124 and later (deprecated):
 - phys: Must contain an entry for each entry in phy-names.
 - phy-names: Must include the following entries:
   - pcie
 
+These properties are deprecated in favour of per-lane PHYs define in each of
+the root ports (see below).
+
 Power supplies for Tegra20:
 - avdd-pex-supply: Power supply for analog PCIe logic. Must supply 1.05 V.
 - vdd-pex-supply: Power supply for digital PCIe I/O. Must supply 1.05 V.
@@ -122,6 +125,13 @@ Required properties:
     - Root port 0 uses 4 lanes, root port 1 is unused.
     - Both root ports use 2 lanes.
 
+Required properties for Tegra124 and later:
+- phys: Must contain an phandle to a PHY for each entry in phy-names.
+- phy-names: Must include an entry for each active lane. Note that the number
+  of entries does not have to (though usually will) be equal to the specified
+  number of lanes in the nvidia,num-lanes property. Entries are of the form
+  "pcie-N": where N ranges from 0 to the value specified in nvidia,num-lanes.
+
 Example:
 
 SoC DTSI:
@@ -169,6 +179,9 @@ SoC DTSI:
 		ranges;
 
 		nvidia,num-lanes = <2>;
+
+		phys = <&{/padctl@0,7009f000/pads/pcie/pcie-4}>;
+		phy-names = "pcie-0";
 	};
 
 	pci@2,0 {
@@ -183,6 +196,9 @@ SoC DTSI:
 		ranges;
 
 		nvidia,num-lanes = <2>;
+
+		phys = <&{/padctl@0,7009f000/pads/pcie/pcie-2}>;
+		phy-names = "pcie-0";
 	};
 };