Message ID | 20190513050626.14991-3-vidyas@nvidia.com (mailing list archive)
---|---
State | New, archived
Series | Add Tegra194 PCIe support
On Mon, May 13, 2019 at 10:36:13AM +0530, Vidya Sagar wrote:
> Export pcie_pme_disable_msi() & pcie_pme_no_msi() APIs to enable drivers
> using these APIs be able to build as loadable modules.

But this is a global setting.  If your root port is broken you need
a per-rootport quirk instead.
On 5/13/2019 12:55 PM, Christoph Hellwig wrote:
> On Mon, May 13, 2019 at 10:36:13AM +0530, Vidya Sagar wrote:
>> Export pcie_pme_disable_msi() & pcie_pme_no_msi() APIs to enable drivers
>> using these APIs be able to build as loadable modules.
>
> But this is a global setting.  If your root port is broken you need
> a per-rootport quirk instead.
>
There is nothing broken in Tegra194 root port as such, rather, this is more
of software configuration choice and we are going with legacy interrupts than
MSI interrupts (as Tegra194 doesn't support raising PME interrupts through MSI
and please note that this doesn't mean root port is broken). Since Tegra194 has
only Synopsys DesignWare core based host controllers and not any other hosts, I
think it is fine to call this API in driver. Also, since this is a global setting,
calling this APIs from anywhere (be it from quirk or from driver) would have the
same effect, or did I miss anything here?
On Tue, May 14, 2019 at 09:00:19AM +0530, Vidya Sagar wrote:
> There is nothing broken in Tegra194 root port as such, rather, this is more
> of software configuration choice and we are going with legacy interrupts than
> MSI interrupts (as Tegra194 doesn't support raising PME interrupts through MSI
> and please note that this doesn't mean root port is broken).

It seems odd at least and probably broken if it advertises MSI interrupts,
but then doesn't actually support them fully.

> Since Tegra194 has
> only Synopsys DesignWare core based host controllers and not any other hosts, I
> think it is fine to call this API in driver. Also, since this is a global setting,
> calling this APIs from anywhere (be it from quirk or from driver) would have the
> same effect, or did I miss anything here?

Maybe in your current particular case this is true, but in general we see
more and more systems with different kinds of root ports, and having a
driver set a global variable due to its own hardware's quirk defeats that.

So the right thing here is to replace the global flag with a per-device
one first before setting it for a driver.
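A rough sketch of the per-device direction suggested above might look like the
following (hedged illustration only: the PCI_DEV_FLAGS_NO_PME_MSI name and the
helper names are made up for this sketch and are not existing kernel API):

/*
 * Hypothetical sketch: track the "PME must not use MSI" decision per root
 * port via pci_dev::dev_flags instead of the global pcie_pme_msi_disabled
 * variable.  The flag value below is invented for illustration.
 */
#include <linux/pci.h>

#define PCI_DEV_FLAGS_NO_PME_MSI	(1 << 12)	/* hypothetical flag */

static void example_mark_port_no_pme_msi(struct pci_dev *port)
{
	/* a host controller driver (or quirk) flags only its own root port */
	port->dev_flags |= PCI_DEV_FLAGS_NO_PME_MSI;
}

static bool example_pme_no_msi(struct pci_dev *port)
{
	/* the PME service would then check the specific port it serves */
	return port->dev_flags & PCI_DEV_FLAGS_NO_PME_MSI;
}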
On Mon, May 13, 2019 at 10:36:13AM +0530, Vidya Sagar wrote:
> Export pcie_pme_disable_msi() & pcie_pme_no_msi() APIs to enable drivers
> using these APIs be able to build as loadable modules.
>
> Signed-off-by: Vidya Sagar <vidyas@nvidia.com>

Nak, as-is.

1) The argument for why this is needed is unconvincing.  If device
   advertises MSI support, we should be able to use it.

2) If it turns out we really need this, it should be some sort of
   per-device setting rather than a global thing like this.

> ---
> Changes since [v5]:
> * Corrected inline implementation of pcie_pme_no_msi() API
>
> Changes since [v4]:
> * None
>
> Changes since [v3]:
> * None
>
> Changes since [v2]:
> * Exported pcie_pme_no_msi() API after making pcie_pme_msi_disabled a static
>
> Changes since [v1]:
> * This is a new patch in v2 series
>
>  drivers/pci/pcie/pme.c     | 14 +++++++++++++-
>  drivers/pci/pcie/portdrv.h | 14 ++------------
>  2 files changed, 15 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/pci/pcie/pme.c b/drivers/pci/pcie/pme.c
> index 54d593d10396..d5e0ea4a62fc 100644
> --- a/drivers/pci/pcie/pme.c
> +++ b/drivers/pci/pcie/pme.c
> @@ -25,7 +25,19 @@
>   * that using MSI for PCIe PME signaling doesn't play well with PCIe PME-based
>   * wake-up from system sleep states.
>   */
> -bool pcie_pme_msi_disabled;
> +static bool pcie_pme_msi_disabled;
> +
> +void pcie_pme_disable_msi(void)
> +{
> +	pcie_pme_msi_disabled = true;
> +}
> +EXPORT_SYMBOL_GPL(pcie_pme_disable_msi);
> +
> +bool pcie_pme_no_msi(void)
> +{
> +	return pcie_pme_msi_disabled;
> +}
> +EXPORT_SYMBOL_GPL(pcie_pme_no_msi);
>
>  static int __init pcie_pme_setup(char *str)
>  {
> diff --git a/drivers/pci/pcie/portdrv.h b/drivers/pci/pcie/portdrv.h
> index 944827a8c7d3..1d441fe26c51 100644
> --- a/drivers/pci/pcie/portdrv.h
> +++ b/drivers/pci/pcie/portdrv.h
> @@ -129,18 +129,8 @@ void pcie_port_bus_unregister(void);
>  struct pci_dev;
>
>  #ifdef CONFIG_PCIE_PME
> -extern bool pcie_pme_msi_disabled;
> -
> -static inline void pcie_pme_disable_msi(void)
> -{
> -	pcie_pme_msi_disabled = true;
> -}
> -
> -static inline bool pcie_pme_no_msi(void)
> -{
> -	return pcie_pme_msi_disabled;
> -}
> -
> +void pcie_pme_disable_msi(void);
> +bool pcie_pme_no_msi(void);
>  void pcie_pme_interrupt_enable(struct pci_dev *dev, bool enable);
>  #else /* !CONFIG_PCIE_PME */
>  static inline void pcie_pme_disable_msi(void) {}
> --
> 2.17.1
>
On Tue, May 14, 2019 at 09:00:19AM +0530, Vidya Sagar wrote:
> On 5/13/2019 12:55 PM, Christoph Hellwig wrote:
> > On Mon, May 13, 2019 at 10:36:13AM +0530, Vidya Sagar wrote:
> > > Export pcie_pme_disable_msi() & pcie_pme_no_msi() APIs to enable drivers
> > > using these APIs be able to build as loadable modules.
> >
> > But this is a global setting.  If your root port is broken you need
> > a per-rootport quirk instead.
> >
> There is nothing broken in Tegra194 root port as such, rather, this
> is more of software configuration choice and we are going with
> legacy interrupts than MSI interrupts (as Tegra194 doesn't support
> raising PME interrupts through MSI and please note that this doesn't
> mean root port is broken).

I think the port *is* broken.  PCIe r4.0, sec 6.1.6, says

  If the Root Port is enabled for edge-triggered interrupt signaling
  using MSI or MSI-X, an interrupt message must be sent every time the
  logical AND of the following conditions transitions from FALSE to
  TRUE:

  * The associated vector is unmasked (not applicable if MSI does
    not support PVM).

  * The PME Interrupt Enable bit in the Root Control register is set
    to 1b.

  * The PME Status bit in the Root Status register is set.

The Tegra194 root port advertises MSI support, so the above should
apply.

> Since Tegra194 has only Synopsys DesignWare core based host
> controllers and not any other hosts, I think it is fine to call this
> API in driver.

It's fine to add a per-device quirk to set pdev->no_msi or something
similar.

Bjorn
On 5/16/2019 7:04 PM, Bjorn Helgaas wrote:
> On Tue, May 14, 2019 at 09:00:19AM +0530, Vidya Sagar wrote:
>> On 5/13/2019 12:55 PM, Christoph Hellwig wrote:
>>> On Mon, May 13, 2019 at 10:36:13AM +0530, Vidya Sagar wrote:
>>>> Export pcie_pme_disable_msi() & pcie_pme_no_msi() APIs to enable drivers
>>>> using these APIs be able to build as loadable modules.
>>>
>>> But this is a global setting.  If your root port is broken you need
>>> a per-rootport quirk instead.
>>>
>> There is nothing broken in Tegra194 root port as such, rather, this
>> is more of software configuration choice and we are going with
>> legacy interrupts than MSI interrupts (as Tegra194 doesn't support
>> raising PME interrupts through MSI and please note that this doesn't
>> mean root port is broken).
>
> I think the port *is* broken.  PCIe r4.0, sec 6.1.6, says
>
>   If the Root Port is enabled for edge-triggered interrupt signaling
>   using MSI or MSI-X, an interrupt message must be sent every time the
>   logical AND of the following conditions transitions from FALSE to
>   TRUE:
>
>   * The associated vector is unmasked (not applicable if MSI does
>     not support PVM).
>
>   * The PME Interrupt Enable bit in the Root Control register is set
>     to 1b.
>
>   * The PME Status bit in the Root Status register is set.
>
> The Tegra194 root port advertises MSI support, so the above should
> apply.
I had a discussion with our hardware engineers and we are of the
opinion that the root port is not really broken w.r.t MSI as spec
doesn't clearly say that if root port advertises MSI support, it
must generate MSI interrupts for PME. All that it says is, if MSI is
enabled, then MSI should be raised for PME events. Here, by
'enable', we understand that as enabling at hardware level to
generate MSI interrupt which is not the case with Tegra194. In
Tegra194, root port is enabled to generate MSI only for hot-plug
events and legacy interrupts are used for PME, AER.
Having said that I'm fine to add a quirk based on Vendor-ID and Device-IDs
of root port in Tegra194 to set pdev->no_msi to '1'.
>
>> Since Tegra194 has only Synopsys DesignWare core based host
>> controllers and not any other hosts, I think it is fine to call this
>> API in driver.
>
> It's fine to add a per-device quirk to set pdev->no_msi or something
> similar.
>
> Bjorn
>
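For illustration, the per-device quirk being discussed here could be sketched
roughly as follows. This is not the patch under review; the function name is
made up, and 0x1ad0 is the NVIDIA root port device ID visible in the lspci
dump later in the thread:

/*
 * Sketch only: a quirk of the kind that would live in drivers/pci/quirks.c,
 * matching the Tegra194 root port (vendor 0x10de, device 0x1ad0) and setting
 * pdev->no_msi so port services fall back to INTx for this port.
 */
#include <linux/pci.h>

static void quirk_tegra194_rp_no_msi(struct pci_dev *pdev)
{
	pdev->no_msi = 1;	/* PME/AER are signaled with INTx only */
}
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_NVIDIA, 0x1ad0, quirk_tegra194_rp_no_msi);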
On Fri, May 17, 2019 at 01:49:49PM +0530, Vidya Sagar wrote:
> On 5/16/2019 7:04 PM, Bjorn Helgaas wrote:
> > On Tue, May 14, 2019 at 09:00:19AM +0530, Vidya Sagar wrote:
> > > On 5/13/2019 12:55 PM, Christoph Hellwig wrote:
> > > > On Mon, May 13, 2019 at 10:36:13AM +0530, Vidya Sagar wrote:
> > > > > Export pcie_pme_disable_msi() & pcie_pme_no_msi() APIs to enable drivers
> > > > > using these APIs be able to build as loadable modules.
> > > >
> > > > But this is a global setting.  If your root port is broken you need
> > > > a per-rootport quirk instead.
> > > >
> > > There is nothing broken in Tegra194 root port as such, rather, this
> > > is more of software configuration choice and we are going with
> > > legacy interrupts than MSI interrupts (as Tegra194 doesn't support
> > > raising PME interrupts through MSI and please note that this doesn't
> > > mean root port is broken).
> >
> > I think the port *is* broken.  PCIe r4.0, sec 6.1.6, says
> >
> >   If the Root Port is enabled for edge-triggered interrupt signaling
> >   using MSI or MSI-X, an interrupt message must be sent every time the
> >   logical AND of the following conditions transitions from FALSE to
> >   TRUE:
> >
> >   * The associated vector is unmasked (not applicable if MSI does
> >     not support PVM).
> >
> >   * The PME Interrupt Enable bit in the Root Control register is set
> >     to 1b.
> >
> >   * The PME Status bit in the Root Status register is set.
> >
> > The Tegra194 root port advertises MSI support, so the above should
> > apply.
> I had a discussion with our hardware engineers and we are of the
> opinion that the root port is not really broken w.r.t MSI as spec
> doesn't clearly say that if root port advertises MSI support, it
> must generate MSI interrupts for PME. All that it says is, if MSI is
> enabled, then MSI should be raised for PME events. Here, by
> 'enable', we understand that as enabling at hardware level to
> generate MSI interrupt which is not the case with Tegra194. In
> Tegra194, root port is enabled to generate MSI only for hot-plug
> events and legacy interrupts are used for PME, AER.

Do you have "lspci -vvxxx" output for the root ports handy?

If there's some clue in the standard config space that would tell us
that MSI works for some events but not others, we could make the PCI
core pay attention to it.  That would be the best solution because it
wouldn't require Tegra-specific code.

If this situation requires Tegra-specific code, that becomes an issue
if you ever want to use the part in an ACPI system because the ACPI
host bridge driver is generic and there isn't a place to put
device-specific code.

Bjorn
On 5/17/2019 6:54 PM, Bjorn Helgaas wrote:
> On Fri, May 17, 2019 at 01:49:49PM +0530, Vidya Sagar wrote:
>> On 5/16/2019 7:04 PM, Bjorn Helgaas wrote:
>>> On Tue, May 14, 2019 at 09:00:19AM +0530, Vidya Sagar wrote:
>>>> On 5/13/2019 12:55 PM, Christoph Hellwig wrote:
>>>>> On Mon, May 13, 2019 at 10:36:13AM +0530, Vidya Sagar wrote:
>>>>>> Export pcie_pme_disable_msi() & pcie_pme_no_msi() APIs to enable drivers
>>>>>> using these APIs be able to build as loadable modules.
>>>>>
>>>>> But this is a global setting.  If your root port is broken you need
>>>>> a per-rootport quirk instead.
>>>>>
>>>> There is nothing broken in Tegra194 root port as such, rather, this
>>>> is more of software configuration choice and we are going with
>>>> legacy interrupts than MSI interrupts (as Tegra194 doesn't support
>>>> raising PME interrupts through MSI and please note that this doesn't
>>>> mean root port is broken).
>>>
>>> I think the port *is* broken.  PCIe r4.0, sec 6.1.6, says
>>>
>>>   If the Root Port is enabled for edge-triggered interrupt signaling
>>>   using MSI or MSI-X, an interrupt message must be sent every time the
>>>   logical AND of the following conditions transitions from FALSE to
>>>   TRUE:
>>>
>>>   * The associated vector is unmasked (not applicable if MSI does
>>>     not support PVM).
>>>
>>>   * The PME Interrupt Enable bit in the Root Control register is set
>>>     to 1b.
>>>
>>>   * The PME Status bit in the Root Status register is set.
>>>
>>> The Tegra194 root port advertises MSI support, so the above should
>>> apply.
>> I had a discussion with our hardware engineers and we are of the
>> opinion that the root port is not really broken w.r.t MSI as spec
>> doesn't clearly say that if root port advertises MSI support, it
>> must generate MSI interrupts for PME. All that it says is, if MSI is
>> enabled, then MSI should be raised for PME events. Here, by
>> 'enable', we understand that as enabling at hardware level to
>> generate MSI interrupt which is not the case with Tegra194. In
>> Tegra194, root port is enabled to generate MSI only for hot-plug
>> events and legacy interrupts are used for PME, AER.
>
> Do you have "lspci -vvxxx" output for the root ports handy?
>
> If there's some clue in the standard config space that would tell us
> that MSI works for some events but not others, we could make the PCI
> core pay attention to it.  That would be the best solution because it
> wouldn't require Tegra-specific code.
Here is the output of 'lspci -vvxxx' for one of Tegra194's root ports.
0005:00:00.0 PCI bridge: NVIDIA Corporation Device 1ad0 (rev a1) (prog-if 00 [Normal decode])
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 50
        Bus: primary=00, secondary=01, subordinate=ff, sec-latency=0
        I/O behind bridge: None
        Memory behind bridge: 40000000-400fffff [size=1M]
        Prefetchable memory behind bridge: None
        Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
        BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16- MAbort- >Reset- FastB2B-
                PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
                Address: 0000000000000000  Data: 0000
                Masking: 00000000  Pending: 00000000
        Capabilities: [70] Express (v2) Root Port (Slot-), MSI 00
                DevCap: MaxPayload 256 bytes, PhantFunc 0
                        ExtTag- RBE+
                DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
                        MaxPayload 128 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
                LnkCap: Port #0, Speed 16GT/s, Width x8, ASPM L0s L1, Exit Latency L0s <1us, L1 <64us
                        ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt+ AutBWInt-
                LnkSta: Speed 5GT/s (downgraded), Width x1 (downgraded)
                        TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt+
                RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
                RootCap: CRSVisible+
                RootSta: PME ReqID 0000, PMEStatus- PMEPending-
                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR+, OBFF Not Supported ARIFwd-
                         AtomicOpsCap: Routing- 32bit- 64bit- 128bitCAS-
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR+, OBFF Disabled ARIFwd-
                         AtomicOpsCtl: ReqEn- EgressBlck-
                LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
                         EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
        Capabilities: [b0] MSI-X: Enable- Count=8 Masked-
                Vector table: BAR=2 offset=00000000
                PBA: BAR=2 offset=00010000
        Capabilities: [100 v2] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
                        MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
                HeaderLog: 00000000 00000000 00000000 00000000
                RootCmd: CERptEn+ NFERptEn+ FERptEn+
                RootSta: CERcvd- MultCERcvd- UERcvd- MultUERcvd-
                         FirstFatal- NonFatalMsg- FatalMsg- IntMsg 0
                ErrorSrc: ERR_COR: 0000 ERR_FATAL/NONFATAL: 0000
        Capabilities: [148 v1] Secondary PCI Express <?>
        Capabilities: [168 v1] Physical Layer 16.0 GT/s <?>
        Capabilities: [190 v1] Lane Margining at the Receiver <?>
        Capabilities: [1c0 v1] L1 PM Substates
                L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
                          PortCommonModeRestoreTime=60us PortTPowerOnTime=40us
                L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
                           T_CommonMode=10us LTR1.2_Threshold=0ns
                L1SubCtl2: T_PwrOn=10us
        Capabilities: [1d0 v1] Vendor Specific Information: ID=0002 Rev=4 Len=100 <?>
        Capabilities: [2d0 v1] Vendor Specific Information: ID=0001 Rev=1 Len=038 <?>
        Capabilities: [308 v1] Data Link Feature <?>
        Capabilities: [314 v1] Precision Time Measurement
                PTMCap: Requester:+ Responder:+ Root:+
                PTMClockGranularity: 16ns
                PTMControl: Enabled:- RootSelected:-
                PTMEffectiveGranularity: Unknown
        Capabilities: [320 v1] Vendor Specific Information: ID=0004 Rev=1 Len=054 <?>
        Kernel driver in use: pcieport
00: de 10 d0 1a 07 01 10 00 a1 00 04 06 00 00 01 00
10: 00 00 00 00 00 00 00 00 00 01 ff 00 f0 00 00 00
20: 00 40 00 40 f1 ff 01 00 00 00 00 00 00 00 00 00
30: 00 00 00 00 40 00 00 00 00 00 00 00 32 01 02 00
40: 01 50 c3 c9 08 00 00 00 00 00 00 00 00 00 00 00
50: 05 70 80 01 00 00 00 00 00 00 00 00 00 00 00 00
60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
70: 10 b0 42 00 01 80 00 00 1f 28 10 00 84 4c 7b 00
80: 40 04 12 f0 00 00 00 00 c0 03 40 00 18 00 01 00
90: 00 00 00 00 1f 0c 01 00 00 04 00 00 1e 00 80 01
a0: 04 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00
b0: 11 00 07 00 02 00 00 00 02 00 01 00 00 00 00 00
c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

>
> If this situation requires Tegra-specific code, that becomes an issue
> if you ever want to use the part in an ACPI system because the ACPI
> host bridge driver is generic and there isn't a place to put
> device-specific code.
Thanks for bringing it up. I'll make a note of this and discuss about it
internally.
>
> Bjorn
>
On Fri, May 17, 2019 at 11:23:36PM +0530, Vidya Sagar wrote:
> On 5/17/2019 6:54 PM, Bjorn Helgaas wrote:
> > Do you have "lspci -vvxxx" output for the root ports handy?
> >
> > If there's some clue in the standard config space that would tell us
> > that MSI works for some events but not others, we could make the PCI
> > core pay attention to it.  That would be the best solution because it
> > wouldn't require Tegra-specific code.
>
> Here is the output of 'lspci -vvxxx' for one of Tegra194's root ports.

Thanks!

This port advertises both MSI and MSI-X, and neither one is enabled.
This particular port doesn't have a slot, so hotplug isn't applicable
to it.

But if I understand correctly, if MSI or MSI-X were enabled and the
port had a slot, the port would generate MSI/MSI-X hotplug interrupts.
But PME and AER events would still cause INTx interrupts (even with
MSI or MSI-X enabled).

Do I have that right?  I just want to make sure that the reason for
PME being INTx is a permanent hardware choice and that it's not
related to MSI and MSI-X currently being disabled.

> [lspci -vvxxx output quoted in full above snipped]
On 5/18/2019 12:25 AM, Bjorn Helgaas wrote:
> On Fri, May 17, 2019 at 11:23:36PM +0530, Vidya Sagar wrote:
>> On 5/17/2019 6:54 PM, Bjorn Helgaas wrote:
>>> Do you have "lspci -vvxxx" output for the root ports handy?
>>>
>>> If there's some clue in the standard config space that would tell us
>>> that MSI works for some events but not others, we could make the PCI
>>> core pay attention to it.  That would be the best solution because it
>>> wouldn't require Tegra-specific code.
>>
>> Here is the output of 'lspci -vvxxx' for one of Tegra194's root ports.
>
> Thanks!
>
> This port advertises both MSI and MSI-X, and neither one is enabled.
> This particular port doesn't have a slot, so hotplug isn't applicable
> to it.
>
> But if I understand correctly, if MSI or MSI-X were enabled and the
> port had a slot, the port would generate MSI/MSI-X hotplug interrupts.
> But PME and AER events would still cause INTx interrupts (even with
> MSI or MSI-X enabled).
>
> Do I have that right?  I just want to make sure that the reason for
> PME being INTx is a permanent hardware choice and that it's not
> related to MSI and MSI-X currently being disabled.
Yes. That's right. It's a hardware choice that our hardware engineers made to
use INTx for PME instead of MSI irrespective of MSI/MSI-X enabled/disabled
in the root port.
>
>> [lspci -vvxxx output quoted in full above snipped]
On Sat, May 18, 2019 at 07:28:29AM +0530, Vidya Sagar wrote:
> On 5/18/2019 12:25 AM, Bjorn Helgaas wrote:
> > On Fri, May 17, 2019 at 11:23:36PM +0530, Vidya Sagar wrote:
> > > On 5/17/2019 6:54 PM, Bjorn Helgaas wrote:
> > > > Do you have "lspci -vvxxx" output for the root ports handy?
> > > >
> > > > If there's some clue in the standard config space that would tell us
> > > > that MSI works for some events but not others, we could make the PCI
> > > > core pay attention to it.  That would be the best solution because it
> > > > wouldn't require Tegra-specific code.
> > >
> > > Here is the output of 'lspci -vvxxx' for one of Tegra194's root ports.
> >
> > Thanks!
> >
> > This port advertises both MSI and MSI-X, and neither one is enabled.
> > This particular port doesn't have a slot, so hotplug isn't applicable
> > to it.
> >
> > But if I understand correctly, if MSI or MSI-X were enabled and the
> > port had a slot, the port would generate MSI/MSI-X hotplug interrupts.
> > But PME and AER events would still cause INTx interrupts (even with
> > MSI or MSI-X enabled).
> >
> > Do I have that right?  I just want to make sure that the reason for
> > PME being INTx is a permanent hardware choice and that it's not
> > related to MSI and MSI-X currently being disabled.
>
> Yes. That's right. It's a hardware choice that our hardware engineers made to
> use INTx for PME instead of MSI irrespective of MSI/MSI-X enabled/disabled
> in the root port.

Here are more spec references that seem applicable:

  - PCIe r4.0, sec 7.7.1.2 (Message Control Register for MSI) says:

      MSI Enable – If Set and the MSI-X Enable bit in the MSI-X
      Message Control register (see Section 7.9.2) is Clear, the
      Function is permitted to use MSI to request service and is
      prohibited from using INTx interrupts.

  - PCIe r4.0, sec 7.7.2.2 (Message Control Register for MSI-X) says:

      MSI-X Enable – If Set and the MSI Enable bit in the MSI Message
      Control register (see Section 6.8.1.3) is Clear, the Function is
      permitted to use MSI-X to request service and is prohibited from
      using INTx interrupts (if implemented).

I read that to mean a device is prohibited from using MSI/MSI-X for
some interrupts and INTx for others.  Since Tegra194 cannot use
MSI/MSI-X for PME, it should use INTx for *all* interrupts.  That
makes the MSI/MSI-X Capabilities superfluous, and they should be
omitted.

If we set pdev->no_msi for Tegra194, we'll avoid MSI/MSI-X completely,
so we'll assume *all* interrupts including hotplug will be INTx.  Will
that work?
On 5/20/2019 11:27 PM, Bjorn Helgaas wrote:
> On Sat, May 18, 2019 at 07:28:29AM +0530, Vidya Sagar wrote:
>> On 5/18/2019 12:25 AM, Bjorn Helgaas wrote:
>>> On Fri, May 17, 2019 at 11:23:36PM +0530, Vidya Sagar wrote:
>>>> On 5/17/2019 6:54 PM, Bjorn Helgaas wrote:
>>>>> Do you have "lspci -vvxxx" output for the root ports handy?
>>>>>
>>>>> If there's some clue in the standard config space that would tell us
>>>>> that MSI works for some events but not others, we could make the PCI
>>>>> core pay attention to it.  That would be the best solution because it
>>>>> wouldn't require Tegra-specific code.
>>>>
>>>> Here is the output of 'lspci -vvxxx' for one of Tegra194's root ports.
>>>
>>> Thanks!
>>>
>>> This port advertises both MSI and MSI-X, and neither one is enabled.
>>> This particular port doesn't have a slot, so hotplug isn't applicable
>>> to it.
>>>
>>> But if I understand correctly, if MSI or MSI-X were enabled and the
>>> port had a slot, the port would generate MSI/MSI-X hotplug interrupts.
>>> But PME and AER events would still cause INTx interrupts (even with
>>> MSI or MSI-X enabled).
>>>
>>> Do I have that right?  I just want to make sure that the reason for
>>> PME being INTx is a permanent hardware choice and that it's not
>>> related to MSI and MSI-X currently being disabled.
>>
>> Yes. That's right. It's a hardware choice that our hardware engineers made to
>> use INTx for PME instead of MSI irrespective of MSI/MSI-X enabled/disabled
>> in the root port.
>
> Here are more spec references that seem applicable:
>
>   - PCIe r4.0, sec 7.7.1.2 (Message Control Register for MSI) says:
>
>       MSI Enable – If Set and the MSI-X Enable bit in the MSI-X
>       Message Control register (see Section 7.9.2) is Clear, the
>       Function is permitted to use MSI to request service and is
>       prohibited from using INTx interrupts.
>
>   - PCIe r4.0, sec 7.7.2.2 (Message Control Register for MSI-X) says:
>
>       MSI-X Enable – If Set and the MSI Enable bit in the MSI Message
>       Control register (see Section 6.8.1.3) is Clear, the Function is
>       permitted to use MSI-X to request service and is prohibited from
>       using INTx interrupts (if implemented).
>
> I read that to mean a device is prohibited from using MSI/MSI-X for
> some interrupts and INTx for others.  Since Tegra194 cannot use
> MSI/MSI-X for PME, it should use INTx for *all* interrupts.  That
> makes the MSI/MSI-X Capabilities superfluous, and they should be
> omitted.
>
> If we set pdev->no_msi for Tegra194, we'll avoid MSI/MSI-X completely,
> so we'll assume *all* interrupts including hotplug will be INTx.  Will
> that work?
Yes. We are fine with having all root port originated interrupts getting
generated through INTx instead of MSI/MSI-X.
>
diff --git a/drivers/pci/pcie/pme.c b/drivers/pci/pcie/pme.c
index 54d593d10396..d5e0ea4a62fc 100644
--- a/drivers/pci/pcie/pme.c
+++ b/drivers/pci/pcie/pme.c
@@ -25,7 +25,19 @@
  * that using MSI for PCIe PME signaling doesn't play well with PCIe PME-based
  * wake-up from system sleep states.
  */
-bool pcie_pme_msi_disabled;
+static bool pcie_pme_msi_disabled;
+
+void pcie_pme_disable_msi(void)
+{
+	pcie_pme_msi_disabled = true;
+}
+EXPORT_SYMBOL_GPL(pcie_pme_disable_msi);
+
+bool pcie_pme_no_msi(void)
+{
+	return pcie_pme_msi_disabled;
+}
+EXPORT_SYMBOL_GPL(pcie_pme_no_msi);
 
 static int __init pcie_pme_setup(char *str)
 {
diff --git a/drivers/pci/pcie/portdrv.h b/drivers/pci/pcie/portdrv.h
index 944827a8c7d3..1d441fe26c51 100644
--- a/drivers/pci/pcie/portdrv.h
+++ b/drivers/pci/pcie/portdrv.h
@@ -129,18 +129,8 @@ void pcie_port_bus_unregister(void);
 struct pci_dev;
 
 #ifdef CONFIG_PCIE_PME
-extern bool pcie_pme_msi_disabled;
-
-static inline void pcie_pme_disable_msi(void)
-{
-	pcie_pme_msi_disabled = true;
-}
-
-static inline bool pcie_pme_no_msi(void)
-{
-	return pcie_pme_msi_disabled;
-}
-
+void pcie_pme_disable_msi(void);
+bool pcie_pme_no_msi(void);
 void pcie_pme_interrupt_enable(struct pci_dev *dev, bool enable);
 #else /* !CONFIG_PCIE_PME */
 static inline void pcie_pme_disable_msi(void) {}
Export pcie_pme_disable_msi() & pcie_pme_no_msi() APIs to enable drivers
using these APIs be able to build as loadable modules.

Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
---
Changes since [v5]:
* Corrected inline implementation of pcie_pme_no_msi() API

Changes since [v4]:
* None

Changes since [v3]:
* None

Changes since [v2]:
* Exported pcie_pme_no_msi() API after making pcie_pme_msi_disabled a static

Changes since [v1]:
* This is a new patch in v2 series

 drivers/pci/pcie/pme.c     | 14 +++++++++++++-
 drivers/pci/pcie/portdrv.h | 14 ++------------
 2 files changed, 15 insertions(+), 13 deletions(-)
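For reference, a minimal sketch of how a host controller driver built as a
loadable module might consume the two helpers exported by this patch. Names
other than the exported functions are illustrative only, and the in-tree
declarations actually live in drivers/pci/pcie/portdrv.h:

/*
 * Usage sketch, assuming the exports added above.  The prototypes are
 * repeated here only to keep the example self-contained.
 */
#include <linux/module.h>
#include <linux/kernel.h>

void pcie_pme_disable_msi(void);	/* exported from drivers/pci/pcie/pme.c */
bool pcie_pme_no_msi(void);

static int __init example_host_init(void)
{
	/*
	 * Ask the PME service to use INTx rather than MSI.  Note the
	 * reviewers' objection: this flips a single global flag, not a
	 * per-root-port setting.
	 */
	pcie_pme_disable_msi();
	pr_info("PME via MSI disabled: %d\n", pcie_pme_no_msi());
	return 0;
}
module_init(example_host_init);

MODULE_LICENSE("GPL");	/* needed: the symbols are EXPORT_SYMBOL_GPL */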