Message ID: 20230327133824.29136-2-manivannan.sadhasivam@linaro.org (mailing list archive)
State: Superseded
Series: PCI: qcom: Add support for system suspend and resume
> -----Original Message----- > From: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> > Sent: Monday, March 27, 2023 8:38 AM > To: lpieralisi@kernel.org; kw@linux.com; robh@kernel.org > Cc: andersson@kernel.org; konrad.dybcio@linaro.org; > bhelgaas@google.com; linux-pci@vger.kernel.org; linux-arm- > msm@vger.kernel.org; linux-kernel@vger.kernel.org; > quic_krichai@quicinc.com; johan+linaro@kernel.org; steev@kali.org; > mka@chromium.org; Manivannan Sadhasivam > <manivannan.sadhasivam@linaro.org>; Dhruva Gole <d-gole@ti.com> > Subject: [EXT] [PATCH v3 1/1] PCI: qcom: Add support for system suspend > and resume > > Caution: EXT Email > > During the system suspend, vote for minimal interconnect bandwidth and > also turn OFF the resources like clock and PHY if there are no active > devices connected to the controller. For the controllers with active > devices, the resources are kept ON as removing the resources will > trigger access violation during the late end of suspend cycle as kernel > tries to access the config space of PCIe devices to mask the MSIs.

I remember running into a similar problem before; it is related to the ASPM settings of NVMe. NVMe tries to use L1.2 at suspend to save restore time. It should be the user's decision whether the PCIe link enters L1.2 (for better resume time) or L2 (for better power saving). If ASPM is disabled, the NVMe driver will free the MSI IRQ before entering suspend, so the MSI IRQ disable path does not access config space. This is just a general comment; it is not specific to this patch. Many platforms will face a similar problem. A better solution may be needed to handle L2/L3 for better power saving in the future.

Frank Li

> > Also, it is not desirable to put the link into L2/L3 state as that > implies VDD supply will be removed and the devices may go into powerdown > state. This will affect the lifetime of storage devices like NVMe.
> > And finally, during resume, turn ON the resources if the controller was > truly suspended (resources OFF) and update the interconnect bandwidth > based on PCIe Gen speed. > > Suggested-by: Krishna chaitanya chundru <quic_krichai@quicinc.com> > Acked-by: Dhruva Gole <d-gole@ti.com> > Signed-off-by: Manivannan Sadhasivam > <manivannan.sadhasivam@linaro.org> > --- > drivers/pci/controller/dwc/pcie-qcom.c | 62 ++++++++++++++++++++++++++ > 1 file changed, 62 insertions(+) > > diff --git a/drivers/pci/controller/dwc/pcie-qcom.c > b/drivers/pci/controller/dwc/pcie-qcom.c > index a232b04af048..f33df536d9be 100644 > --- a/drivers/pci/controller/dwc/pcie-qcom.c > +++ b/drivers/pci/controller/dwc/pcie-qcom.c > @@ -227,6 +227,7 @@ struct qcom_pcie { > struct gpio_desc *reset; > struct icc_path *icc_mem; > const struct qcom_pcie_cfg *cfg; > + bool suspended; > }; > > #define to_qcom_pcie(x) dev_get_drvdata((x)->dev) > @@ -1820,6 +1821,62 @@ static int qcom_pcie_probe(struct > platform_device *pdev) > return ret; > } > > +static int qcom_pcie_suspend_noirq(struct device *dev) > +{ > + struct qcom_pcie *pcie = dev_get_drvdata(dev); > + int ret; > + > + /* > + * Set minimum bandwidth required to keep data path functional during > + * suspend. > + */ > + ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250)); > + if (ret) { > + dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret); > + return ret; > + } > + > + /* > + * Turn OFF the resources only for controllers without active PCIe > + * devices. For controllers with active devices, the resources are kept > + * ON and the link is expected to be in L0/L1 (sub)states. > + * > + * Turning OFF the resources for controllers with active PCIe devices > + * will trigger access violation during the end of the suspend cycle, > + * as kernel tries to access the PCIe devices config space for masking > + * MSIs. 
> + * > + * Also, it is not desirable to put the link into L2/L3 state as that > + * implies VDD supply will be removed and the devices may go into > + * powerdown state. This will affect the lifetime of the storage devices > + * like NVMe. > + */ > + if (!dw_pcie_link_up(pcie->pci)) { > + qcom_pcie_host_deinit(&pcie->pci->pp); > + pcie->suspended = true; > + } > + > + return 0; > +} > + > +static int qcom_pcie_resume_noirq(struct device *dev) > +{ > + struct qcom_pcie *pcie = dev_get_drvdata(dev); > + int ret; > + > + if (pcie->suspended) { > + ret = qcom_pcie_host_init(&pcie->pci->pp); > + if (ret) > + return ret; > + > + pcie->suspended = false; > + } > + > + qcom_pcie_icc_update(pcie); > + > + return 0; > +} > + > static const struct of_device_id qcom_pcie_match[] = { > { .compatible = "qcom,pcie-apq8064", .data = &cfg_2_1_0 }, > { .compatible = "qcom,pcie-apq8084", .data = &cfg_1_0_0 }, > @@ -1856,12 +1913,17 @@ > DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0302, > qcom_fixup_class); > DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1000, > qcom_fixup_class); > DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1001, > qcom_fixup_class); > > +static const struct dev_pm_ops qcom_pcie_pm_ops = { > + NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_suspend_noirq, > qcom_pcie_resume_noirq) > +}; > + > static struct platform_driver qcom_pcie_driver = { > .probe = qcom_pcie_probe, > .driver = { > .name = "qcom-pcie", > .suppress_bind_attrs = true, > .of_match_table = qcom_pcie_match, > + .pm = &qcom_pcie_pm_ops, > }, > }; > builtin_platform_driver(qcom_pcie_driver); > -- > 2.25.1
On Mon, Mar 27, 2023 at 07:08:24PM +0530, Manivannan Sadhasivam wrote: > During the system suspend, vote for minimal interconnect bandwidth and > also turn OFF the resources like clock and PHY if there are no active > devices connected to the controller. For the controllers with active > devices, the resources are kept ON as removing the resources will > trigger access violation during the late end of suspend cycle as kernel > tries to access the config space of PCIe devices to mask the MSIs. > > Also, it is not desirable to put the link into L2/L3 state as that > implies VDD supply will be removed and the devices may go into powerdown > state. This will affect the lifetime of storage devices like NVMe. > > And finally, during resume, turn ON the resources if the controller was > truly suspended (resources OFF) and update the interconnect bandwidth > based on PCIe Gen speed. > > Suggested-by: Krishna chaitanya chundru <quic_krichai@quicinc.com> > Acked-by: Dhruva Gole <d-gole@ti.com> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> > --- > drivers/pci/controller/dwc/pcie-qcom.c | 62 ++++++++++++++++++++++++++ > 1 file changed, 62 insertions(+) > > diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c > index a232b04af048..f33df536d9be 100644 > --- a/drivers/pci/controller/dwc/pcie-qcom.c > +++ b/drivers/pci/controller/dwc/pcie-qcom.c > @@ -227,6 +227,7 @@ struct qcom_pcie { > struct gpio_desc *reset; > struct icc_path *icc_mem; > const struct qcom_pcie_cfg *cfg; > + bool suspended; > }; > > #define to_qcom_pcie(x) dev_get_drvdata((x)->dev) > @@ -1820,6 +1821,62 @@ static int qcom_pcie_probe(struct platform_device *pdev) > return ret; > } > > +static int qcom_pcie_suspend_noirq(struct device *dev) > +{ > + struct qcom_pcie *pcie = dev_get_drvdata(dev); > + int ret; > + > + /* > + * Set minimum bandwidth required to keep data path functional during > + * suspend. 
> + */ > + ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250)); This isn't really the minimum bandwidth you're setting here. I think you said off list that you didn't see real impact reducing the bandwidth, but have you tried requesting the real minimum which would be kBps_to_icc(1)? Doing so works fine here with both the CRD and X13s and may result in some further power savings. > + if (ret) { > + dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret); > + return ret; > + } Johan
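The size of the gap Johan is pointing at is easier to see in the interconnect framework's units: icc bandwidth arguments are counted in kBps, so the patch's suspend vote and the proposed floor differ by a factor of 250,000. A minimal sketch (the two vote functions are hypothetical names for illustration; the unit macros mirror the helpers in include/linux/interconnect.h):

```c
/* The icc framework counts bandwidth in kBps; these mirror the unit
 * helpers from include/linux/interconnect.h. */
#define kBps_to_icc(x)	(x)
#define MBps_to_icc(x)	((x) * 1000)

/* Peak-bandwidth vote the patch requests during suspend: 250 MBps,
 * i.e. one lane of Gen1 throughput. */
static unsigned int suspend_vote_patch(void)
{
	return MBps_to_icc(250);
}

/* The true framework minimum Johan suggests trying instead: 1 kBps. */
static unsigned int suspend_vote_floor(void)
{
	return kBps_to_icc(1);
}
```

Whether the hardware actually scales power with the smaller vote depends on how the platform's interconnect driver maps these values, which is the open question in the rest of the thread.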
On Wed, Mar 29, 2023 at 11:56:43AM +0200, Johan Hovold wrote: > On Mon, Mar 27, 2023 at 07:08:24PM +0530, Manivannan Sadhasivam wrote: > > During the system suspend, vote for minimal interconnect bandwidth and > > also turn OFF the resources like clock and PHY if there are no active > > devices connected to the controller. For the controllers with active > > devices, the resources are kept ON as removing the resources will > > trigger access violation during the late end of suspend cycle as kernel > > tries to access the config space of PCIe devices to mask the MSIs. > > > > Also, it is not desirable to put the link into L2/L3 state as that > > implies VDD supply will be removed and the devices may go into powerdown > > state. This will affect the lifetime of storage devices like NVMe. > > > > And finally, during resume, turn ON the resources if the controller was > > truly suspended (resources OFF) and update the interconnect bandwidth > > based on PCIe Gen speed. > > > > Suggested-by: Krishna chaitanya chundru <quic_krichai@quicinc.com> > > Acked-by: Dhruva Gole <d-gole@ti.com> > > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> > > --- > > drivers/pci/controller/dwc/pcie-qcom.c | 62 ++++++++++++++++++++++++++ > > 1 file changed, 62 insertions(+) > > > > diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c > > index a232b04af048..f33df536d9be 100644 > > --- a/drivers/pci/controller/dwc/pcie-qcom.c > > +++ b/drivers/pci/controller/dwc/pcie-qcom.c > > @@ -227,6 +227,7 @@ struct qcom_pcie { > > struct gpio_desc *reset; > > struct icc_path *icc_mem; > > const struct qcom_pcie_cfg *cfg; > > + bool suspended; > > }; > > > > #define to_qcom_pcie(x) dev_get_drvdata((x)->dev) > > @@ -1820,6 +1821,62 @@ static int qcom_pcie_probe(struct platform_device *pdev) > > return ret; > > } > > > > +static int qcom_pcie_suspend_noirq(struct device *dev) > > +{ > > + struct qcom_pcie *pcie = dev_get_drvdata(dev); > > + 
int ret; > > + > > + /* > > + * Set minimum bandwidth required to keep data path functional during > > + * suspend. > > + */ > > + ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250)); > > This isn't really the minimum bandwidth you're setting here. > > I think you said off list that you didn't see real impact reducing the > bandwidth, but have you tried requesting the real minimum which would be > kBps_to_icc(1)? > > Doing so works fine here with both the CRD and X13s and may result in > some further power savings. >

No, we shouldn't be setting a random value as the bandwidth. The reason is that these values are computed by the bus team based on the requirements of the interconnect paths (clock, voltage, etc.) at actual PCIe Gen speeds. I don't know about the potential implications even if it happens to work.

- Mani

> > + if (ret) { > > + dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret); > > + return ret; > > + } > > Johan
On 29.03.2023 14:52, Manivannan Sadhasivam wrote: > On Wed, Mar 29, 2023 at 11:56:43AM +0200, Johan Hovold wrote: >> On Mon, Mar 27, 2023 at 07:08:24PM +0530, Manivannan Sadhasivam wrote: >>> During the system suspend, vote for minimal interconnect bandwidth and >>> also turn OFF the resources like clock and PHY if there are no active >>> devices connected to the controller. For the controllers with active >>> devices, the resources are kept ON as removing the resources will >>> trigger access violation during the late end of suspend cycle as kernel >>> tries to access the config space of PCIe devices to mask the MSIs. >>> >>> Also, it is not desirable to put the link into L2/L3 state as that >>> implies VDD supply will be removed and the devices may go into powerdown >>> state. This will affect the lifetime of storage devices like NVMe. >>> >>> And finally, during resume, turn ON the resources if the controller was >>> truly suspended (resources OFF) and update the interconnect bandwidth >>> based on PCIe Gen speed. 
>>> >>> Suggested-by: Krishna chaitanya chundru <quic_krichai@quicinc.com> >>> Acked-by: Dhruva Gole <d-gole@ti.com> >>> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> >>> --- >>> drivers/pci/controller/dwc/pcie-qcom.c | 62 ++++++++++++++++++++++++++ >>> 1 file changed, 62 insertions(+) >>> >>> diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c >>> index a232b04af048..f33df536d9be 100644 >>> --- a/drivers/pci/controller/dwc/pcie-qcom.c >>> +++ b/drivers/pci/controller/dwc/pcie-qcom.c >>> @@ -227,6 +227,7 @@ struct qcom_pcie { >>> struct gpio_desc *reset; >>> struct icc_path *icc_mem; >>> const struct qcom_pcie_cfg *cfg; >>> + bool suspended; >>> }; >>> >>> #define to_qcom_pcie(x) dev_get_drvdata((x)->dev) >>> @@ -1820,6 +1821,62 @@ static int qcom_pcie_probe(struct platform_device *pdev) >>> return ret; >>> } >>> >>> +static int qcom_pcie_suspend_noirq(struct device *dev) >>> +{ >>> + struct qcom_pcie *pcie = dev_get_drvdata(dev); >>> + int ret; >>> + >>> + /* >>> + * Set minimum bandwidth required to keep data path functional during >>> + * suspend. >>> + */ >>> + ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250)); >> >> This isn't really the minimum bandwidth you're setting here. >> >> I think you said off list that you didn't see real impact reducing the >> bandwidth, but have you tried requesting the real minimum which would be >> kBps_to_icc(1)? >> >> Doing so works fine here with both the CRD and X13s and may result in >> some further power savings. >> > > No, we shouldn't be setting random value as the bandwidth. Reason is, these > values are computed by the bus team based on the requirement of the interconnect > paths (clock, voltage etc...) with actual PCIe Gen speeds.

Should it then be variable, based on the current link gen?

Konrad

> I don't know about the potential implication even if it happens to work.
> > - Mani > >>> + if (ret) { >>> + dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret); >>> + return ret; >>> + } >> >> Johan >
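Konrad's suggestion of a gen-dependent vote is essentially what the commit message already describes `qcom_pcie_icc_update()` doing on resume: scale the request by the negotiated link speed and width. A userspace sketch of that computation, using the standard PCIe per-lane effective data rates (the helper name is hypothetical, and the exact figures the driver uses are a Qualcomm bus-team decision, not derivable from the thread):

```c
/* Effective per-lane throughput in MB/s for each PCIe generation,
 * accounting for 8b/10b (Gen1/2) and 128b/130b (Gen3/4) encoding. */
static unsigned int pcie_link_bw_mbps(unsigned int gen, unsigned int width)
{
	static const unsigned int per_lane_mbps[] = {
		[1] = 250,  /* Gen1: 2.5 GT/s  */
		[2] = 500,  /* Gen2: 5.0 GT/s  */
		[3] = 985,  /* Gen3: 8.0 GT/s  */
		[4] = 1969, /* Gen4: 16 GT/s   */
	};

	if (gen < 1 || gen > 4)
		return 0; /* unknown speed: caller must pick a fallback */

	return per_lane_mbps[gen] * width;
}
```

In the kernel the `gen` and `width` inputs would come from reading PCI_EXP_LNKSTA; the question in the thread is whether a suspend-time vote should use this full link bandwidth at all, given that suspend only needs a few register accesses.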
On Mon, Mar 27, 2023 at 03:29:54PM +0000, Frank Li wrote: > > > > -----Original Message----- > > From: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> > > Sent: Monday, March 27, 2023 8:38 AM > > To: lpieralisi@kernel.org; kw@linux.com; robh@kernel.org > > Cc: andersson@kernel.org; konrad.dybcio@linaro.org; > > bhelgaas@google.com; linux-pci@vger.kernel.org; linux-arm- > > msm@vger.kernel.org; linux-kernel@vger.kernel.org; > > quic_krichai@quicinc.com; johan+linaro@kernel.org; steev@kali.org; > > mka@chromium.org; Manivannan Sadhasivam > > <manivannan.sadhasivam@linaro.org>; Dhruva Gole <d-gole@ti.com> > > Subject: [EXT] [PATCH v3 1/1] PCI: qcom: Add support for system suspend > > and resume > > > > Caution: EXT Email > > > > During the system suspend, vote for minimal interconnect bandwidth and > > also turn OFF the resources like clock and PHY if there are no active > > devices connected to the controller. For the controllers with active > > devices, the resources are kept ON as removing the resources will > > trigger access violation during the late end of suspend cycle as kernel > > tries to access the config space of PCIe devices to mask the MSIs. > > I remember I met similar problem before. It is relate ASPM settings of NVME. > NVME try to use L1.2 at suspend to save restore time. > > It should be user decided if PCI enter L1.2( for better resume time) or L2 > For batter power saving. If NVME disable ASPM, NVME driver will free > Msi irq before enter suspend, so not issue access config space by MSI > Irq disable function. > The NVMe driver will only shutdown the device if ASPM is completely disabled in the kernel. They also take powerdown path for some Intel platforms though. For others, they keep the device in power on state and expect power saving with ASPM. > This is just general comment. It is not specific for this patches. Many platform > Will face the similar problem. 
> Maybe need better solution to handle L2/L3 for better power saving in future.

The only argument I hear from them is that, when the NVMe device gets powered down during suspend, it may deteriorate its lifetime, as the number of suspend cycles is going to be high.

- Mani

> Frank Li > > > > Also, it is not desirable to put the link into L2/L3 state as that > > implies VDD supply will be removed and the devices may go into powerdown > > state. This will affect the lifetime of storage devices like NVMe. > > > > And finally, during resume, turn ON the resources if the controller was > > truly suspended (resources OFF) and update the interconnect bandwidth > > based on PCIe Gen speed. > > > > Suggested-by: Krishna chaitanya chundru <quic_krichai@quicinc.com> > > Acked-by: Dhruva Gole <d-gole@ti.com> > > Signed-off-by: Manivannan Sadhasivam > > <manivannan.sadhasivam@linaro.org> > > --- > > drivers/pci/controller/dwc/pcie-qcom.c | 62 ++++++++++++++++++++++++++ > > 1 file changed, 62 insertions(+) > > > > diff --git a/drivers/pci/controller/dwc/pcie-qcom.c > > b/drivers/pci/controller/dwc/pcie-qcom.c > > index a232b04af048..f33df536d9be 100644 > > --- a/drivers/pci/controller/dwc/pcie-qcom.c > > +++ b/drivers/pci/controller/dwc/pcie-qcom.c > > @@ -227,6 +227,7 @@ struct qcom_pcie { > > struct gpio_desc *reset; > > struct icc_path *icc_mem; > > const struct qcom_pcie_cfg *cfg; > > + bool suspended; > > }; > > > > #define to_qcom_pcie(x) dev_get_drvdata((x)->dev) > > @@ -1820,6 +1821,62 @@ static int qcom_pcie_probe(struct > > platform_device *pdev) > > return ret; > > } > > > > +static int qcom_pcie_suspend_noirq(struct device *dev) > > +{ > > + struct qcom_pcie *pcie = dev_get_drvdata(dev); > > + int ret; > > + > > + /* > > + * Set minimum bandwidth required to keep data path functional during > > + * suspend.
> > + */ > > + ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250)); > > + if (ret) { > > + dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret); > > + return ret; > > + } > > + > > + /* > > + * Turn OFF the resources only for controllers without active PCIe > > + * devices. For controllers with active devices, the resources are kept > > + * ON and the link is expected to be in L0/L1 (sub)states. > > + * > > + * Turning OFF the resources for controllers with active PCIe devices > > + * will trigger access violation during the end of the suspend cycle, > > + * as kernel tries to access the PCIe devices config space for masking > > + * MSIs. > > + * > > + * Also, it is not desirable to put the link into L2/L3 state as that > > + * implies VDD supply will be removed and the devices may go into > > + * powerdown state. This will affect the lifetime of the storage devices > > + * like NVMe. > > + */ > > + if (!dw_pcie_link_up(pcie->pci)) { > > + qcom_pcie_host_deinit(&pcie->pci->pp); > > + pcie->suspended = true; > > + } > > + > > + return 0; > > +} > > + > > +static int qcom_pcie_resume_noirq(struct device *dev) > > +{ > > + struct qcom_pcie *pcie = dev_get_drvdata(dev); > > + int ret; > > + > > + if (pcie->suspended) { > > + ret = qcom_pcie_host_init(&pcie->pci->pp); > > + if (ret) > > + return ret; > > + > > + pcie->suspended = false; > > + } > > + > > + qcom_pcie_icc_update(pcie); > > + > > + return 0; > > +} > > + > > static const struct of_device_id qcom_pcie_match[] = { > > { .compatible = "qcom,pcie-apq8064", .data = &cfg_2_1_0 }, > > { .compatible = "qcom,pcie-apq8084", .data = &cfg_1_0_0 }, > > @@ -1856,12 +1913,17 @@ > > DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0302, > > qcom_fixup_class); > > DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1000, > > qcom_fixup_class); > > DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1001, > > qcom_fixup_class); > > > > +static const struct dev_pm_ops qcom_pcie_pm_ops = { > > + 
NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_suspend_noirq, > > qcom_pcie_resume_noirq) > > +}; > > + > > static struct platform_driver qcom_pcie_driver = { > > .probe = qcom_pcie_probe, > > .driver = { > > .name = "qcom-pcie", > > .suppress_bind_attrs = true, > > .of_match_table = qcom_pcie_match, > > + .pm = &qcom_pcie_pm_ops, > > }, > > }; > > builtin_platform_driver(qcom_pcie_driver); > > -- > > 2.25.1 >
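Mani's description of the nvme-pci suspend policy can be summarized as a decision function. This is a paraphrase of the behavior described above (device stays powered and relies on ASPM unless ASPM is unavailable or a platform quirk forces a full shutdown), not the exact upstream condition set; the function name and parameters are illustrative:

```c
#include <stdbool.h>

/* Rough model of nvme-pci's suspend-time choice as described in the
 * thread: return true when the controller should be fully shut down
 * (after which the link may go to L2/L3), false when it should stay
 * powered and rely on ASPM L1.x for idle power savings. */
static bool nvme_takes_powerdown_path(bool aspm_enabled,
				      bool platform_quirk,
				      bool has_npss)
{
	if (!aspm_enabled)   /* ASPM disabled: MSIs freed, device shut down */
		return true;
	if (platform_quirk)  /* e.g. the Intel platforms Mani mentions */
		return true;
	if (!has_npss)       /* no NVMe power states to park the device in */
		return true;
	return false;        /* stay powered; ASPM handles idle power */
}
```

Under this model, the controller driver's "keep resources ON when the link is up" behavior is exactly what the stay-powered branch depends on.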
On 29.03.2023 15:02, Manivannan Sadhasivam wrote: > On Mon, Mar 27, 2023 at 03:29:54PM +0000, Frank Li wrote: >> >> >>> -----Original Message----- >>> From: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> >>> Sent: Monday, March 27, 2023 8:38 AM >>> To: lpieralisi@kernel.org; kw@linux.com; robh@kernel.org >>> Cc: andersson@kernel.org; konrad.dybcio@linaro.org; >>> bhelgaas@google.com; linux-pci@vger.kernel.org; linux-arm- >>> msm@vger.kernel.org; linux-kernel@vger.kernel.org; >>> quic_krichai@quicinc.com; johan+linaro@kernel.org; steev@kali.org; >>> mka@chromium.org; Manivannan Sadhasivam >>> <manivannan.sadhasivam@linaro.org>; Dhruva Gole <d-gole@ti.com> >>> Subject: [EXT] [PATCH v3 1/1] PCI: qcom: Add support for system suspend >>> and resume >>> >>> Caution: EXT Email >>> >>> During the system suspend, vote for minimal interconnect bandwidth and >>> also turn OFF the resources like clock and PHY if there are no active >>> devices connected to the controller. For the controllers with active >>> devices, the resources are kept ON as removing the resources will >>> trigger access violation during the late end of suspend cycle as kernel >>> tries to access the config space of PCIe devices to mask the MSIs. >> >> I remember I met similar problem before. It is relate ASPM settings of NVME. >> NVME try to use L1.2 at suspend to save restore time. >> >> It should be user decided if PCI enter L1.2( for better resume time) or L2 >> For batter power saving. If NVME disable ASPM, NVME driver will free >> Msi irq before enter suspend, so not issue access config space by MSI >> Irq disable function. >> > > The NVMe driver will only shutdown the device if ASPM is completely disabled in > the kernel. They also take powerdown path for some Intel platforms though. For > others, they keep the device in power on state and expect power saving with > ASPM. > >> This is just general comment. It is not specific for this patches. Many platform >> Will face the similar problem. 
Maybe need better solution to handle >> L2/L3 for better power saving in future. >> > > The only argument I hear from them is that, when the NVMe device gets powered > down during suspend, then it may detoriate the life time of it as the suspend > cycle is going to be high. I think I asked that question before, but.. Do we know what Windows/macOS do? Konrad > > - Mani > >> Frank Li >> >>> >>> Also, it is not desirable to put the link into L2/L3 state as that >>> implies VDD supply will be removed and the devices may go into powerdown >>> state. This will affect the lifetime of storage devices like NVMe. >>> >>> And finally, during resume, turn ON the resources if the controller was >>> truly suspended (resources OFF) and update the interconnect bandwidth >>> based on PCIe Gen speed. >>> >>> Suggested-by: Krishna chaitanya chundru <quic_krichai@quicinc.com> >>> Acked-by: Dhruva Gole <d-gole@ti.com> >>> Signed-off-by: Manivannan Sadhasivam >>> <manivannan.sadhasivam@linaro.org> >>> --- >>> drivers/pci/controller/dwc/pcie-qcom.c | 62 ++++++++++++++++++++++++++ >>> 1 file changed, 62 insertions(+) >>> >>> diff --git a/drivers/pci/controller/dwc/pcie-qcom.c >>> b/drivers/pci/controller/dwc/pcie-qcom.c >>> index a232b04af048..f33df536d9be 100644 >>> --- a/drivers/pci/controller/dwc/pcie-qcom.c >>> +++ b/drivers/pci/controller/dwc/pcie-qcom.c >>> @@ -227,6 +227,7 @@ struct qcom_pcie { >>> struct gpio_desc *reset; >>> struct icc_path *icc_mem; >>> const struct qcom_pcie_cfg *cfg; >>> + bool suspended; >>> }; >>> >>> #define to_qcom_pcie(x) dev_get_drvdata((x)->dev) >>> @@ -1820,6 +1821,62 @@ static int qcom_pcie_probe(struct >>> platform_device *pdev) >>> return ret; >>> } >>> >>> +static int qcom_pcie_suspend_noirq(struct device *dev) >>> +{ >>> + struct qcom_pcie *pcie = dev_get_drvdata(dev); >>> + int ret; >>> + >>> + /* >>> + * Set minimum bandwidth required to keep data path functional during >>> + * suspend. 
>>> + */ >>> + ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250)); >>> + if (ret) { >>> + dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret); >>> + return ret; >>> + } >>> + >>> + /* >>> + * Turn OFF the resources only for controllers without active PCIe >>> + * devices. For controllers with active devices, the resources are kept >>> + * ON and the link is expected to be in L0/L1 (sub)states. >>> + * >>> + * Turning OFF the resources for controllers with active PCIe devices >>> + * will trigger access violation during the end of the suspend cycle, >>> + * as kernel tries to access the PCIe devices config space for masking >>> + * MSIs. >>> + * >>> + * Also, it is not desirable to put the link into L2/L3 state as that >>> + * implies VDD supply will be removed and the devices may go into >>> + * powerdown state. This will affect the lifetime of the storage devices >>> + * like NVMe. >>> + */ >>> + if (!dw_pcie_link_up(pcie->pci)) { >>> + qcom_pcie_host_deinit(&pcie->pci->pp); >>> + pcie->suspended = true; >>> + } >>> + >>> + return 0; >>> +} >>> + >>> +static int qcom_pcie_resume_noirq(struct device *dev) >>> +{ >>> + struct qcom_pcie *pcie = dev_get_drvdata(dev); >>> + int ret; >>> + >>> + if (pcie->suspended) { >>> + ret = qcom_pcie_host_init(&pcie->pci->pp); >>> + if (ret) >>> + return ret; >>> + >>> + pcie->suspended = false; >>> + } >>> + >>> + qcom_pcie_icc_update(pcie); >>> + >>> + return 0; >>> +} >>> + >>> static const struct of_device_id qcom_pcie_match[] = { >>> { .compatible = "qcom,pcie-apq8064", .data = &cfg_2_1_0 }, >>> { .compatible = "qcom,pcie-apq8084", .data = &cfg_1_0_0 }, >>> @@ -1856,12 +1913,17 @@ >>> DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0302, >>> qcom_fixup_class); >>> DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1000, >>> qcom_fixup_class); >>> DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1001, >>> qcom_fixup_class); >>> >>> +static const struct dev_pm_ops qcom_pcie_pm_ops = { >>> + 
NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_suspend_noirq, >>> qcom_pcie_resume_noirq) >>> +}; >>> + >>> static struct platform_driver qcom_pcie_driver = { >>> .probe = qcom_pcie_probe, >>> .driver = { >>> .name = "qcom-pcie", >>> .suppress_bind_attrs = true, >>> .of_match_table = qcom_pcie_match, >>> + .pm = &qcom_pcie_pm_ops, >>> }, >>> }; >>> builtin_platform_driver(qcom_pcie_driver); >>> -- >>> 2.25.1 >> >
On Wed, Mar 29, 2023 at 06:22:32PM +0530, Manivannan Sadhasivam wrote: > On Wed, Mar 29, 2023 at 11:56:43AM +0200, Johan Hovold wrote: > > On Mon, Mar 27, 2023 at 07:08:24PM +0530, Manivannan Sadhasivam wrote: > > > +static int qcom_pcie_suspend_noirq(struct device *dev) > > > +{ > > > + struct qcom_pcie *pcie = dev_get_drvdata(dev); > > > + int ret; > > > + > > > + /* > > > + * Set minimum bandwidth required to keep data path functional during > > > + * suspend. > > > + */ > > > + ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250)); > > > > This isn't really the minimum bandwidth you're setting here. > > > > I think you said off list that you didn't see real impact reducing the > > bandwidth, but have you tried requesting the real minimum which would be > > kBps_to_icc(1)? > > > > Doing so works fine here with both the CRD and X13s and may result in > > some further power savings. > > > > No, we shouldn't be setting random value as the bandwidth. Reason is, these > values are computed by the bus team based on the requirement of the interconnect > paths (clock, voltage etc...) with actual PCIe Gen speeds. I don't know about > the potential implication even if it happens to work. Why would you need PCIe gen1 speed during suspend? These numbers are already somewhat random as, for example, the vendor driver is requesting 500 kBps (800 peak) during runtime, while we are now requesting five times that during suspend (the vendor driver gets a away with 0). Sure, this indicates that the interconnect driver is broken and we should indeed be using values that at least makes some sense (and eventually fix the interconnect driver). Just not sure that you need to request that much bandwidth during suspend (e.g. for just a couple of register accesses). Johan
On Wed, Mar 29, 2023 at 03:19:51PM +0200, Johan Hovold wrote: > On Wed, Mar 29, 2023 at 06:22:32PM +0530, Manivannan Sadhasivam wrote: > > On Wed, Mar 29, 2023 at 11:56:43AM +0200, Johan Hovold wrote: > > > On Mon, Mar 27, 2023 at 07:08:24PM +0530, Manivannan Sadhasivam wrote: > > > > > +static int qcom_pcie_suspend_noirq(struct device *dev) > > > > +{ > > > > + struct qcom_pcie *pcie = dev_get_drvdata(dev); > > > > + int ret; > > > > + > > > > + /* > > > > + * Set minimum bandwidth required to keep data path functional during > > > > + * suspend. > > > > + */ > > > > + ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250)); > > > > > > This isn't really the minimum bandwidth you're setting here. > > > > > > I think you said off list that you didn't see real impact reducing the > > > bandwidth, but have you tried requesting the real minimum which would be > > > kBps_to_icc(1)? > > > > > > Doing so works fine here with both the CRD and X13s and may result in > > > some further power savings. > > > > > > > No, we shouldn't be setting random value as the bandwidth. Reason is, these > > values are computed by the bus team based on the requirement of the interconnect > > paths (clock, voltage etc...) with actual PCIe Gen speeds. I don't know about > > the potential implication even if it happens to work. > > Why would you need PCIe gen1 speed during suspend? > That's what the suggestion I got from Qcom PCIe team. But I didn't compare the value you added during icc support patch with downstream. More below... > These numbers are already somewhat random as, for example, the vendor > driver is requesting 500 kBps (800 peak) during runtime, while we are > now requesting five times that during suspend (the vendor driver gets a > away with 0). > Hmm, then I should've asked you this question when you added icc support. I thought you inherited those values from downstream but apparently not. Even in downstream they are using different bw votes for different platforms. 
I will touch base with the PCIe and ICC teams to find out the actual value that needs to be used.

Regarding the 0 icc vote: downstream puts all the devices in the D3cold (poweroff) state during suspend, so for them a 0 icc vote will work, but not for us, as we need to keep the device and link intact.

- Mani

> Sure, this indicates that the interconnect driver is broken and we > should indeed be using values that at least makes some sense (and > eventually fix the interconnect driver). > > Just not sure that you need to request that much bandwidth during > suspend (e.g. for just a couple of register accesses). > > Johan
On Wed, Mar 29, 2023 at 07:31:50PM +0530, Manivannan Sadhasivam wrote: > On Wed, Mar 29, 2023 at 03:19:51PM +0200, Johan Hovold wrote: > > On Wed, Mar 29, 2023 at 06:22:32PM +0530, Manivannan Sadhasivam wrote: > > Why would you need PCIe gen1 speed during suspend? > > That's what the suggestion I got from Qcom PCIe team. But I didn't compare the > value you added during icc support patch with downstream. More below... > > > These numbers are already somewhat random as, for example, the vendor > > driver is requesting 500 kBps (800 peak) during runtime, while we are > > now requesting five times that during suspend (the vendor driver gets a > > away with 0). > > Hmm, then I should've asked you this question when you added icc support. > I thought you inherited those values from downstream but apparently not. > Even in downstream they are using different bw votes for different platforms. > I will touch base with PCIe and ICC teams to find out the actual value that > needs to be used.

We discussed things at length at the time, but perhaps it was before you joined the project.

As I alluded to above, we should not play the game of using arbitrary numbers but instead fix the interconnect driver so that it can map the interconnect values in kBps to something that makes sense for the Qualcomm hardware. Anything else is not acceptable for upstream.

Johan
> -----Original Message-----
> From: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> Sent: Wednesday, March 29, 2023 8:03 AM
> To: Frank Li <frank.li@nxp.com>
> Cc: lpieralisi@kernel.org; kw@linux.com; robh@kernel.org;
> andersson@kernel.org; konrad.dybcio@linaro.org; bhelgaas@google.com;
> linux-pci@vger.kernel.org; linux-arm-msm@vger.kernel.org;
> linux-kernel@vger.kernel.org; quic_krichai@quicinc.com;
> johan+linaro@kernel.org; steev@kali.org; mka@chromium.org;
> Dhruva Gole <d-gole@ti.com>
> Subject: Re: [EXT] [PATCH v3 1/1] PCI: qcom: Add support for system suspend
> and resume
>
> Caution: EXT Email
>
> On Mon, Mar 27, 2023 at 03:29:54PM +0000, Frank Li wrote:
> >
> > > -----Original Message-----
> > > From: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> > > Sent: Monday, March 27, 2023 8:38 AM
> > > To: lpieralisi@kernel.org; kw@linux.com; robh@kernel.org
> > > Cc: andersson@kernel.org; konrad.dybcio@linaro.org;
> > > bhelgaas@google.com; linux-pci@vger.kernel.org;
> > > linux-arm-msm@vger.kernel.org; linux-kernel@vger.kernel.org;
> > > quic_krichai@quicinc.com; johan+linaro@kernel.org; steev@kali.org;
> > > mka@chromium.org; Manivannan Sadhasivam
> > > <manivannan.sadhasivam@linaro.org>; Dhruva Gole <d-gole@ti.com>
> > > Subject: [EXT] [PATCH v3 1/1] PCI: qcom: Add support for system suspend
> > > and resume
> > >
> > > Caution: EXT Email
> > >
> > > During the system suspend, vote for minimal interconnect bandwidth and
> > > also turn OFF the resources like clock and PHY if there are no active
> > > devices connected to the controller. For the controllers with active
> > > devices, the resources are kept ON as removing the resources will
> > > trigger access violation during the late end of suspend cycle as kernel
> > > tries to access the config space of PCIe devices to mask the MSIs.
> >
> > I remember running into a similar problem before. It is related to the
> > ASPM settings of NVMe: the NVMe driver tries to use L1.2 at suspend to
> > save restore time.
> >
> > It should be the user's decision whether PCIe enters L1.2 (for better
> > resume time) or L2 (for better power saving). If ASPM is disabled, the
> > NVMe driver will free the MSI IRQ before entering suspend, so the MSI
> > disable path does not access config space.
>
> The NVMe driver will only shut down the device if ASPM is completely
> disabled in the kernel. They also take the powerdown path for some Intel
> platforms, though. For the others, they keep the device in power-on state
> and expect power saving with ASPM.

It appears that not every device is compatible with L1.2 ASPM. The PCI
controller driver should manage this situation by transitioning devices to
L2/L3 when the system is suspended. However, I am unsure of the appropriate
method for handling this case.

> > This is just a general comment; it is not specific to this patch. Many
> > platforms will face a similar problem. Maybe a better solution is needed
> > to handle L2/L3 for better power saving in the future.
>
> The only argument I hear from them is that, when the NVMe device gets
> powered down during suspend, it may deteriorate its lifetime as the number
> of suspend cycles is going to be high.
>
> - Mani
>
> > Frank Li
> >
> > > Also, it is not desirable to put the link into L2/L3 state as that
> > > implies VDD supply will be removed and the devices may go into powerdown
> > > state. This will affect the lifetime of storage devices like NVMe.
> > >
> > > And finally, during resume, turn ON the resources if the controller was
> > > truly suspended (resources OFF) and update the interconnect bandwidth
> > > based on PCIe Gen speed.
> > >
> > > Suggested-by: Krishna chaitanya chundru <quic_krichai@quicinc.com>
> > > Acked-by: Dhruva Gole <d-gole@ti.com>
> > > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> > > ---
> > >  drivers/pci/controller/dwc/pcie-qcom.c | 62 ++++++++++++++++++++++++++
> > >  1 file changed, 62 insertions(+)
> > >
> > > diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
> > > index a232b04af048..f33df536d9be 100644
> > > --- a/drivers/pci/controller/dwc/pcie-qcom.c
> > > +++ b/drivers/pci/controller/dwc/pcie-qcom.c
> > > @@ -227,6 +227,7 @@ struct qcom_pcie {
> > >  	struct gpio_desc *reset;
> > >  	struct icc_path *icc_mem;
> > >  	const struct qcom_pcie_cfg *cfg;
> > > +	bool suspended;
> > >  };
> > >
> > >  #define to_qcom_pcie(x)	dev_get_drvdata((x)->dev)
> > >
> > > @@ -1820,6 +1821,62 @@ static int qcom_pcie_probe(struct platform_device *pdev)
> > >  	return ret;
> > >  }
> > >
> > > +static int qcom_pcie_suspend_noirq(struct device *dev)
> > > +{
> > > +	struct qcom_pcie *pcie = dev_get_drvdata(dev);
> > > +	int ret;
> > > +
> > > +	/*
> > > +	 * Set minimum bandwidth required to keep data path functional during
> > > +	 * suspend.
> > > +	 */
> > > +	ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
> > > +	if (ret) {
> > > +		dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
> > > +		return ret;
> > > +	}
> > > +
> > > +	/*
> > > +	 * Turn OFF the resources only for controllers without active PCIe
> > > +	 * devices. For controllers with active devices, the resources are kept
> > > +	 * ON and the link is expected to be in L0/L1 (sub)states.
> > > +	 *
> > > +	 * Turning OFF the resources for controllers with active PCIe devices
> > > +	 * will trigger access violation during the end of the suspend cycle,
> > > +	 * as kernel tries to access the PCIe devices config space for masking
> > > +	 * MSIs.
> > > +	 *
> > > +	 * Also, it is not desirable to put the link into L2/L3 state as that
> > > +	 * implies VDD supply will be removed and the devices may go into
> > > +	 * powerdown state. This will affect the lifetime of the storage devices
> > > +	 * like NVMe.
> > > +	 */
> > > +	if (!dw_pcie_link_up(pcie->pci)) {
> > > +		qcom_pcie_host_deinit(&pcie->pci->pp);
> > > +		pcie->suspended = true;
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static int qcom_pcie_resume_noirq(struct device *dev)
> > > +{
> > > +	struct qcom_pcie *pcie = dev_get_drvdata(dev);
> > > +	int ret;
> > > +
> > > +	if (pcie->suspended) {
> > > +		ret = qcom_pcie_host_init(&pcie->pci->pp);
> > > +		if (ret)
> > > +			return ret;
> > > +
> > > +		pcie->suspended = false;
> > > +	}
> > > +
> > > +	qcom_pcie_icc_update(pcie);
> > > +
> > > +	return 0;
> > > +}
> > > +
> > >  static const struct of_device_id qcom_pcie_match[] = {
> > >  	{ .compatible = "qcom,pcie-apq8064", .data = &cfg_2_1_0 },
> > >  	{ .compatible = "qcom,pcie-apq8084", .data = &cfg_1_0_0 },
> > > @@ -1856,12 +1913,17 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0302, qcom_fixup_class);
> > >  DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1000, qcom_fixup_class);
> > >  DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1001, qcom_fixup_class);
> > >
> > > +static const struct dev_pm_ops qcom_pcie_pm_ops = {
> > > +	NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_suspend_noirq, qcom_pcie_resume_noirq)
> > > +};
> > > +
> > >  static struct platform_driver qcom_pcie_driver = {
> > >  	.probe = qcom_pcie_probe,
> > >  	.driver = {
> > >  		.name = "qcom-pcie",
> > >  		.suppress_bind_attrs = true,
> > >  		.of_match_table = qcom_pcie_match,
> > > +		.pm = &qcom_pcie_pm_ops,
> > >  	},
> > > };
> > > builtin_platform_driver(qcom_pcie_driver);
> > > --
> > > 2.25.1
> >
> --
> மணிவண்ணன் சதாசிவம்
On Wed, Mar 29, 2023 at 04:42:23PM +0200, Johan Hovold wrote:
> On Wed, Mar 29, 2023 at 07:31:50PM +0530, Manivannan Sadhasivam wrote:
> > On Wed, Mar 29, 2023 at 03:19:51PM +0200, Johan Hovold wrote:
> > > On Wed, Mar 29, 2023 at 06:22:32PM +0530, Manivannan Sadhasivam wrote:
> > > Why would you need PCIe gen1 speed during suspend?
> >
> > That's the suggestion I got from the Qcom PCIe team. But I didn't compare
> > the value you added in the icc support patch with downstream. More below...
> >
> > > These numbers are already somewhat random as, for example, the vendor
> > > driver is requesting 500 kBps (800 peak) during runtime, while we are
> > > now requesting five times that during suspend (the vendor driver gets
> > > away with 0).
> >
> > Hmm, then I should've asked you this question when you added icc support.
> > I thought you inherited those values from downstream, but apparently not.
> > Even downstream they are using different bw votes for different platforms.
> > I will touch base with the PCIe and ICC teams to find out the actual value
> > that needs to be used.
>
> We discussed things at length at the time, but perhaps it was before you
> joined the project.

Yeah, could be.

> As I alluded to above, we should not play the game of using arbitrary
> numbers but instead fix the interconnect driver so that it can map the
> interconnect values in kBps to something that makes sense for the
> Qualcomm hardware. Anything else is not acceptable for upstream.

Agree. I've started the discussion regarding this and will get back once I
have answers.

- Mani

> Johan
diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
index a232b04af048..f33df536d9be 100644
--- a/drivers/pci/controller/dwc/pcie-qcom.c
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
@@ -227,6 +227,7 @@ struct qcom_pcie {
 	struct gpio_desc *reset;
 	struct icc_path *icc_mem;
 	const struct qcom_pcie_cfg *cfg;
+	bool suspended;
 };
 
 #define to_qcom_pcie(x)	dev_get_drvdata((x)->dev)
 
@@ -1820,6 +1821,62 @@ static int qcom_pcie_probe(struct platform_device *pdev)
 	return ret;
 }
 
+static int qcom_pcie_suspend_noirq(struct device *dev)
+{
+	struct qcom_pcie *pcie = dev_get_drvdata(dev);
+	int ret;
+
+	/*
+	 * Set minimum bandwidth required to keep data path functional during
+	 * suspend.
+	 */
+	ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
+	if (ret) {
+		dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
+		return ret;
+	}
+
+	/*
+	 * Turn OFF the resources only for controllers without active PCIe
+	 * devices. For controllers with active devices, the resources are kept
+	 * ON and the link is expected to be in L0/L1 (sub)states.
+	 *
+	 * Turning OFF the resources for controllers with active PCIe devices
+	 * will trigger access violation during the end of the suspend cycle,
+	 * as kernel tries to access the PCIe devices config space for masking
+	 * MSIs.
+	 *
+	 * Also, it is not desirable to put the link into L2/L3 state as that
+	 * implies VDD supply will be removed and the devices may go into
+	 * powerdown state. This will affect the lifetime of the storage devices
+	 * like NVMe.
+	 */
+	if (!dw_pcie_link_up(pcie->pci)) {
+		qcom_pcie_host_deinit(&pcie->pci->pp);
+		pcie->suspended = true;
+	}
+
+	return 0;
+}
+
+static int qcom_pcie_resume_noirq(struct device *dev)
+{
+	struct qcom_pcie *pcie = dev_get_drvdata(dev);
+	int ret;
+
+	if (pcie->suspended) {
+		ret = qcom_pcie_host_init(&pcie->pci->pp);
+		if (ret)
+			return ret;
+
+		pcie->suspended = false;
+	}
+
+	qcom_pcie_icc_update(pcie);
+
+	return 0;
+}
+
 static const struct of_device_id qcom_pcie_match[] = {
 	{ .compatible = "qcom,pcie-apq8064", .data = &cfg_2_1_0 },
 	{ .compatible = "qcom,pcie-apq8084", .data = &cfg_1_0_0 },
@@ -1856,12 +1913,17 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0302, qcom_fixup_class);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1000, qcom_fixup_class);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1001, qcom_fixup_class);
 
+static const struct dev_pm_ops qcom_pcie_pm_ops = {
+	NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_suspend_noirq, qcom_pcie_resume_noirq)
+};
+
 static struct platform_driver qcom_pcie_driver = {
 	.probe = qcom_pcie_probe,
 	.driver = {
 		.name = "qcom-pcie",
 		.suppress_bind_attrs = true,
 		.of_match_table = qcom_pcie_match,
+		.pm = &qcom_pcie_pm_ops,
 	},
 };
 builtin_platform_driver(qcom_pcie_driver);