Message ID: cover.1585188174.git.leonard.crestez@nxp.com
Series: interconnect: Add imx support via devfreq
On 26.03.20 03:16, Leonard Crestez wrote:
> This series adds interconnect scaling support for imx8m series chips. It uses a
> per-SOC interconnect provider layered on top of multiple instances of devfreq
> for scalable nodes along the interconnect.
>
> Existing qcom interconnect providers mostly translate bandwidth requests into
> firmware calls but the equivalent firmware on imx8m is much thinner. Scaling
> support for individual nodes is implemented as distinct devfreq drivers
> instead.
>
> The imx interconnect provider doesn't communicate with devfreq directly
> but rather computes "minimum frequencies" for nodes along the path and
> creates dev_pm_qos requests.
>
> Since there is no single devicetree node that can represent the
> "interconnect", the main NOC is picked as the "interconnect provider" and
> will probe the interconnect platform device if #interconnect-cells is
> present. This avoids introducing "virtual" devices, but it means that the DT
> bindings of the main NOC include properties for both devfreq and
> interconnect.
>
> Only the ddrc and main noc are scalable right now but more can be added.
>
> Also available on a github branch (with various unrelated changes):
> https://github.com/cdleonard/linux/tree/next
> Testing currently requires the NXP branch of atf+uboot.
>
> Martin: I believe you should be able to use this to control DRAM
> frequency from video by just adding interconnect consumer code to
> nwl-dsi. Sample code:
> https://github.com/cdleonard/linux/commit/43772762aa5045f1ce5623740f9a4baef988d083
> https://github.com/cdleonard/linux/commit/7b601e981b1f517b5d98b43bde292972ded13086

Thanks for updating this series Leonard! A few questions for my
understanding before trying to test:

Isn't the ddrc_opp_table missing in these additions to the DT? That's
what I want to scale after all.

If I want to keep calling the "request", now icc_set_bw(), in nwl-dsi:
I'd add an "interconnects" property to the node, but what would be my
interconnect master? i.e.:

  interconnects = <&noc master? &noc IMX8MQ_ICS_DRAM>;

At least it's not obvious to me from interconnect/imx/imx8mq.c.

The interconnect framework seems to be powerful indeed, but I still need
to fully wrap my head around it.

Thanks for the help so far,

martin

> Changes since RFCv6:
> * Replace the scalable-nodes stuff with just a fsl,ddrc property. Future
>   scalable nodes can be added as additional phandles on the NOC
> * Allow building interconnect drivers as modules
> * Handle icc_provider_del errors in imx_icc_unregister (like EBUSY)
> * Rename imx-devfreq to imx-bus, similar to exynos-bus
> * Explain why imx bus clock enabling is not required
> * All dependencies accepted (some time ago)
> Link: https://patchwork.kernel.org/cover/11244421/
>
> Changes since RFCv5:
> * Replace scanning for interconnect-node-id with an explicit
>   scalable-nodes/scalable-node-ids property on the NoC
> * Now passes `make dtbs_check`
> * Remove struct imx_icc_provider
> * Switch to of_icc_xlate_onecell
> * Use of_find_device_by_node to fetch the QoS target; this causes fewer probe
>   deferrals, removes the dependency on the devfreq API and even allows
>   reloading the ddrc module at runtime
> * Add imx_icc_node_destroy helper
> * Remove 0/1 on DEFINE_BUS_SLAVE/MASTER which created spurious links
> Link: https://patchwork.kernel.org/cover/11222015/
>
> Changes since RFCv4:
> * Drop icc proxy nonsense
> * Make the devfreq driver for the NOC probe the ICC driver if
>   #interconnect-cells is present
> * Move NOC support to the interconnect series and rename the node in DT
> * Add support for all chips at once; the differences are not interesting
>   and there is more community interest in 8mq than 8mm
> Link: https://patchwork.kernel.org/cover/11111865/
>
> Changes since RFCv3:
> * Remove the virtual "icc" node and add devfreq nodes as proxy providers
> * Fix build on 32-bit arm (reported by kbuild test robot)
> * Remove ARCH_MXC_ARM64 (never existed upstream)
> * Remove _numlinks, calculate instead
> * Replace __BUSFREQ_H header guard
> * Improve commit message and comment spelling
> * Fix checkpatch issues
> Link to RFCv3: https://patchwork.kernel.org/cover/11078671/
>
> Changes since RFCv2 and initial work by Alexandre Bailon:
> * Rely on devfreq and dev_pm_qos instead of CLK
> * No more "platform opp" stuff
> * No more special suspend handling: use suspend-opp on devfreq instead
> * Replace all mentions of "busfreq" with "interconnect"
> Link to v2: https://patchwork.kernel.org/cover/11021563/
>
> Leonard Crestez (8):
>   dt-bindings: interconnect: Add bindings for imx8m noc
>   PM / devfreq: Add generic imx bus scaling driver
>   PM / devfreq: imx: Register interconnect device
>   interconnect: Add imx core driver
>   interconnect: imx: Add platform driver for imx8mm
>   interconnect: imx: Add platform driver for imx8mq
>   interconnect: imx: Add platform driver for imx8mn
>   arm64: dts: imx8m: Add NOC nodes
>
>  .../bindings/interconnect/fsl,imx8m-noc.yaml | 138 ++++++++
>  arch/arm64/boot/dts/freescale/imx8mm.dtsi    |  24 ++
>  arch/arm64/boot/dts/freescale/imx8mn.dtsi    |  24 ++
>  arch/arm64/boot/dts/freescale/imx8mq.dtsi    |  24 ++
>  drivers/devfreq/Kconfig                      |   9 +
>  drivers/devfreq/Makefile                     |   1 +
>  drivers/devfreq/imx-bus.c                    | 181 +++++++++++
>  drivers/interconnect/Kconfig                 |   1 +
>  drivers/interconnect/Makefile                |   1 +
>  drivers/interconnect/imx/Kconfig             |  17 +
>  drivers/interconnect/imx/Makefile            |   9 +
>  drivers/interconnect/imx/imx.c               | 298 ++++++++++++++++++
>  drivers/interconnect/imx/imx.h               |  62 ++++
>  drivers/interconnect/imx/imx8mm.c            | 108 +++++++
>  drivers/interconnect/imx/imx8mn.c            |  97 ++++++
>  drivers/interconnect/imx/imx8mq.c            | 106 +++++++
>  include/dt-bindings/interconnect/imx8mm.h    |  49 +++
>  include/dt-bindings/interconnect/imx8mn.h    |  41 +++
>  include/dt-bindings/interconnect/imx8mq.h    |  48 +++
>  19 files changed, 1238 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/interconnect/fsl,imx8m-noc.yaml
>  create mode 100644 drivers/devfreq/imx-bus.c
>  create mode 100644 drivers/interconnect/imx/Kconfig
>  create mode 100644 drivers/interconnect/imx/Makefile
>  create mode 100644 drivers/interconnect/imx/imx.c
>  create mode 100644 drivers/interconnect/imx/imx.h
>  create mode 100644 drivers/interconnect/imx/imx8mm.c
>  create mode 100644 drivers/interconnect/imx/imx8mn.c
>  create mode 100644 drivers/interconnect/imx/imx8mq.c
>  create mode 100644 include/dt-bindings/interconnect/imx8mm.h
>  create mode 100644 include/dt-bindings/interconnect/imx8mn.h
>  create mode 100644 include/dt-bindings/interconnect/imx8mq.h
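Martin's question above — which master to put in the "interconnects" property — would look roughly like the fragment below under the bindings proposed in this series. Note this is a sketch, not from the patches: the IMX8MQ_ICM_LCDIF id is assumed from the proposed include/dt-bindings/interconnect/imx8mq.h, and the &lcdif label is board-specific and hypothetical.

```dts
/* Hypothetical consumer fragment: the display controller (not nwl-dsi)
 * declares a path from its DMA master port to the DRAM slave node. */
&lcdif {
	interconnects = <&noc IMX8MQ_ICM_LCDIF &noc IMX8MQ_ICS_DRAM>;
	interconnect-names = "dram";
};
```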
Hi Martin,

On Thu, Mar 26, 2020 at 02:55:27PM +0100, Martin Kepplinger wrote:
> On 26.03.20 03:16, Leonard Crestez wrote:
> > This series adds interconnect scaling support for imx8m series chips. It uses a
> > per-SOC interconnect provider layered on top of multiple instances of devfreq
> > for scalable nodes along the interconnect.
> [...]
>
> Thanks for updating this series Leonard! A few questions for my
> understanding before trying to test:
>
> Isn't the ddrc_opp_table missing in these additions to the DT? That's
> what I want to scale after all.
>
> If I want to keep calling the "request", now icc_set_bw(), in nwl-dsi:
> I'd add an "interconnects" property to the node, but what would be my
> interconnect master? i.e.: interconnects = <&noc master? &noc
> IMX8MQ_ICS_DRAM>; At least it's not obvious to me from
> interconnect/imx/imx8mq.c

The NWL DSI host controller is fed by DCSS or mxsfb, so any bandwidth
requirements should (as far as I understand things) go into the display
controller driver, since that's what fetches from RAM.

Cheers,
 -- Guido

> the interconnect framework seems to be powerful indeed, but I still need
> to fully wrap my head around it.
>
> thanks for the help so far,
>
> martin
> [...]

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
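Guido's point — that the display controller, as the actual DMA master, should own the bandwidth request — implies sizing the request from the scanout parameters. A self-contained sketch of that arithmetic (the kBps unit matches what icc_set_bw() expects; the mode numbers are illustrative, not from the series):

```c
#include <stdint.h>

/* Peak scanout bandwidth in kBps (the unit icc_set_bw() takes),
 * from mode geometry: pixels * bytes-per-pixel * refresh rate. */
static uint64_t scanout_bw_kbps(uint32_t hactive, uint32_t vactive,
				uint32_t bytes_per_pixel, uint32_t refresh_hz)
{
	uint64_t bytes_per_sec = (uint64_t)hactive * vactive *
				 bytes_per_pixel * refresh_hz;
	return bytes_per_sec / 1000; /* B/s -> kB/s */
}
```

For 1080p at 60 Hz with a 32bpp framebuffer this comes to 497664 kBps, which is the sort of value mxsfb or DCSS would pass to icc_set_bw() for its DRAM path.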
On 27.03.20 12:36, Guido Günther wrote:
> Hi Martin,
> On Thu, Mar 26, 2020 at 02:55:27PM +0100, Martin Kepplinger wrote:
>> On 26.03.20 03:16, Leonard Crestez wrote:
>>> This series adds interconnect scaling support for imx8m series chips. It uses a
>>> per-SOC interconnect provider layered on top of multiple instances of devfreq
>>> for scalable nodes along the interconnect.
>> [...]
>>
>> If I want to keep calling the "request", now icc_set_bw(), in nwl-dsi:
>> I'd add an "interconnects" property to the node, but what would be my
>> interconnect master? i.e.: interconnects = <&noc master? &noc
>> IMX8MQ_ICS_DRAM>; At least it's not obvious to me from
>> interconnect/imx/imx8mq.c
>
> The NWL DSI host controller is fed by DCSS or mxsfb so any bandwidth
> requirements should (as far as I understand things) go into the display
> controller driver since that's what fetches from RAM.
> Cheers,
>  -- Guido

Hi,

Thanks a lot Leonard and Guido! Here's the tree I'm running, which is
your patches based on Linus' tree, with the icc request in mxsfb:

https://source.puri.sm/martin.kepplinger/linux-next/commits/5.6-rc7/librem5__integration_devfreq1

The path from icc via pm_qos to devfreq does work (which is great) -
however only after setting the minimum frequencies via a governor - I
set the "powersave" governor. After that, frequencies are both set to
high / low correctly.

My impression was that I should be able to use the "passive" governor (a
passive devfreq device?). What am I missing with using devfreq
correctly? Or do I already?

Other than the above uncertainty:

Tested-by: Martin Kepplinger <martin.kepplinger@puri.sm>

Thanks so far!

martin
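Martin's observation — that scaling only kicks in once the "powersave" governor is set — fits a simple model of devfreq target selection (a simplification of the real devfreq internals, not the kernel code itself): the governor proposes a frequency, PM QoS constraints clamp it, and the lowest OPP at or above the result is chosen.

```c
#include <stddef.h>
#include <stdint.h>

/* opps sorted ascending; pick the lowest OPP >= target, else the highest */
static uint64_t pick_opp(const uint64_t *opps, size_t n, uint64_t target)
{
	for (size_t i = 0; i < n; i++)
		if (opps[i] >= target)
			return opps[i];
	return opps[n - 1];
}

/* Simplified devfreq model: governor choice clamped by PM QoS min/max.
 * qos_min is what the interconnect provider's dev_pm_qos request raises. */
static uint64_t devfreq_model(const uint64_t *opps, size_t n,
			      uint64_t governor_freq,
			      uint64_t qos_min, uint64_t qos_max)
{
	uint64_t f = governor_freq;
	if (f < qos_min)
		f = qos_min;
	if (f > qos_max)
		f = qos_max;
	return pick_opp(opps, n, f);
}
```

In this model the "powersave" governor always proposes opps[0], so the device idles at the lowest OPP until an icc_set_bw() call raises qos_min — which matches the behavior Martin describes.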
On 2020-03-30 11:52 AM, Martin Kepplinger wrote:
> On 27.03.20 12:36, Guido Günther wrote:
>> Hi Martin,
>> On Thu, Mar 26, 2020 at 02:55:27PM +0100, Martin Kepplinger wrote:
>>> On 26.03.20 03:16, Leonard Crestez wrote:
>>>> This series adds interconnect scaling support for imx8m series chips. It uses a
>>>> per-SOC interconnect provider layered on top of multiple instances of devfreq
>>>> for scalable nodes along the interconnect.
>>> [...]
>>>
>>> Isn't the ddrc_opp_table missing in these additions to the DT? That's
>>> what I want to scale after all.

The DDRC OPP table belongs in the board file because it can vary with
the RAM type on boards.

>>> If I want to keep calling the "request", now icc_set_bw(), in nwl-dsi:
>>> I'd add an "interconnects" property to the node, but what would be my
>>> interconnect master? i.e.: interconnects = <&noc master? &noc
>>> IMX8MQ_ICS_DRAM>; At least it's not obvious to me from
>>> interconnect/imx/imx8mq.c
>>
>> The NWL DSI host controller is fed by DCSS or mxsfb so any bandwidth
>> requirements should (as far as I understand things) go into the display
>> controller driver since that's what fetches from RAM.
>> Cheers,
>>  -- Guido

This is correct.

> Thanks a lot Leonard and Guido! Here's the tree I'm running, which is
> your patches based on Linus' tree, with the icc request in mxsfb:
>
> https://source.puri.sm/martin.kepplinger/linux-next/commits/5.6-rc7/librem5__integration_devfreq1
>
> The path from icc via pm_qos to devfreq does work (which is great) -
> however only after setting the minimum frequencies via a governor - I
> set the "powersave" governor.
>
> After that, frequencies are both set to high / low correctly.
>
> My impression was that I should be able to use the "passive" governor (a
> passive devfreq device?). What am I missing with using devfreq
> correctly? Or do I already?

The devfreq "passive" governor is something else: it's used to make one
devfreq device match frequencies with another devfreq device. Setting
the default governor to "powersave" is correct and roughly matches the
behavior in the NXP kernel.

> Tested-by: Martin Kepplinger <martin.kepplinger@puri.sm>

Thanks!
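The provider-side computation the cover letter describes (bandwidth requests in, "minimum frequencies" out, applied via dev_pm_qos) can be sketched with a simple bandwidth-over-bus-width conversion. This is an assumption for illustration — the actual divisor logic in drivers/interconnect/imx/imx.c may differ:

```c
#include <stdint.h>

/* Model of the imx icc provider's per-node math: sum the average
 * bandwidth (kBps) of all requests crossing a node (as the standard
 * icc aggregation does), then derive the minimum clock (kHz) needed
 * to carry it given the node's data width in bytes per cycle. */
static uint64_t icc_min_freq_khz(const uint64_t *avg_bw_kbps, int n,
				 uint32_t bytes_per_cycle)
{
	uint64_t total = 0;
	for (int i = 0; i < n; i++)
		total += avg_bw_kbps[i];
	/* kB/s divided by bytes/cycle gives kcycles/s, i.e. kHz */
	return total / bytes_per_cycle;
}
```

The resulting frequency would then be pushed into a dev_pm_qos minimum-frequency request on the node's devfreq device, which is how the provider scales the ddrc and NOC without calling devfreq directly.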