
[RFCv4,0/7] interconnect: Add imx support via devfreq

Message ID cover.1566570260.git.leonard.crestez@nxp.com (mailing list archive)

Message

Leonard Crestez Aug. 23, 2019, 2:36 p.m. UTC
This series adds imx support for interconnect via devfreq: the ICC
framework is used to aggregate bandwidth requests from devices and those
are then converted to DEV_PM_QOS_MIN_FREQUENCY requests for devfreq.
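
As a rough illustration of that flow (a sketch only, not code from the
series; the imx_icc_node fields and the bus-width conversion are
assumptions), an icc provider ->set() callback could turn the aggregated
bandwidth on a node into a frequency floor for the corresponding devfreq
device:

static int imx_icc_set(struct icc_node *src, struct icc_node *dst)
{
	struct imx_icc_node *node_data = dst->data;	/* assumed per-node data */
	u32 freq_khz;
	int ret;

	/* aggregated bandwidth is in kBps; dividing by the bus width in
	 * bytes gives a clock floor in kHz, the unit used by
	 * DEV_PM_QOS_MIN_FREQUENCY */
	freq_khz = div_u64(dst->avg_bw, node_data->bus_width_bytes);

	/* forward the floor to the devfreq device as a PM QoS request */
	ret = dev_pm_qos_update_request(&node_data->min_freq_req, freq_khz);
	return ret < 0 ? ret : 0;
}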

The devfreq parts are posted separately; this series only intersects with
them in devicetree: https://patchwork.kernel.org/cover/11104113/

Since there is no single devicetree node that can represent the "interconnect",
a new API is added to allow individual devfreq nodes to act as parsing proxies,
all mapping to a single soc-level icc provider. This is still RFC because of
this.

The rest of the changes are small and deal with review comments.

Changes since RFCv3:
* Remove the virtual "icc" node and add devfreq nodes as proxy providers
* Fix build on 32-bit arm (reported by kbuild test robot)
* Remove ARCH_MXC_ARM64 (never existed in upstream)
* Remove _numlinks, calculate instead
* Replace __BUSFREQ_H header guard
* Improve commit message and comment spelling
* Fix checkpatch issues
Link to RFCv3: https://patchwork.kernel.org/cover/11078671/

Changes since RFCv2 and initial work by Alexandre Bailon:
* Relying on devfreq and dev_pm_qos instead of CLK
* No more "platform opp" stuff
* No more special suspend handling: use suspend-opp on devfreq instead.
* Replace all mentions of "busfreq" with "interconnect"
Link to v2: https://patchwork.kernel.org/patch/11056789/

Leonard Crestez (7):
  PM / devfreq: Add devfreq_get_devfreq_by_node
  interconnect: Add of_icc_add_proxy
  dt-bindings: devfreq: imx: Describe interconnect properties
  interconnect: Add imx core driver
  interconnect: imx: Add platform driver for imx8mm
  soc: imx8mm: Register interconnect platform device
  arm64: dts: imx8mm: Add interconnect properties

 .../devicetree/bindings/devfreq/imx-ddrc.yaml |   5 +
 .../devicetree/bindings/devfreq/imx.yaml      |   5 +
 arch/arm64/boot/dts/freescale/imx8mm.dtsi     |   5 +
 drivers/devfreq/devfreq.c                     |  42 ++-
 drivers/interconnect/Kconfig                  |   1 +
 drivers/interconnect/Makefile                 |   1 +
 drivers/interconnect/core.c                   |  88 +++++-
 drivers/interconnect/imx/Kconfig              |   9 +
 drivers/interconnect/imx/Makefile             |   2 +
 drivers/interconnect/imx/imx.c                | 279 ++++++++++++++++++
 drivers/interconnect/imx/imx.h                |  60 ++++
 drivers/interconnect/imx/imx8mm.c             | 105 +++++++
 drivers/soc/imx/soc-imx8.c                    |   4 +
 include/dt-bindings/interconnect/imx8mm.h     |  49 +++
 include/linux/devfreq.h                       |   1 +
 include/linux/interconnect-provider.h         |   7 +
 16 files changed, 645 insertions(+), 18 deletions(-)
 create mode 100644 drivers/interconnect/imx/Kconfig
 create mode 100644 drivers/interconnect/imx/Makefile
 create mode 100644 drivers/interconnect/imx/imx.c
 create mode 100644 drivers/interconnect/imx/imx.h
 create mode 100644 drivers/interconnect/imx/imx8mm.c
 create mode 100644 include/dt-bindings/interconnect/imx8mm.h

Comments

Leonard Crestez Sept. 16, 2019, 12:34 p.m. UTC | #1
On 23.08.2019 17:37, Leonard Crestez wrote:
> This series add imx support for interconnect via devfreq: the ICC
> framework is used to aggregate requests from devices and then those are
> converted to DEV_PM_QOS_MIN_FREQUENCY requests for devfreq.
>  
> Since there is no single devicetree node that can represent the "interconnect"
> new API is added to allow individual devfreq nodes to act as parsing proxies
> all mapping to a single soc-level icc provider. This is still RFC
> because of this

Any comments? I made a lot of changes relative to previous versions, 
most of them solely to avoid adding a virtual node in DT bindings.

The only current interconnect provider implementation is for qcom and it 
uses a firmware node as the provider node (with #interconnect-cells). 
However, there is no obvious equivalent of that for imx and many other SoCs.

On imx there are multiple pieces of scalable fabric which can be defined 
in DT as devfreq devices and it sort of makes sense to add 
#interconnect-cells to those. However, when it comes to describing the 
SoC interconnect graph it's much more convenient to have a single 
per-SoC platform driver.

My solution is to add an "icc_proxy" API so that a single platform-level 
interconnect provider can be referenced through multiple DT nodes. Does 
this make sense?

The implementation is not very pretty: the interconnect platform device 
ends up enumerating the entire devicetree in order to find proxies.
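
Roughly something like this (a sketch under assumptions: of_icc_add_proxy() 
is the helper proposed in patch 2/7 and the exact signature used here is a 
guess):

struct device_node *np;

/* walk every node that declares #interconnect-cells and attach it to
 * the single soc-level provider as a parsing proxy */
for_each_node_with_property(np, "#interconnect-cells") {
	if (np == provider->dev->of_node)
		continue;
	of_icc_add_proxy(np, provider);	/* assumed signature */
}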

Right now the interconnect API uses a relatively standard split between 
consumer and "provider", but I think it might make sense to have 
separate abstractions for "graph" and "midnode". A "midnode" could act 
as a DT proxy if there is no single representation of the "interconnect", 
and it could support custom scaling for itself (with the default being 
scaling kbps into MIN_FREQ).

There are also other options:
  * Pick one "main" bus and bless it as the "interconnect provider". I 
want to represent buses as devfreq devices so I would have to call from 
devfreq to ICC for registration somehow.
  * Maybe the "no virtual device" rule could be relaxed for the 
interconnect subsystem?

--
Regards,
Leonard
Martin Kepplinger-Novakovic Sept. 21, 2019, 6:06 a.m. UTC | #2
On 23.08.19 16:36, Leonard Crestez wrote:
> This series add imx support for interconnect via devfreq: the ICC
> framework is used to aggregate requests from devices and then those are
> converted to DEV_PM_QOS_MIN_FREQUENCY requests for devfreq.
> 
> The devfreq parts are posted separately, this series only intersects in
> devicetree: https://patchwork.kernel.org/cover/11104113/
> 
> Since there is no single devicetree node that can represent the "interconnect"
> new API is added to allow individual devfreq nodes to act as parsing proxies
> all mapping to a single soc-level icc provider. This is still RFC
> because this
> 
> The rest of the changes are small and deal with review comments.
> 

hi Leonard,

When I run https://github.com/cdleonard/linux/tree/next_imx_busfreq and
https://github.com/cdleonard/arm-trusted-firmware/tree/imx_2.0.y_busfreq
on imx8mq, probe() fails:

[    1.082847] imx-ddrc-devfreq 3d400000.dram-controller: failed to init
firmware freq info: -19
[    1.091434] imx-ddrc-devfreq: probe of 3d400000.dram-controller
rejects match -19

in imx_ddrc_init_freq_info()'s check:

if (priv->freq_count <= 0 || priv->freq_count > IMX_DDRC_MAX_FREQ_COUNT)

Would that indicate that I'm missing something in ATF? I'm pretty sure
I'm running your tree though.

Does anything come to mind that I might be doing wrong? Am I
running your "correct" linux tree? Remember I'm on imx8mq, so I'm not
testing exactly this RFC of yours.

Also, what are your plans to rebase / update your ATF tree?

And since there's a lot in there: what additions are necessary for this
devfreq to work?

Lastly, how do you test? Is /sys/class/devfreq supposed to get populated?

thanks for your work!

                                    martin
Leonard Crestez Sept. 23, 2019, 9:22 p.m. UTC | #3
On 21.09.2019 09:07, Martin Kepplinger wrote:
> On 23.08.19 16:36, Leonard Crestez wrote:
>> This series add imx support for interconnect via devfreq: the ICC
>> framework is used to aggregate requests from devices and then those are
>> converted to DEV_PM_QOS_MIN_FREQUENCY requests for devfreq.
>>
>> The devfreq parts are posted separately, this series only intersects in
>> devicetree.
>>
>> Since there is no single devicetree node that can represent the "interconnect"
>> new API is added to allow individual devfreq nodes to act as parsing proxies
>> all mapping to a single soc-level icc provider. This is still RFC
>> because this
>>
>> The rest of the changes are small and deal with review comments.

> on imx8mq, probe() fails:
> 
> [    1.082847] imx-ddrc-devfreq 3d400000.dram-controller: failed to init
> firmware freq info: -19
> [    1.091434] imx-ddrc-devfreq: probe of 3d400000.dram-controller
> rejects match -19
> 
> in imx_ddrc_init_freq_info()'s check:
> 
> if (priv->freq_count <= 0 || priv->freq_count > IMX_DDRC_MAX_FREQ_COUNT)
> 
> That would indicate that I'm missing something in ATF? I'm pretty sure
> I'm running your tree though.

What is your board and uboot version? I tested on imx8mq-evk (SoC Rev 
B1) with NXP uboot. For example, this uboot release works:

https://source.codeaurora.org/external/imx/uboot-imx/log/?h=imx_v2019.04_4.19.35_1.0.0

It is uboot which trains DDR for multiple frequencies and then passes 
that info to ATF. I'm not sure about the steps required to enable this 
for 3rd-party boards; it should be the same as for busfreq from the NXP tree.

Getting this to work on a 3rd-party board would be interesting.

> Does anything come to your mind what I might be doing wrong? Am I
> running your "correct" linux tree? Remember I'm on imx8mq, so I don't
> exactly test this RFC of yours.
> 
> Also, how are your plans to rebase / update your ATF tree?

The ATF changes will show up in a future release of the NXP ATF branch; 
when that happens I will drop my branch. NXP ATF releases are on CAF:

     https://source.codeaurora.org/external/imx/imx-atf/

> And since there's a lot in there: what additions are necessary for this
> devfreq to work?

The devfreq imx support is here: https://patchwork.kernel.org/cover/11104113/
Interconnect support also needs PM QoS support for devfreq:

     https://patchwork.kernel.org/cover/11157649/

> Lastly, how do you test? Is /sys/class/devfreq supposted to get populated?

Yes, and only the devfreq patches are required for that.

# cat /sys/class/devfreq/devfreq0/available_frequencies
25000000 100000000 800000000
# cat /sys/class/devfreq/devfreq0/cur_freq
800000000

You should be able to control frequencies manually with the userspace 
governor:
# echo "userspace" > /sys/class/devfreq/devfreq0/governor
# echo "25000000" > /sys/class/devfreq/devfreq0/userspace/set_freq

This series allows devices to request guaranteed bandwidth for 
themselves through the interconnect subsystem. Without it DRAM freq will 
still switch, but you'll have problems like display corruption when the 
display turns on before the freq goes up. You can check that probe worked 
by doing:

# cat /sys/kernel/debug/interconnect/interconnect_summary

--
Regards,
Leonard
Georgi Djakov Sept. 25, 2019, 2:37 a.m. UTC | #4
Hi Leonard,

On 9/16/19 05:34, Leonard Crestez wrote:
> On 23.08.2019 17:37, Leonard Crestez wrote:
>> This series add imx support for interconnect via devfreq: the ICC
>> framework is used to aggregate requests from devices and then those are
>> converted to DEV_PM_QOS_MIN_FREQUENCY requests for devfreq.
>>  
>> Since there is no single devicetree node that can represent the "interconnect"
>> new API is added to allow individual devfreq nodes to act as parsing proxies
>> all mapping to a single soc-level icc provider. This is still RFC
>> because of this
> 
> Any comments? I made a lot of changes relative to previous versions, 
> most of them solely to avoid adding a virtual node in DT bindings.
> 
> The only current interconnect provider implementation is for qcom and it 
> uses a firmware node as the provider node (with #interconnect-cells). 
> However there is no obvious equivalent of that for imx and many other SOCs.

Not sure if it will help, but have you seen the qcs404 interconnect driver?
There is also an mt8183 interconnect provider driver on LKML.

> On imx there are multiple pieces of scalable fabric which can be defined 
> in DT as devfreq devices and it sort of makes sense to add 
> #interconnect-cells to those. However when it comes to describing the 
> SOC interconnect graph it's much more convenient to have a single 
> per-SOC platform driver.

Is all the NoC configuration done only by ATF? Are there any NoC-related
memory-mapped registers?

Thanks,
Georgi
Leonard Crestez Sept. 25, 2019, 10:52 p.m. UTC | #5
On 25.09.2019 05:38, Georgi Djakov wrote:
> Hi Leonard,
> 
> On 9/16/19 05:34, Leonard Crestez wrote:
>> On 23.08.2019 17:37, Leonard Crestez wrote:
>>> This series add imx support for interconnect via devfreq: the ICC
>>> framework is used to aggregate requests from devices and then those are
>>> converted to DEV_PM_QOS_MIN_FREQUENCY requests for devfreq.
>>>   
>>> Since there is no single devicetree node that can represent the "interconnect"
>>> new API is added to allow individual devfreq nodes to act as parsing proxies
>>> all mapping to a single soc-level icc provider. This is still RFC
>>> because of this
>>
>> Any comments? I made a lot of changes relative to previous versions,
>> most of them solely to avoid adding a virtual node in DT bindings.
>>
>> The only current interconnect provider implementation is for qcom and it
>> uses a firmware node as the provider node (with #interconnect-cells).
>> However there is no obvious equivalent of that for imx and many other SOCs.
> 
> Not sure if it will help, but have you seen the qcs404 interconnect driver?
> There is also mt8183 interconnect provider driver on LKML.

Yes, but only yesterday. The qcs404 driver involves multiple DT devices 
so it seems closer to imx.

As far as I understand from reading qcs404 source:

* There is no struct device representing the entire graph.
* There are multiple NOCs and each registers itself as a separate 
interconnect provider.
* Each NOC registers multiple icc_nodes of various sorts:
   * Device masters and slaves
   * Some nodes representing NoC ports?
   * Multiple internal nodes
* There is a single per-SoC master list of QNOCs in the qcs404 driver.
* The QNOCs can reference each other between multiple providers.
* Each NOC registers an icc_provider and a subset of the graph.
* The multiple NoCs inside a chip are distinguished by compat strings. 
This seems strange: aren't they really just different instantiations of 
the same IP with small config changes?

This design is still quite odd; what would make sense to me is to 
register the "interconnect graph" once and then provide multiple 
"interconnect scalers" which handle the aggregated requests for certain 
specific nodes.
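
Expressed as data structures the split might look like this (purely a 
sketch on top of linux/interconnect-provider.h and linux/pm_qos.h, all 
names made up):

struct imx_icc_graph {
	struct icc_provider provider;	/* registered once per SoC */
	struct icc_node **nodes;
	int num_nodes;
};

struct imx_icc_scaler {
	struct icc_node *node;		/* the midnode this device scales */
	struct dev_pm_qos_request min_freq_req;	/* frequency floor on its devfreq device */
};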

>> On imx there are multiple pieces of scalable fabric which can be defined
>> in DT as devfreq devices and it sort of makes sense to add
>> #interconnect-cells to those. However when it comes to describing the
>> SOC interconnect graph it's much more convenient to have a single
>> per-SOC platform driver.
> 
> Is all the NoC configuration done only by ATF? Are there any NoC related memory
> mapped registers?

Registers are memory-mapped and visible to the A-cores but should only 
be accessed through secure transactions. This means that configuration 
needs to be done by ATF in EL3 (we don't support running Linux in the 
secure world on imx8m). There is no "remote processor" managing this on 
imx8m.

On older imx6/7 chips we actually have two out-of-tree implementations 
of bus freq switching code: an older one in Linux (used when running in 
the secure world) and a different one in OP-TEE for running Linux in the 
non-secure world.

NoC registers can be used to control some "transaction priority" bits 
but I don't want to expose that part right now.

What determines bandwidth versus power consumption is the NoC clk rate, 
and clocks are managed by Linux directly.
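
In devfreq terms that ends up looking roughly like this in the NoC 
device's target() callback (a sketch, not the posted driver; the priv 
struct and clock handle are assumptions):

static int imx_noc_target(struct device *dev, unsigned long *freq, u32 flags)
{
	struct imx_noc *priv = dev_get_drvdata(dev);
	struct dev_pm_opp *opp;

	/* clamp the requested rate to a supported OPP */
	opp = devfreq_recommended_opp(dev, freq, flags);
	if (IS_ERR(opp))
		return PTR_ERR(opp);
	dev_pm_opp_put(opp);

	/* program the NoC clock directly, no firmware call involved */
	return clk_set_rate(priv->noc_clk, *freq);
}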

DVFS on the RAM controller (DDRC) is also important. That component is 
only a bus slave and frequency switching requires a complex sequence 
inside ATF.

--
Regards,
Leonard
Georgi Djakov Sept. 26, 2019, 12:51 a.m. UTC | #6
On 9/25/19 15:52, Leonard Crestez wrote:
> On 25.09.2019 05:38, Georgi Djakov wrote:
>> Hi Leonard,
>>
>> On 9/16/19 05:34, Leonard Crestez wrote:
>>> On 23.08.2019 17:37, Leonard Crestez wrote:
>>>> This series add imx support for interconnect via devfreq: the ICC
>>>> framework is used to aggregate requests from devices and then those are
>>>> converted to DEV_PM_QOS_MIN_FREQUENCY requests for devfreq.
>>>>   
>>>> Since there is no single devicetree node that can represent the "interconnect"
>>>> new API is added to allow individual devfreq nodes to act as parsing proxies
>>>> all mapping to a single soc-level icc provider. This is still RFC
>>>> because of this
>>>
>>> Any comments? I made a lot of changes relative to previous versions,
>>> most of them solely to avoid adding a virtual node in DT bindings.
>>>
>>> The only current interconnect provider implementation is for qcom and it
>>> uses a firmware node as the provider node (with #interconnect-cells).
>>> However there is no obvious equivalent of that for imx and many other SOCs.
>>
>> Not sure if it will help, but have you seen the qcs404 interconnect driver?
>> There is also mt8183 interconnect provider driver on LKML.
> 
> Yes, but only yesterday. The qcs404 driver involves multiple DT devices 
> so it seems closer to imx.
> 
> As far as I understand from reading qcs404 source:
> 
> * There is no struct device representing the entire graph.
> * There are multiple NOCs and each registers itself as a separate 
> interconnect provider.
> * Each NOC registers multiple icc_nodes of various sorts:
>    * Device masters and slaves
>    * Some nodes representing NoC ports?

Well, all nodes represent ports.

>    * Multiple internal nodes
> * There is single per-SOC master list of QNOCs in the qcs404 driver.
> * The QNOCs can reference each other between multiple providers.
> * Each NOC registers an icc_provider and a subset of the graph.
> * The multiple NoC inside a chip are distinguished by compat strings. 
> This seems strange, aren't they really different instantiations of the 
> same IP with small config changes?

No, they are different IPs - AHB-, AXI- or custom-based.

> This design is still quite odd, what would make sense to me is to 
> register the "interconnect graph" once and then provide multiple 
> "interconnect scalers" which handle the aggregated requests for certain 
> specific nodes.
> 
>>> On imx there are multiple pieces of scalable fabric which can be defined
>>> in DT as devfreq devices and it sort of makes sense to add
>>> #interconnect-cells to those. However when it comes to describing the
>>> SOC interconnect graph it's much more convenient to have a single
>>> per-SOC platform driver.
>>
>> Is all the NoC configuration done only by ATF? Are there any NoC related memory
>> mapped registers?
> 
> Registers are memory-mapped and visible to the A-cores but should only 
> be accessed through secure transactions. This means that configuration 
> needs be done by ATF in EL3 (we don't support running linux in secure 
> world on imx8m). There is no "remote processor" managing this on imx8m.

Can we create some NoC DT node with its memory-mapped address and make
it an interconnect provider? That sounds to me like a more correct
representation of the hardware.
Another option would be to bless some PSCI DT node (for example) to be a
provider.

> 
> On older imx6/7 chips we actually have two out-of-tree implementations 
> of bus freq switching code: An older one in Linux (used when running in 
> secure world) and a different one in optee for running Linux in 
> non-secure world.
> 
> NoC registers can be used to control some "transaction priority" bits 
> but I don't want to expose that part right now.

This is very similar to some of the Qcom hardware.

> What determines bandwidth versus power consumption is the NoC clk rate 
> and clocks are managed by Linux directly.

So you will need to describe these clocks in the interconnect provider
DT node like on qcs404.

> DVFS on the RAM controller (DDRC) is also important. That component is 
> only a bus slave and frequency switching requires a complex sequence 
> inside ATF.

Makes sense.

Thanks,
Georgi
Leonard Crestez Sept. 30, 2019, 12:31 p.m. UTC | #7
On 2019-09-26 3:52 AM, Georgi Djakov wrote:
> On 9/25/19 15:52, Leonard Crestez wrote:
>> On 25.09.2019 05:38, Georgi Djakov wrote:
>>> Hi Leonard,
>>>
>>> On 9/16/19 05:34, Leonard Crestez wrote:
>>>> On 23.08.2019 17:37, Leonard Crestez wrote:
>>>>> This series add imx support for interconnect via devfreq: the ICC
>>>>> framework is used to aggregate requests from devices and then those are
>>>>> converted to DEV_PM_QOS_MIN_FREQUENCY requests for devfreq.
>>>>>    
>>>>> Since there is no single devicetree node that can represent the "interconnect"
>>>>> new API is added to allow individual devfreq nodes to act as parsing proxies
>>>>> all mapping to a single soc-level icc provider. This is still RFC
>>>>> because of this
>>>>
>>>> Any comments? I made a lot of changes relative to previous versions,
>>>> most of them solely to avoid adding a virtual node in DT bindings.
>>>>
>>>> The only current interconnect provider implementation is for qcom and it
>>>> uses a firmware node as the provider node (with #interconnect-cells).
>>>> However there is no obvious equivalent of that for imx and many other SOCs.
>>>
>>> Not sure if it will help, but have you seen the qcs404 interconnect driver?
>>> There is also mt8183 interconnect provider driver on LKML.
>>
>> Yes, but only yesterday. The qcs404 driver involves multiple DT devices
>> so it seems closer to imx.
>>
>> As far as I understand from reading qcs404 source:
>>
>> * There is no struct device representing the entire graph.
>> * There are multiple NOCs and each registers itself as a separate
>> interconnect provider.
>> * Each NOC registers multiple icc_nodes of various sorts:
>>     * Device masters and slaves
>>     * Some nodes representing NoC ports?
> 
> Well, all nodes are representing ports.
> 
>>     * Multiple internal nodes
>> * There is single per-SOC master list of QNOCs in the qcs404 driver.
>> * The QNOCs can reference each other between multiple providers.
>> * Each NOC registers an icc_provider and a subset of the graph.
>> * The multiple NoC inside a chip are distinguished by compat strings.
>> This seems strange, aren't they really different instantiations of the
>> same IP with small config changes?
> 
> No, they are different IPs - ahb, axi or custom based.

On IMX some of the buses are just different instantiations.

Would it make sense to standardize an "interconnect-node-id" property to 
identify middle nodes? For example, if you have nearly identical "audio", 
"display" and "vpu" NICs then this property would make it possible to map 
from a DT node to an ICC graph node.
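
The lookup on the provider side could then be something like this (sketch 
only; the property is the hypothetical one above and imx_icc_find_node() 
stands in for whatever per-SoC lookup would exist):

static struct icc_node *imx_icc_node_from_dt(struct device_node *np)
{
	u32 node_id;

	/* read the hypothetical "interconnect-node-id" property */
	if (of_property_read_u32(np, "interconnect-node-id", &node_id))
		return ERR_PTR(-ENOENT);

	/* map the id onto a node in the per-SoC interconnect graph */
	return imx_icc_find_node(node_id);
}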

>> This design is still quite odd, what would make sense to me is to
>> register the "interconnect graph" once and then provide multiple
>> "interconnect scalers" which handle the aggregated requests for certain
>> specific nodes.
>>
>>>> On imx there are multiple pieces of scalable fabric which can be defined
>>>> in DT as devfreq devices and it sort of makes sense to add
>>>> #interconnect-cells to those. However when it comes to describing the
>>>> SOC interconnect graph it's much more convenient to have a single
>>>> per-SOC platform driver.
>>>
>>> Is all the NoC configuration done only by ATF? Are there any NoC related memory
>>> mapped registers?
>>
>> Registers are memory-mapped and visible to the A-cores but should only
>> be accessed through secure transactions. This means that configuration
>> needs be done by ATF in EL3 (we don't support running linux in secure
>> world on imx8m). There is no "remote processor" managing this on imx8m.
> 
> Can we create some noc DT node with it's memory mapped address and make
> it an interconnect provider? Sounds to me like a more correct representation
> of the hardware?

This is what I did; it's just that the initial binding is in this series:
https://patchwork.kernel.org/cover/11104113/
https://patchwork.kernel.org/patch/11104137/
https://patchwork.kernel.org/patch/11104141/

The nodes are scaled via devfreq and interconnect comes "on top" to make 
device bandwidth requests.

I think using devfreq is valuable, for example:
  * DDRC can support reactive scaling based on performance counters
  * The NOC can run at different voltages so it should have its own OPP 
table.

> Other option would be to bless some PSCI DT node (for example) to be a
> provider.

I don't think this would be a good fit; I want to support different 
interconnect nodes with different underlying interfaces on the same SoC.

There is no abstraction layer in firmware, so abstractions for different 
interconnect midnodes should be in Linux instead.

>> On older imx6/7 chips we actually have two out-of-tree implementations
>> of bus freq switching code: An older one in Linux (used when running in
>> secure world) and a different one in optee for running Linux in
>> non-secure world.
>>
>> NoC registers can be used to control some "transaction priority" bits
>> but I don't want to expose that part right now.
> 
> This is very similar to some of the Qcom hardware.

The NoC IP is licensed from Arteris, which was bought out by Qcom. The 
documentation is not public though, and there are likely many differences 
versus what Qcom has in their own chips.

>> What determines bandwidth versus power consumption is the NoC clk rate
>> and clocks are managed by Linux directly.
> 
> So you will need to describe these clocks in the interconnect provider
> DT node like on qcs404.

I already implemented the nodes as devfreq providers and DDRC even 
includes ondemand reactive scaling support:

https://patchwork.kernel.org/patch/11104139/
https://patchwork.kernel.org/patch/11104145/
https://patchwork.kernel.org/patch/11104143/

I could just pick the "main" NOC and turn that into the "only" 
interconnect provider. Something like this:

/* rough sketch: of_property_read_bool() is the real OF helper, the
 * registration function is just a placeholder name */
if (of_property_read_bool(noc_device->of_node, "#interconnect-cells")) {
     register_soc_icc_driver(noc_device);
}

This would get rid of the icc_proxy stuff but fetching references to 
other scalable nodes would require some other way to identify them.

--
Regards,
Leonard