[V2,0/5] dts: qcom: Introduce X1E80100 platforms device tree

Message ID 20231117113931.26660-1-quic_sibis@quicinc.com (mailing list archive)

Message

Sibi Sankar Nov. 17, 2023, 11:39 a.m. UTC
This series adds initial device tree support (clocks, pinctrl, rpmhpd,
regulators, interconnects, CPUs, and the SoC and board compatibles) needed to
boot to shell on the Qualcomm X1E80100 platform, aka Snapdragon X Elite.

Our v1 post of the patchset adding support for the Snapdragon X Elite SoC used
the part number sc8380xp; it has now been updated to the new part number
x1e80100 per the new branding scheme. Both part numbers refer to the exact
same SoC.

v2:
* Update the part number from sc8380xp to x1e80100.
* Fixup ordering in the SoC/board bindings. [Krzysztof]
* Add pdc node and add wakeup tlmm parent. [Rajendra]
* Add cpu/cluster idle states. [Bjorn]
* Document reserved gpios. [Konrad]
* Remove L1 and add missing props to L2. [Konrad]
* Remove region suffix. [Konrad]
* Append digits to gcc node. [Konrad]
* Add ICC_TAGS instead of leaving it unspecified. [Konrad]
* Remove double space. [Konrad]
* Leave the size index of memory node untouched. [Konrad]
* Override the serial uart with "qcom,geni-debug-uart" in the board files. [Rajendra]
* Add additional details to patch 5 commit message. [Konrad/Krzysztof]

Dependencies:
clks: https://lore.kernel.org/lkml/20231117092737.28362-1-quic_sibis@quicinc.com/
interconnect: https://lore.kernel.org/lkml/20231117103035.25848-1-quic_sibis@quicinc.com/
llcc: https://lore.kernel.org/lkml/20231117095315.2087-1-quic_sibis@quicinc.com/
misc-bindings: https://lore.kernel.org/lkml/20231117105635.343-1-quic_sibis@quicinc.com/
pinctrl: https://lore.kernel.org/lkml/20231117093921.31968-1-quic_sibis@quicinc.com/
rpmhpd: https://lore.kernel.org/lkml/20231117104254.28862-1-quic_sibis@quicinc.com/

Release Link: https://www.qualcomm.com/news/releases/2023/10/qualcomm-unleashes-snapdragon-x-elite--the-ai-super-charged-plat

Rajendra Nayak (4):
  dt-bindings: arm: cpus: Add qcom,oryon compatible
  dt-bindings: arm: qcom: Document X1E80100 SoC and boards
  arm64: dts: qcom: Add base X1E80100 dtsi and the QCP dts
  arm64: defconfig: Enable X1E80100 SoC base configs

Sibi Sankar (1):
  arm64: dts: qcom: x1e80100: Add Compute Reference Device

 .../devicetree/bindings/arm/cpus.yaml         |    1 +
 .../devicetree/bindings/arm/qcom.yaml         |    8 +
 arch/arm64/boot/dts/qcom/Makefile             |    2 +
 arch/arm64/boot/dts/qcom/x1e80100-crd.dts     |  425 ++
 arch/arm64/boot/dts/qcom/x1e80100-qcp.dts     |  400 ++
 arch/arm64/boot/dts/qcom/x1e80100.dtsi        | 3509 +++++++++++++++++
 arch/arm64/configs/defconfig                  |    3 +
 7 files changed, 4348 insertions(+)
 create mode 100644 arch/arm64/boot/dts/qcom/x1e80100-crd.dts
 create mode 100644 arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
 create mode 100644 arch/arm64/boot/dts/qcom/x1e80100.dtsi

Comments

Konrad Dybcio Nov. 18, 2023, 1:06 a.m. UTC | #1
On 17.11.2023 12:39, Sibi Sankar wrote:
> From: Rajendra Nayak <quic_rjendra@quicinc.com>
> 
> Add base dtsi and QCP board (Qualcomm Compute Platform) dts file for
> X1E80100 SoC, describing the CPUs, GCC and RPMHCC clock controllers,
> geni UART, interrupt controller, TLMM, reserved memory, interconnects,
> SMMU and LLCC nodes.
> 
> Co-developed-by: Abel Vesa <abel.vesa@linaro.org>
> Signed-off-by: Abel Vesa <abel.vesa@linaro.org>
> Signed-off-by: Rajendra Nayak <quic_rjendra@quicinc.com>
> Co-developed-by: Sibi Sankar <quic_sibis@quicinc.com>
> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
> ---
[...]

> +&tlmm {
> +	gpio-reserved-ranges = <33 3>, <44 4>, /* SPI (TPM) */
Surely SPI doesn't use 7 wires! :D
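For context, gpio-reserved-ranges is a list of <base count> pairs, so
<33 3>, <44 4> reserves seven GPIOs in total (hence the joke). A small
illustrative sketch of the semantics, not part of the series:

```python
def expand_reserved_ranges(pairs):
    """Return the set of reserved GPIO numbers for a list of
    (base, count) pairs, as used by gpio-reserved-ranges."""
    reserved = set()
    for base, count in pairs:
        # each pair reserves `count` consecutive GPIOs starting at `base`
        reserved.update(range(base, base + count))
    return reserved

# <33 3>, <44 4> from the patch: GPIOs 33-35 and 44-47, 7 lines total
print(sorted(expand_reserved_ranges([(33, 3), (44, 4)])))
# -> [33, 34, 35, 44, 45, 46, 47]
```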

[...]

> +			L2_0: l2-cache-0 {
the cache device is distinguishable by its parent, so "l2-cache" is enough


> +				compatible = "cache";
> +				cache-level = <2>;
> +				cache-unified;
> +			};
> +		};
> +
[...]

> +		idle-states {
> +			entry-method = "psci";
> +
> +			CLUSTER_C4: cpu-sleep-0 {
> +				compatible = "arm,idle-state";
> +				idle-state-name = "ret";
> +				arm,psci-suspend-param = <0x00000004>;
These suspend parameters look funky.. is this just a PSCI sleep
implementation that strays far away from Arm's suggested guidelines?

[...]


> +		CPU_PD11: power-domain-cpu11 {
> +			#power-domain-cells = <0>;
> +			power-domains = <&CLUSTER_PD>;
> +		};
> +
> +		CLUSTER_PD: power-domain-cpu-cluster {
> +			#power-domain-cells = <0>;
> +			domain-idle-states = <&CLUSTER_CL4>, <&CLUSTER_CL5>;
> +		};
So, can the 3 clusters not shut down their L2 and PLLs (if separate?)
on their own?

> +	};
> +
> +	reserved-memory {
> +		#address-cells = <2>;
> +		#size-cells = <2>;
> +		ranges;
> +
> +		gunyah_hyp_mem: gunyah-hyp@80000000 {
> +			reg = <0x0 0x80000000 0x0 0x800000>;
> +			no-map;
> +		};
> +
> +		hyp_elf_package_mem: hyp-elf_package@80800000 {
no underscores in node names, use hyphens

The rest looks OK I think

Konrad
Sibi Sankar Nov. 29, 2023, 9:25 a.m. UTC | #2
On 11/18/23 06:36, Konrad Dybcio wrote:
> On 17.11.2023 12:39, Sibi Sankar wrote:
>> From: Rajendra Nayak <quic_rjendra@quicinc.com>
>>
>> Add base dtsi and QCP board (Qualcomm Compute Platform) dts file for
>> X1E80100 SoC, describing the CPUs, GCC and RPMHCC clock controllers,
>> geni UART, interrupt controller, TLMM, reserved memory, interconnects,
>> SMMU and LLCC nodes.
>>
>> Co-developed-by: Abel Vesa <abel.vesa@linaro.org>
>> Signed-off-by: Abel Vesa <abel.vesa@linaro.org>
>> Signed-off-by: Rajendra Nayak <quic_rjendra@quicinc.com>
>> Co-developed-by: Sibi Sankar <quic_sibis@quicinc.com>
>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
>> ---
> [...]
> 
>> +&tlmm {
>> +	gpio-reserved-ranges = <33 3>, <44 4>, /* SPI (TPM) */
> Surely SPI doesn't use 7 wires! :D

yeah, they are just secure reserved unused gpios.

> 
> [...]
> 
>> +			L2_0: l2-cache-0 {
> the cache device is distinguishable by its parent, so "l2-cache" is enough

thanks will fix ^^

> 
> 
>> +				compatible = "cache";
>> +				cache-level = <2>;
>> +				cache-unified;
>> +			};
>> +		};
>> +
> [...]
> 
>> +		idle-states {
>> +			entry-method = "psci";
>> +
>> +			CLUSTER_C4: cpu-sleep-0 {
>> +				compatible = "arm,idle-state";
>> +				idle-state-name = "ret";
>> +				arm,psci-suspend-param = <0x00000004>;
> These suspend parameters look funky.. is this just a PSCI sleep
> implementation that strays far away from Arm's suggested guidelines?

not really! it's just that the 30th bit is set according to the spec,
i.e. it's marked as a retention state.
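For reference, PSCI 1.0's extended StateID format (Arm DEN0022) carries the
StateType in bit 30 (0 = standby/retention, 1 = powerdown), with the
platform-defined StateID in bits [27:0]. A decoding sketch, assuming the
platform uses this format:

```python
def decode_ext_power_state(param):
    """Decode an arm,psci-suspend-param value under the PSCI 1.0
    extended StateID format (bit 30 = StateType, [27:0] = StateID)."""
    state_type = (param >> 30) & 0x1
    state_id = param & 0x0fffffff
    return {
        "type": "powerdown" if state_type else "retention",
        "state_id": state_id,
    }

# The CLUSTER_C4 parameter from the patch:
print(decode_ext_power_state(0x00000004))
# -> {'type': 'retention', 'state_id': 4}
```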

> 
> [...]
> 
> 
>> +		CPU_PD11: power-domain-cpu11 {
>> +			#power-domain-cells = <0>;
>> +			power-domains = <&CLUSTER_PD>;
>> +		};
>> +
>> +		CLUSTER_PD: power-domain-cpu-cluster {
>> +			#power-domain-cells = <0>;
>> +			domain-idle-states = <&CLUSTER_CL4>, <&CLUSTER_CL5>;
>> +		};
> So, can the 3 clusters not shut down their L2 and PLLs (if separate?)
> on their own?

on CL5 the clusters are expected to shut down their L2 and PLL on their
own.

> 
>> +	};
>> +
>> +	reserved-memory {
>> +		#address-cells = <2>;
>> +		#size-cells = <2>;
>> +		ranges;
>> +
>> +		gunyah_hyp_mem: gunyah-hyp@80000000 {
>> +			reg = <0x0 0x80000000 0x0 0x800000>;
>> +			no-map;
>> +		};
>> +
>> +		hyp_elf_package_mem: hyp-elf_package@80800000 {
> no underscores in node names, use hyphens

ack

-Sibi
> 
> The rest looks OK I think
> 
> Konrad
Konrad Dybcio Nov. 29, 2023, 12:54 p.m. UTC | #3
On 29.11.2023 10:25, Sibi Sankar wrote:
> 
> 
> On 11/18/23 06:36, Konrad Dybcio wrote:
>> On 17.11.2023 12:39, Sibi Sankar wrote:
>>> From: Rajendra Nayak <quic_rjendra@quicinc.com>
>>>
>>> Add base dtsi and QCP board (Qualcomm Compute Platform) dts file for
>>> X1E80100 SoC, describing the CPUs, GCC and RPMHCC clock controllers,
>>> geni UART, interrupt controller, TLMM, reserved memory, interconnects,
>>> SMMU and LLCC nodes.
>>>
>>> Co-developed-by: Abel Vesa <abel.vesa@linaro.org>
>>> Signed-off-by: Abel Vesa <abel.vesa@linaro.org>
>>> Signed-off-by: Rajendra Nayak <quic_rjendra@quicinc.com>
>>> Co-developed-by: Sibi Sankar <quic_sibis@quicinc.com>
>>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
>>> ---
[...]


>>> +        idle-states {
>>> +            entry-method = "psci";
>>> +
>>> +            CLUSTER_C4: cpu-sleep-0 {
>>> +                compatible = "arm,idle-state";
>>> +                idle-state-name = "ret";
>>> +                arm,psci-suspend-param = <0x00000004>;
>> These suspend parameters look funky.. is this just a PSCI sleep
>> implementation that strays far away from Arm's suggested guidelines?
> 
> not really! it's just that 30th bit is set according to spec i.e
> it's marked as a retention state.
So, is there no state where the cores actually power down? Or is it
not described yet?

FWIW by "power down" I mean it in the sense that Arm DEN0022D does,
so "In this state the core is powered off. Software on the device
needs to save all core state, so that it can be preserved over
the powerdown."

> 
>>
>> [...]
>>
>>
>>> +        CPU_PD11: power-domain-cpu11 {
>>> +            #power-domain-cells = <0>;
>>> +            power-domains = <&CLUSTER_PD>;
>>> +        };
>>> +
>>> +        CLUSTER_PD: power-domain-cpu-cluster {
>>> +            #power-domain-cells = <0>;
>>> +            domain-idle-states = <&CLUSTER_CL4>, <&CLUSTER_CL5>;
>>> +        };
>> So, can the 3 clusters not shut down their L2 and PLLs (if separate?)
>> on their own?
> 
> on CL5 the clusters are expected to shutdown their l2 and PLL on their
> own.
Then I think this won't happen with this description

every cpu has a genpd tree like this:

cpu_n
 |_CPU_PDn
    |_CLUSTER_PD

and CLUSTER_PD has two idle states: CLUSTER_CL4 and CLUSTER_CL5

which IIUC means that neither cluster idle state will be reached
unless all children of CLUSTER_PD (so, all CPUs) go down that low

This is "fine" on e.g. sc8280 where both CPU clusters are part of
the same Arm DynamIQ cluster (which is considered one cluster as
far as MPIDR_EL1 goes) (though perhaps that's misleading and with
the qcom plumbing they perhaps could actually be collapsed separately)
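The constraint can be modelled in a few lines (illustrative only; the 12-CPU,
3x4-cluster topology is assumed from the discussion, and this is not kernel
genpd code):

```python
def cluster_idle_reachable(idle_cpus, cluster_members):
    """A cluster power domain may enter one of its domain-idle-states
    only once every CPU parented to it is idle."""
    return all(cpu in idle_cpus for cpu in cluster_members)

all_cpus = list(range(12))   # one shared CLUSTER_PD spans all 12 CPUs
idle = set(range(4))         # only cluster 0's four CPUs are idle

# Single shared CLUSTER_PD: the cluster state is never reached here.
print(cluster_idle_reachable(idle, all_cpus))           # -> False
# Per-cluster domains: cluster 0 can collapse independently.
print(cluster_idle_reachable(idle, list(range(0, 4))))  # -> True
```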

Konrad
Sibi Sankar Nov. 29, 2023, 3:46 p.m. UTC | #4
On 11/29/23 18:24, Konrad Dybcio wrote:
> On 29.11.2023 10:25, Sibi Sankar wrote:
>>
>>
>> On 11/18/23 06:36, Konrad Dybcio wrote:
>>> On 17.11.2023 12:39, Sibi Sankar wrote:
>>>> From: Rajendra Nayak <quic_rjendra@quicinc.com>
>>>>
>>>> Add base dtsi and QCP board (Qualcomm Compute Platform) dts file for
>>>> X1E80100 SoC, describing the CPUs, GCC and RPMHCC clock controllers,
>>>> geni UART, interrupt controller, TLMM, reserved memory, interconnects,
>>>> SMMU and LLCC nodes.
>>>>
>>>> Co-developed-by: Abel Vesa <abel.vesa@linaro.org>
>>>> Signed-off-by: Abel Vesa <abel.vesa@linaro.org>
>>>> Signed-off-by: Rajendra Nayak <quic_rjendra@quicinc.com>
>>>> Co-developed-by: Sibi Sankar <quic_sibis@quicinc.com>
>>>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
>>>> ---
> [...]
> 
> 
>>>> +        idle-states {
>>>> +            entry-method = "psci";
>>>> +
>>>> +            CLUSTER_C4: cpu-sleep-0 {
>>>> +                compatible = "arm,idle-state";
>>>> +                idle-state-name = "ret";
>>>> +                arm,psci-suspend-param = <0x00000004>;
>>> These suspend parameters look funky.. is this just a PSCI sleep
>>> implementation that strays far away from Arm's suggested guidelines?
>>
>> not really! it's just that 30th bit is set according to spec i.e
>> it's marked as a retention state.
> So, is there no state where the cores actually power down? Or is it
> not described yet?
> 
> FWIW by "power down" I mean it in the sense that Arm DEN0022D does,
> so "In this state the core is powered off. Software on the device
> needs to save all core state, so that it can be preserved over
> the powerdown."

I was told we mark it explicitly as retention because hw is expected
to handle powerdown and we don't want sw to also do the same.

> 
>>
>>>
>>> [...]
>>>
>>>
>>>> +        CPU_PD11: power-domain-cpu11 {
>>>> +            #power-domain-cells = <0>;
>>>> +            power-domains = <&CLUSTER_PD>;
>>>> +        };
>>>> +
>>>> +        CLUSTER_PD: power-domain-cpu-cluster {
>>>> +            #power-domain-cells = <0>;
>>>> +            domain-idle-states = <&CLUSTER_CL4>, <&CLUSTER_CL5>;
>>>> +        };
>>> So, can the 3 clusters not shut down their L2 and PLLs (if separate?)
>>> on their own?
>>
>> on CL5 the clusters are expected to shutdown their l2 and PLL on their
>> own.
> Then I think this won't happen with this description
> 
> every cpu has a genpd tree like this:
> 
> cpu_n
>   |_CPU_PDn
>      |_CLUSTER_PD
> 
> and CLUSTER_PD has two idle states: CLUSTER_CL4 and CLUSTER_CL5
> 
> which IIUC means that neither cluster idle state will be reached
> unless all children of CLUSTER_PD (so, all CPUs) go down that low
> 
> This is "fine" on e.g. sc8280 where both CPU clusters are part of
> the same Arm DynamIQ cluster (which is considered one cluster as
> far as MPIDR_EL1 goes) (though perhaps that's misleading and with
> the qcom plumbing they perhaps could actually be collapsed separately)

We did verify that the sleep stats increase independently for each
cluster, so its behavior is unlike what you explained above. I'll
re-spin this series again in the meantime and you can take another
stab at it there.

-Sibi

> 
> Konrad
Konrad Dybcio Nov. 29, 2023, 10:29 p.m. UTC | #5
On 29.11.2023 16:46, Sibi Sankar wrote:
> 
> 
> On 11/29/23 18:24, Konrad Dybcio wrote:
>> On 29.11.2023 10:25, Sibi Sankar wrote:
>>>
>>>
>>> On 11/18/23 06:36, Konrad Dybcio wrote:
>>>> On 17.11.2023 12:39, Sibi Sankar wrote:
>>>>> From: Rajendra Nayak <quic_rjendra@quicinc.com>
>>>>>
>>>>> Add base dtsi and QCP board (Qualcomm Compute Platform) dts file for
>>>>> X1E80100 SoC, describing the CPUs, GCC and RPMHCC clock controllers,
>>>>> geni UART, interrupt controller, TLMM, reserved memory, interconnects,
>>>>> SMMU and LLCC nodes.
>>>>>
>>>>> Co-developed-by: Abel Vesa <abel.vesa@linaro.org>
>>>>> Signed-off-by: Abel Vesa <abel.vesa@linaro.org>
>>>>> Signed-off-by: Rajendra Nayak <quic_rjendra@quicinc.com>
>>>>> Co-developed-by: Sibi Sankar <quic_sibis@quicinc.com>
>>>>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
>>>>> ---
>> [...]
>>
>>
>>>>> +        idle-states {
>>>>> +            entry-method = "psci";
>>>>> +
>>>>> +            CLUSTER_C4: cpu-sleep-0 {
>>>>> +                compatible = "arm,idle-state";
>>>>> +                idle-state-name = "ret";
>>>>> +                arm,psci-suspend-param = <0x00000004>;
>>>> These suspend parameters look funky.. is this just a PSCI sleep
>>>> implementation that strays far away from Arm's suggested guidelines?
>>>
>>> not really! it's just that 30th bit is set according to spec i.e
>>> it's marked as a retention state.
>> So, is there no state where the cores actually power down? Or is it
>> not described yet?
>>
>> FWIW by "power down" I mean it in the sense that Arm DEN0022D does,
>> so "In this state the core is powered off. Software on the device
>> needs to save all core state, so that it can be preserved over
>> the powerdown."
> 
> I was told we mark it explicitly as retention because hw is expected
> to handle powerdown and we don't want sw to also do the same.
> 
>>
>>>
>>>>
>>>> [...]
>>>>
>>>>
>>>>> +        CPU_PD11: power-domain-cpu11 {
>>>>> +            #power-domain-cells = <0>;
>>>>> +            power-domains = <&CLUSTER_PD>;
>>>>> +        };
>>>>> +
>>>>> +        CLUSTER_PD: power-domain-cpu-cluster {
>>>>> +            #power-domain-cells = <0>;
>>>>> +            domain-idle-states = <&CLUSTER_CL4>, <&CLUSTER_CL5>;
>>>>> +        };
>>>> So, can the 3 clusters not shut down their L2 and PLLs (if separate?)
>>>> on their own?
>>>
>>> on CL5 the clusters are expected to shutdown their l2 and PLL on their
>>> own.
>> Then I think this won't happen with this description
>>
>> every cpu has a genpd tree like this:
>>
>> cpu_n
>>   |_CPU_PDn
>>      |_CLUSTER_PD
>>
>> and CLUSTER_PD has two idle states: CLUSTER_CL4 and CLUSTER_CL5
>>
>> which IIUC means that neither cluster idle state will be reached
>> unless all children of CLUSTER_PD (so, all CPUs) go down that low
>>
>> This is "fine" on e.g. sc8280 where both CPU clusters are part of
>> the same Arm DynamIQ cluster (which is considered one cluster as
>> far as MPIDR_EL1 goes) (though perhaps that's misleading and with
>> the qcom plumbing they perhaps could actually be collapsed separately)
> 
> We did verify that the sleep stats increase independently for each
> cluster, so its behavior is unlike what you explained above. I'll
> re-spin this series again in the meantime and you can take another
> stab at it there.
So are you saying that you checked the RPMh sleep stats and each cluster
managed to sleep on its own, or did you do something different?

Were the sleep durations far apart? What's the order of magnitude of that
difference? Are the values reported in RPMh greater than those in
/sys/kernel/debug/pm_genpd/power-domain-cpu-cluster/total_idle_time?

Is there any other (i.e. non-Linux) source of "go to sleep" votes?

Konrad
Sibi Sankar Nov. 30, 2023, 11:23 a.m. UTC | #6
On 11/30/23 03:59, Konrad Dybcio wrote:
> On 29.11.2023 16:46, Sibi Sankar wrote:
>>
>>
>> On 11/29/23 18:24, Konrad Dybcio wrote:
>>> On 29.11.2023 10:25, Sibi Sankar wrote:
>>>>
>>>>
>>>> On 11/18/23 06:36, Konrad Dybcio wrote:
>>>>> On 17.11.2023 12:39, Sibi Sankar wrote:
>>>>>> From: Rajendra Nayak <quic_rjendra@quicinc.com>
>>>>>>
>>>>>> Add base dtsi and QCP board (Qualcomm Compute Platform) dts file for
>>>>>> X1E80100 SoC, describing the CPUs, GCC and RPMHCC clock controllers,
>>>>>> geni UART, interrupt controller, TLMM, reserved memory, interconnects,
>>>>>> SMMU and LLCC nodes.
>>>>>>
>>>>>> Co-developed-by: Abel Vesa <abel.vesa@linaro.org>
>>>>>> Signed-off-by: Abel Vesa <abel.vesa@linaro.org>
>>>>>> Signed-off-by: Rajendra Nayak <quic_rjendra@quicinc.com>
>>>>>> Co-developed-by: Sibi Sankar <quic_sibis@quicinc.com>
>>>>>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
>>>>>> ---
>>> [...]
>>>
>>>
>>>>>> +        idle-states {
>>>>>> +            entry-method = "psci";
>>>>>> +
>>>>>> +            CLUSTER_C4: cpu-sleep-0 {
>>>>>> +                compatible = "arm,idle-state";
>>>>>> +                idle-state-name = "ret";
>>>>>> +                arm,psci-suspend-param = <0x00000004>;
>>>>> These suspend parameters look funky.. is this just a PSCI sleep
>>>>> implementation that strays far away from Arm's suggested guidelines?
>>>>
>>>> not really! it's just that 30th bit is set according to spec i.e
>>>> it's marked as a retention state.
>>> So, is there no state where the cores actually power down? Or is it
>>> not described yet?
>>>
>>> FWIW by "power down" I mean it in the sense that Arm DEN0022D does,
>>> so "In this state the core is powered off. Software on the device
>>> needs to save all core state, so that it can be preserved over
>>> the powerdown."
>>
>> I was told we mark it explicitly as retention because hw is expected
>> to handle powerdown and we don't want sw to also do the same.
>>
>>>
>>>>
>>>>>
>>>>> [...]
>>>>>
>>>>>
>>>>>> +        CPU_PD11: power-domain-cpu11 {
>>>>>> +            #power-domain-cells = <0>;
>>>>>> +            power-domains = <&CLUSTER_PD>;
>>>>>> +        };
>>>>>> +
>>>>>> +        CLUSTER_PD: power-domain-cpu-cluster {
>>>>>> +            #power-domain-cells = <0>;
>>>>>> +            domain-idle-states = <&CLUSTER_CL4>, <&CLUSTER_CL5>;
>>>>>> +        };
>>>>> So, can the 3 clusters not shut down their L2 and PLLs (if separate?)
>>>>> on their own?
>>>>
>>>> on CL5 the clusters are expected to shutdown their l2 and PLL on their
>>>> own.
>>> Then I think this won't happen with this description
>>>
>>> every cpu has a genpd tree like this:
>>>
>>> cpu_n
>>>    |_CPU_PDn
>>>       |_CLUSTER_PD
>>>
>>> and CLUSTER_PD has two idle states: CLUSTER_CL4 and CLUSTER_CL5
>>>
>>> which IIUC means that neither cluster idle state will be reached
>>> unless all children of CLUSTER_PD (so, all CPUs) go down that low
>>>
>>> This is "fine" on e.g. sc8280 where both CPU clusters are part of
>>> the same Arm DynamIQ cluster (which is considered one cluster as
>>> far as MPIDR_EL1 goes) (though perhaps that's misleading and with
>>> the qcom plumbing they perhaps could actually be collapsed separately)
>>
>> We did verify that the sleep stats increase independently for each
>> cluster, so its behavior is unlike what you explained above. I'll
>> re-spin this series again in the meantime and you can take another
>> stab at it there.
> So are you saying that you checked the RPMh sleep stats and each cluster
> managed to sleep on its own, or did you do something different?

We had used some JTAG scripts, but what you said is correct: there
definitely needs to be a separate cluster_pd defined for each cluster.
Will fix this in the next re-spin.

-Sibi

> 
> Were the sleep durations far apart? What's the order of magnitude of that
> difference? Are the values reported in RPMh greater than those in
> /sys/kernel/debug/pm_genpd/power-domain-cpu-cluster/total_idle_time?
> 
> Is there any other (i.e. non-Linux) source of "go to sleep" votes?
> 
> Konrad