
[v5,02/16] dt/bindings: Update binding for PM domain idle states

Message ID 1472242678-33700-3-git-send-email-lina.iyer@linaro.org (mailing list archive)
State New, archived

Commit Message

Lina Iyer Aug. 26, 2016, 8:17 p.m. UTC
From: Axel Haslam <ahaslam+renesas@baylibre.com>

Update DT bindings to describe idle states of PM domains.

Cc: <devicetree@vger.kernel.org>
Signed-off-by: Marc Titinger <mtitinger+renesas@baylibre.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
[Lina: Added state properties, removed state names, wakeup-latency,
added of_pm_genpd_init() API, pruned commit text]
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
[Ulf: Moved around code to make it compile properly, rebased on top of multiple state support]
---
 .../devicetree/bindings/power/power_domain.txt     | 57 ++++++++++++++++++++++
 1 file changed, 57 insertions(+)

Comments

Sudeep Holla Sept. 2, 2016, 2:21 p.m. UTC | #1
On 26/08/16 21:17, Lina Iyer wrote:
> From: Axel Haslam <ahaslam+renesas@baylibre.com>
>
> Update DT bindings to describe idle states of PM domains.
>
> Cc: <devicetree@vger.kernel.org>
> Signed-off-by: Marc Titinger <mtitinger+renesas@baylibre.com>
> Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
> [Lina: Added state properties, removed state names, wakeup-latency,
> added of_pm_genpd_init() API, pruned commit text]
> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
> [Ulf: Moved around code to make it compile properly, rebased on top of multiple state support]
> ---
>  .../devicetree/bindings/power/power_domain.txt     | 57 ++++++++++++++++++++++
>  1 file changed, 57 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/power/power_domain.txt b/Documentation/devicetree/bindings/power/power_domain.txt
> index 025b5e7..4960486 100644
> --- a/Documentation/devicetree/bindings/power/power_domain.txt
> +++ b/Documentation/devicetree/bindings/power/power_domain.txt
> @@ -29,6 +29,10 @@ Optional properties:
>     specified by this binding. More details about power domain specifier are
>     available in the next section.
>
> +- domain-idle-states : A phandle of an idle-state that shall be soaked into a
> +                generic domain power state. The idle state definitions are
> +                compatible with arm,idle-state specified in [1].
> +
>  Example:
>
>  	power: power-controller@12340000 {
> @@ -59,6 +63,57 @@ The nodes above define two power controllers: 'parent' and 'child'.
>  Domains created by the 'child' power controller are subdomains of '0' power
>  domain provided by the 'parent' power controller.
>
> +Example 3: ARM v7 style CPU PM domains (Linux domain controller)
> +
> +	cpus {
> +		#address-cells = <1>;
> +		#size-cells = <0>;
> +
> +		CPU0: cpu@0 {
> +			device_type = "cpu";
> +			compatible = "arm,cortex-a7", "arm,armv7";
> +			reg = <0x0>;
> +			power-domains = <&a7_pd>;
> +		};
> +
> +		CPU1: cpu@1 {
> +			device_type = "cpu";
> +			compatible = "arm,cortex-a15", "arm,armv7";
> +			reg = <0x1>;
> +			power-domains = <&a15_pd>;
> +		};
> +	};
> +
> +	pm-domains {
> +		a15_pd: a15_pd {
> +			/* will have A15 platform ARM_PD_METHOD_OF_DECLARE*/
> +			compatible = "arm,cortex-a15";
> +			#power-domain-cells = <0>;
> +			domain-idle-states = <&CLUSTER_SLEEP_0>;
> +		};
> +
> +		a7_pd: a7_pd {
> +			/* will have a A7 platform ARM_PD_METHOD_OF_DECLARE*/
> +			compatible = "arm,cortex-a7";
> +			#power-domain-cells = <0>;
> +			domain-idle-states = <&CLUSTER_SLEEP_0>, <&CLUSTER_SLEEP_1>;
> +		};
> +
> +		CLUSTER_SLEEP_0: state0 {
> +			compatible = "arm,idle-state";
> +			entry-latency-us = <1000>;
> +			exit-latency-us = <2000>;
> +			min-residency-us = <10000>;
> +		};
> +
> +		CLUSTER_SLEEP_1: state1 {
> +			compatible = "arm,idle-state";
> +			entry-latency-us = <5000>;
> +			exit-latency-us = <5000>;
> +			min-residency-us = <100000>;
> +		};
> +	};
> +

This version is *not very descriptive*. Also, the discussion we had on
the v3 version has not yet concluded, IMO. So can I take it that we
agreed on what was proposed there, or not?

We could have a better example above, *really* based on the discussions
we have had so far. This example always makes me think it's well crafted
to avoid any sort of discussion. We need to consider different
use-cases, e.g. what about CPU-level states?

IMO, we need to discuss this DT binding in detail and arrive at some
conclusion before you take all the trouble to respin the series.
Also, it's better to keep the DT binding separate until we have some
conclusion, instead of posting the implementation with each version.
That's just my opinion (I would not bother with the implementation
until I know the binding will be accepted and I can peek into the code;
others may differ).
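
For illustration, the CPU-level states asked about above would, under
the binding as posted, still be described with the existing
cpu-idle-states property alongside the new domain-idle-states property.
A hypothetical sketch (node names and latency values are placeholders,
not taken from the patch):

	CPU0: cpu@0 {
		device_type = "cpu";
		compatible = "arm,cortex-a7", "arm,armv7";
		reg = <0x0>;
		/* CPU-level state, handled by cpuidle as before */
		cpu-idle-states = <&CPU_SLEEP_0>;
		/* cluster-level states come via the domain's domain-idle-states */
		power-domains = <&a7_pd>;
	};

	CPU_SLEEP_0: cpu-sleep-0 {
		compatible = "arm,idle-state";
		entry-latency-us = <100>;
		exit-latency-us = <250>;
		min-residency-us = <1000>;
	};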
Lina Iyer Sept. 2, 2016, 8:16 p.m. UTC | #2
On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
>
>
>On 26/08/16 21:17, Lina Iyer wrote:
>>From: Axel Haslam <ahaslam+renesas@baylibre.com>
>>
>>Update DT bindings to describe idle states of PM domains.
>>
>>Cc: <devicetree@vger.kernel.org>
>>Signed-off-by: Marc Titinger <mtitinger+renesas@baylibre.com>
>>Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
>>[Lina: Added state properties, removed state names, wakeup-latency,
>>added of_pm_genpd_init() API, pruned commit text]
>>Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
>>[Ulf: Moved around code to make it compile properly, rebased on top of multiple state support]
>>---
>> .../devicetree/bindings/power/power_domain.txt     | 57 ++++++++++++++++++++++
>> 1 file changed, 57 insertions(+)
>>
>>diff --git a/Documentation/devicetree/bindings/power/power_domain.txt b/Documentation/devicetree/bindings/power/power_domain.txt
>>index 025b5e7..4960486 100644
>>--- a/Documentation/devicetree/bindings/power/power_domain.txt
>>+++ b/Documentation/devicetree/bindings/power/power_domain.txt
>>@@ -29,6 +29,10 @@ Optional properties:
>>    specified by this binding. More details about power domain specifier are
>>    available in the next section.
>>
>>+- domain-idle-states : A phandle of an idle-state that shall be soaked into a
>>+                generic domain power state. The idle state definitions are
>>+                compatible with arm,idle-state specified in [1].
>>+
>> Example:
>>
>> 	power: power-controller@12340000 {
>>@@ -59,6 +63,57 @@ The nodes above define two power controllers: 'parent' and 'child'.
>> Domains created by the 'child' power controller are subdomains of '0' power
>> domain provided by the 'parent' power controller.
>>
>>+Example 3: ARM v7 style CPU PM domains (Linux domain controller)
>>+
>>+	cpus {
>>+		#address-cells = <1>;
>>+		#size-cells = <0>;
>>+
>>+		CPU0: cpu@0 {
>>+			device_type = "cpu";
>>+			compatible = "arm,cortex-a7", "arm,armv7";
>>+			reg = <0x0>;
>>+			power-domains = <&a7_pd>;
>>+		};
>>+
>>+		CPU1: cpu@1 {
>>+			device_type = "cpu";
>>+			compatible = "arm,cortex-a15", "arm,armv7";
>>+			reg = <0x1>;
>>+			power-domains = <&a15_pd>;
>>+		};
>>+	};
>>+
>>+	pm-domains {
>>+		a15_pd: a15_pd {
>>+			/* will have A15 platform ARM_PD_METHOD_OF_DECLARE*/
>>+			compatible = "arm,cortex-a15";
>>+			#power-domain-cells = <0>;
>>+			domain-idle-states = <&CLUSTER_SLEEP_0>;
>>+		};
>>+
>>+		a7_pd: a7_pd {
>>+			/* will have a A7 platform ARM_PD_METHOD_OF_DECLARE*/
>>+			compatible = "arm,cortex-a7";
>>+			#power-domain-cells = <0>;
>>+			domain-idle-states = <&CLUSTER_SLEEP_0>, <&CLUSTER_SLEEP_1>;
>>+		};
>>+
>>+		CLUSTER_SLEEP_0: state0 {
>>+			compatible = "arm,idle-state";
>>+			entry-latency-us = <1000>;
>>+			exit-latency-us = <2000>;
>>+			min-residency-us = <10000>;
>>+		};
>>+
>>+		CLUSTER_SLEEP_1: state1 {
>>+			compatible = "arm,idle-state";
>>+			entry-latency-us = <5000>;
>>+			exit-latency-us = <5000>;
>>+			min-residency-us = <100000>;
>>+		};
>>+	};
>>+
>
>This version is *not very descriptive*. Also the discussion we had on v3
>version has not yet concluded IMO. So can I take that we agreed on what
>was proposed there or not ?
>
Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
for the new changes in the following patches. Let me know if that makes
sense.

Thanks,
Lina

>We could have better example above *really* based on the discussions we
>had so far. This example always makes me think it's well crafted to
>avoid any sort of discussions. We need to consider different use-cases
>e.g. what about CPU level states ?
>
>IMO, we need to discuss this DT binding in detail and arrive at some
>conclusion before you take all the troubles to respin the series.
>Also it's better to keep the DT binding separate until we have some
>conclusion instead of posting the implementation for each version.
>That's just my opinion(I would be least bothered about implementation
>until I know it will be accepted before I can peek into the code, others
>may differ.
>
>-- 
>Regards,
>Sudeep
Brendan Jackman Sept. 12, 2016, 3:19 p.m. UTC | #3
Hi Lina,

Sorry for the delay here; Sudeep and I were both on holiday last week.

On Fri, Sep 02 2016 at 21:16, Lina Iyer wrote:
> On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
[...]
>>This version is *not very descriptive*. Also the discussion we had on v3
>>version has not yet concluded IMO. So can I take that we agreed on what
>>was proposed there or not ?
>>
> Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
> for the new changes in the following patches. Let me know if that makes
> sense.

The not-yet-concluded discussion Sudeep is referring to is at [1].

In that thread we initially proposed the idea of, instead of splitting
state phandles between cpu-idle-states and domain-idle-states, putting
CPUs in their own domains and using domain-idle-states for _all_
phandles, deprecating cpu-idle-states. I've brought this up in other
threads [2] but discussion keeps petering out, and neither this example
nor the 8916 dtsi in this patch series reflect the idea.

It would be great if we could go back to the thread at [1] where Sudeep
has posted examples and come to a clear consensus on the binding design
before reviewing implementation patches. Ideally with input from Ulf,
Rob and Kevin.

[1] https://patchwork.kernel.org/patch/9264507
[2] http://www.spinics.net/lists/devicetree/msg141024.html
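
To make the proposal concrete: each CPU would sit in its own domain,
cpu-idle-states would be deprecated, and every state phandle would go
through domain-idle-states. A hypothetical sketch (all names and values
are illustrative, not taken from this series):

	CPU0: cpu@0 {
		device_type = "cpu";
		compatible = "arm,cortex-a7", "arm,armv7";
		reg = <0x0>;
		power-domains = <&cpu0_pd>;
	};

	cpu0_pd: cpu0-pd {
		#power-domain-cells = <0>;
		/* per-CPU domain; CPU-level states replace cpu-idle-states */
		domain-idle-states = <&CPU_SLEEP_0>;
		/* subdomain of the cluster domain */
		power-domains = <&cluster_pd>;
	};

	cluster_pd: cluster-pd {
		#power-domain-cells = <0>;
		domain-idle-states = <&CLUSTER_SLEEP_0>;
	};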
>
> Thanks,
> Lina
>
>>We could have better example above *really* based on the discussions we
>>had so far. This example always makes me think it's well crafted to
>>avoid any sort of discussions. We need to consider different use-cases
>>e.g. what about CPU level states ?
>>
>>IMO, we need to discuss this DT binding in detail and arrive at some
>>conclusion before you take all the troubles to respin the series.
>>Also it's better to keep the DT binding separate until we have some
>>conclusion instead of posting the implementation for each version.
>>That's just my opinion(I would be least bothered about implementation
>>until I know it will be accepted before I can peek into the code, others
>>may differ.
>>
>>--
>>Regards,
>>Sudeep

Cheers,
Brendan
Lina Iyer Sept. 12, 2016, 4:16 p.m. UTC | #4
On Mon, Sep 12 2016 at 09:19 -0600, Brendan Jackman wrote:
>
>Hi Lina,
>
>Sorry for the delay here, Sudeep and I were both been on holiday last week.
>
>On Fri, Sep 02 2016 at 21:16, Lina Iyer wrote:
>> On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
>[...]
>>>This version is *not very descriptive*. Also the discussion we had on v3
>>>version has not yet concluded IMO. So can I take that we agreed on what
>>>was proposed there or not ?
>>>
>> Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
>> for the new changes in the following patches. Let me know if that makes
>> sense.
>
>The not-yet-concluded discussion Sudeep is referring to is at [1].
>
>In that thread we initially proposed the idea of, instead of splitting
>state phandles between cpu-idle-states and domain-idle-states, putting
>CPUs in their own domains and using domain-idle-states for _all_
>phandles, deprecating cpu-idle-states. I've brought this up in other
>threads [2] but discussion keeps petering out, and neither this example
>nor the 8916 dtsi in this patch series reflect the idea.
>
Brendan, while your idea is good and will work for CPUs, I do not expect
other domains, and possibly CPU domains on some architectures, to follow
this model. There is nothing that prevents you from doing this today:
you can specify domains around CPUs in your devicetree and CPU PM will
handle the hierarchy. I don't think it's fair to force it on all SoCs
using CPU domains. This patchset does not restrict you from organizing
the idle states the way you want. This revision of the series clubs
CPU and domain idle states under the idle-states umbrella, so part of
your requirement is also satisfied.

You can follow up the series with your new additions; I don't see a
conflict with this change.

Thanks,
Lina


>It would be great if we could go back to the thread at [1] where Sudeep
>has posted examples and come to a clear consensus on the binding design
>before reviewing implementation patches. Ideally with input from Ulf,
>Rob and Kevin.
>
>[1] https://patchwork.kernel.org/patch/9264507
>[2] http://www.spinics.net/lists/devicetree/msg141024.html
>>
>> Thanks,
>> Lina
>>
>>>We could have better example above *really* based on the discussions we
>>>had so far. This example always makes me think it's well crafted to
>>>avoid any sort of discussions. We need to consider different use-cases
>>>e.g. what about CPU level states ?
>>>
>>>IMO, we need to discuss this DT binding in detail and arrive at some
>>>conclusion before you take all the troubles to respin the series.
>>>Also it's better to keep the DT binding separate until we have some
>>>conclusion instead of posting the implementation for each version.
>>>That's just my opinion(I would be least bothered about implementation
>>>until I know it will be accepted before I can peek into the code, others
>>>may differ.
>>>
>>>--
>>>Regards,
>>>Sudeep
>
>Cheers,
>Brendan
Sudeep Holla Sept. 12, 2016, 5:09 p.m. UTC | #5
On 12/09/16 17:16, Lina Iyer wrote:
> On Mon, Sep 12 2016 at 09:19 -0600, Brendan Jackman wrote:
>>
>> Hi Lina,
>>
>> Sorry for the delay here, Sudeep and I were both been on holiday last
>> week.
>>
>> On Fri, Sep 02 2016 at 21:16, Lina Iyer wrote:
>>> On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
>> [...]
>>>> This version is *not very descriptive*. Also the discussion we had
>>>> on v3
>>>> version has not yet concluded IMO. So can I take that we agreed on what
>>>> was proposed there or not ?
>>>>
>>> Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
>>> for the new changes in the following patches. Let me know if that makes
>>> sense.

Please add all possible use-cases to the bindings. Though one can refer
to the usage examples, they might not cover all usage descriptions. It
helps prevent people from defining their own bindings when they don't
see examples. Again, DT bindings are like specifications; they should be
descriptive, especially generic ones like this.

>>
>> The not-yet-concluded discussion Sudeep is referring to is at [1].
>>
>> In that thread we initially proposed the idea of, instead of splitting
>> state phandles between cpu-idle-states and domain-idle-states, putting
>> CPUs in their own domains and using domain-idle-states for _all_
>> phandles, deprecating cpu-idle-states. I've brought this up in other
>> threads [2] but discussion keeps petering out, and neither this example
>> nor the 8916 dtsi in this patch series reflect the idea.
>>
> Brendan, while your idea is good and will work for CPUs, I do not expect
> other domains and possibly CPU domains on some architectures to follow
> this model. There is nothing that prevents you from doing this today,
> you can specify domains around CPUs in your devicetree and CPU PM will
> handle the hierarchy. I don't think its fair to force it on all SoCs
> using CPU domains.

I disagree. We are defining DT bindings here, and they *should* be the
same for all SoCs unless there is a compelling reason not to. I am fine
if those reasons are stated and agreed upon.

> This patchset does not restrict you from organizing
> the idle states the way you want it. This revision of the series, clubs
> CPU and domain idle states under idle-states umbrella. So part of your
> requirement is also satisfied.
>

I will look at the DTS changes in the series. But we *must* have more
description with more examples in the binding document.

> You can follow up the series with your new additions, I don't see a
> conflict with this change.
>

If we just need additions, then it should be fine.
Brendan Jackman Sept. 13, 2016, 5:50 p.m. UTC | #6
On Mon, Sep 12 2016 at 18:09, Sudeep Holla wrote:
> On 12/09/16 17:16, Lina Iyer wrote:
>> On Mon, Sep 12 2016 at 09:19 -0600, Brendan Jackman wrote:
>>>
>>> Hi Lina,
>>>
>>> Sorry for the delay here, Sudeep and I were both been on holiday last
>>> week.
>>>
>>> On Fri, Sep 02 2016 at 21:16, Lina Iyer wrote:
>>>> On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
>>> [...]
>>>>> This version is *not very descriptive*. Also the discussion we had
>>>>> on v3
>>>>> version has not yet concluded IMO. So can I take that we agreed on what
>>>>> was proposed there or not ?
>>>>>
>>>> Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
>>>> for the new changes in the following patches. Let me know if that makes
>>>> sense.
>
> Please add all possible use-cases in the bindings. Though one can refer
> the usage examples, it might not cover all usage descriptions. It helps
> preventing people from defining their own when they don't see examples.
> Again DT bindings are like specifications, it should be descriptive
> especially this kind of generic ones.
>
>>>
>>> The not-yet-concluded discussion Sudeep is referring to is at [1].
>>>
>>> In that thread we initially proposed the idea of, instead of splitting
>>> state phandles between cpu-idle-states and domain-idle-states, putting
>>> CPUs in their own domains and using domain-idle-states for _all_
>>> phandles, deprecating cpu-idle-states. I've brought this up in other
>>> threads [2] but discussion keeps petering out, and neither this example
>>> nor the 8916 dtsi in this patch series reflect the idea.
>>>
>> Brendan, while your idea is good and will work for CPUs, I do not expect
>> other domains and possibly CPU domains on some architectures to follow
>> this model. There is nothing that prevents you from doing this today,

As I understand it, your opposition to this approach is this:

There may be devices/CPUs which have idle states that do not constitute
"power off". If we put those devices in their own power domain for the
purpose of putting their (non-power-off) idle state phandles in
domain-idle-states, we are "lying", because no true power domain exists
there.

Am I correct that that's your opposition?

If so, it seems we essentially disagree on the definition of a power
domain, i.e. you define it as a set of devices that are powered on/off
together while I define it as a set of devices whose power states
(including idle states, not just on/off) are tied together. I said
something similar on another thread [1] which died out.

Do you agree that this is basically where we disagree, or am I missing
something else?

[1] http://www.spinics.net/lists/devicetree/msg141050.html

>> you can specify domains around CPUs in your devicetree and CPU PM will
>> handle the hierarchy. I don't think its fair to force it on all SoCs
>> using CPU domains.
>
> I disagree. We are defining DT bindings here and it *should* be same for
> all the SoC unless there is a compelling reason not to. I am fine if
> those reasons are stated and agreed.
>
>> This patchset does not restrict you from organizing
>> the idle states the way you want it. This revision of the series, clubs
>> CPU and domain idle states under idle-states umbrella. So part of your
>> requirement is also satisfied.
>>
>
> I will look at the DTS changes in the series. But we *must* have more
> description with more examples in the binding document.
>
>> You can follow up the series with your new additions, I don't see a
>> conflict with this change.
>>
>
> If we just need additions, then it should be fine.
Lina Iyer Sept. 13, 2016, 7:38 p.m. UTC | #7
On Tue, Sep 13 2016 at 11:50 -0600, Brendan Jackman wrote:
>
>On Mon, Sep 12 2016 at 18:09, Sudeep Holla wrote:
>> On 12/09/16 17:16, Lina Iyer wrote:
>>> On Mon, Sep 12 2016 at 09:19 -0600, Brendan Jackman wrote:
>>>>
>>>> Hi Lina,
>>>>
>>>> Sorry for the delay here, Sudeep and I were both been on holiday last
>>>> week.
>>>>
>>>> On Fri, Sep 02 2016 at 21:16, Lina Iyer wrote:
>>>>> On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
>>>> [...]
>>>>>> This version is *not very descriptive*. Also the discussion we had
>>>>>> on v3
>>>>>> version has not yet concluded IMO. So can I take that we agreed on what
>>>>>> was proposed there or not ?
>>>>>>
>>>>> Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
>>>>> for the new changes in the following patches. Let me know if that makes
>>>>> sense.
>>
>> Please add all possible use-cases in the bindings. Though one can refer
>> the usage examples, it might not cover all usage descriptions. It helps
>> preventing people from defining their own when they don't see examples.
>> Again DT bindings are like specifications, it should be descriptive
>> especially this kind of generic ones.
>>
>>>>
>>>> The not-yet-concluded discussion Sudeep is referring to is at [1].
>>>>
>>>> In that thread we initially proposed the idea of, instead of splitting
>>>> state phandles between cpu-idle-states and domain-idle-states, putting
>>>> CPUs in their own domains and using domain-idle-states for _all_
>>>> phandles, deprecating cpu-idle-states. I've brought this up in other
>>>> threads [2] but discussion keeps petering out, and neither this example
>>>> nor the 8916 dtsi in this patch series reflect the idea.
>>>>
>>> Brendan, while your idea is good and will work for CPUs, I do not expect
>>> other domains and possibly CPU domains on some architectures to follow
>>> this model. There is nothing that prevents you from doing this today,
>
>As I understand it your opposition to this approach is this:
>
>There may be devices/CPUs which have idle states which do not constitute
>"power off". If we put those  devices in their own power domain for the
>purpose of putting their (non-power-off) idle state phandles in
>domain-idle-states, we are "lying" because no true power domain exists
>there.
>
>Am I correct that that's your opposition?
>
>If so, it seems we essentially disagree on the definition of a power
>domain, i.e. you define it as a set of devices that are powered on/off
>together while I define it as a set of devices whose power states
>(including idle states, not just on/off) are tied together. I said
>something similar on another thread [1] which died out.
>
>Do you agree that this is basically where we disagree, or am I missing
>something else?
>
>[2] http://www.spinics.net/lists/devicetree/msg141050.html
>
Yes, you are right: I disagree with the definition of a domain around a
device. However, as long as you don't force SoCs to define devices in
the CPU PM domain to have their own virtual domains, I have no problem.
You are welcome to define it the way you want for Juno or any other
platform; I just don't want that to be forced on and expected of all
SoCs. All I am saying here is that the current implementation would
handle your case as well.

Thanks,
Lina

>>> you can specify domains around CPUs in your devicetree and CPU PM will
>>> handle the hierarchy. I don't think its fair to force it on all SoCs
>>> using CPU domains.
>>
>> I disagree. We are defining DT bindings here and it *should* be same for
>> all the SoC unless there is a compelling reason not to. I am fine if
>> those reasons are stated and agreed.
>>
>>> This patchset does not restrict you from organizing
>>> the idle states the way you want it. This revision of the series, clubs
>>> CPU and domain idle states under idle-states umbrella. So part of your
>>> requirement is also satisfied.
>>>
>>
>> I will look at the DTS changes in the series. But we *must* have more
>> description with more examples in the binding document.
>>
>>> You can follow up the series with your new additions, I don't see a
>>> conflict with this change.
>>>
>>
>> If we just need additions, then it should be fine.
Brendan Jackman Sept. 14, 2016, 10:14 a.m. UTC | #8
On Tue, Sep 13 2016 at 20:38, Lina Iyer wrote:
> On Tue, Sep 13 2016 at 11:50 -0600, Brendan Jackman wrote:
>>
>>On Mon, Sep 12 2016 at 18:09, Sudeep Holla wrote:
>>> On 12/09/16 17:16, Lina Iyer wrote:
>>>> On Mon, Sep 12 2016 at 09:19 -0600, Brendan Jackman wrote:
>>>>>
>>>>> Hi Lina,
>>>>>
>>>>> Sorry for the delay here, Sudeep and I were both been on holiday last
>>>>> week.
>>>>>
>>>>> On Fri, Sep 02 2016 at 21:16, Lina Iyer wrote:
>>>>>> On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
>>>>> [...]
>>>>>>> This version is *not very descriptive*. Also the discussion we had
>>>>>>> on v3
>>>>>>> version has not yet concluded IMO. So can I take that we agreed on what
>>>>>>> was proposed there or not ?
>>>>>>>
>>>>>> Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
>>>>>> for the new changes in the following patches. Let me know if that makes
>>>>>> sense.
>>>
>>> Please add all possible use-cases in the bindings. Though one can refer
>>> the usage examples, it might not cover all usage descriptions. It helps
>>> preventing people from defining their own when they don't see examples.
>>> Again DT bindings are like specifications, it should be descriptive
>>> especially this kind of generic ones.
>>>
>>>>>
>>>>> The not-yet-concluded discussion Sudeep is referring to is at [1].
>>>>>
>>>>> In that thread we initially proposed the idea of, instead of splitting
>>>>> state phandles between cpu-idle-states and domain-idle-states, putting
>>>>> CPUs in their own domains and using domain-idle-states for _all_
>>>>> phandles, deprecating cpu-idle-states. I've brought this up in other
>>>>> threads [2] but discussion keeps petering out, and neither this example
>>>>> nor the 8916 dtsi in this patch series reflect the idea.
>>>>>
>>>> Brendan, while your idea is good and will work for CPUs, I do not expect
>>>> other domains and possibly CPU domains on some architectures to follow
>>>> this model. There is nothing that prevents you from doing this today,
>>
>>As I understand it your opposition to this approach is this:
>>
>>There may be devices/CPUs which have idle states which do not constitute
>>"power off". If we put those  devices in their own power domain for the
>>purpose of putting their (non-power-off) idle state phandles in
>>domain-idle-states, we are "lying" because no true power domain exists
>>there.
>>
>>Am I correct that that's your opposition?
>>
>>If so, it seems we essentially disagree on the definition of a power
>>domain, i.e. you define it as a set of devices that are powered on/off
>>together while I define it as a set of devices whose power states
>>(including idle states, not just on/off) are tied together. I said
>>something similar on another thread [1] which died out.
>>
>>Do you agree that this is basically where we disagree, or am I missing
>>something else?
>>
>>[2] http://www.spinics.net/lists/devicetree/msg141050.html
>>
> Yes, you are right, I disagree with the definition of a domain around a
> device.
OK, great.
> However, as long as you don't force SoC's to define devices in
> the CPU PM domain to have their own virtual domains, I have no problem.
> You are welcome to define it the way you want for Juno or any other
> platform.
I don't think that's true; the bindings have to work the same way for
all platforms. If for Juno we put CPU idle state phandles in a
domain-idle-states property for per-CPU domains then, with the current
implementation, the CPU-level idle states would be duplicated between
cpuidle and the CPU PM domains.
> I don't want that to be the forced and expected out of all
> SoCs. All I am saying here is that the current implementation would
> handle your case as well.

The current implementation certainly does cover the work I want to
do. The suggestion of per-device power domains for devices/CPUs with
their own idle states is simply intended to minimise the binding design,
since we'd no longer need cpu-idle-states or device-idle-states
(the latter was proposed elsewhere).

I am fine with the bindings as they are implemented currently so long
as:

- The binding doc makes clear how idle state phandles should be split
  between cpu-idle-states and domain-idle-states. It should make it
  obvious that no phandle should ever appear in both properties. It
  would even be worth briefly going over the backward-compatibility
  implications (e.g. what happens with old-kernel/new-DT and
  new-kernel/old-DT combos if a platform has OSI and PC support and we
  move cluster-level idle state phandles out of cpu-idle-states and into
  domain-idle-states).

- We have a reason against the definition of power domains as "a set of
  devices bound by a common power (including idle) state", since that
  definition would simplify the bindings. In my view, "nobody thinks
  that's what a power domain is" _is_ a compelling reason, so if others
  on the list get involved I'm convinced. I think I speak for Sudeep
  here too.
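
The split asked for in the first point could be illustrated like this,
with each state phandle appearing in exactly one of the two properties
(a hypothetical sketch with placeholder names, not from the series):

	CPU0: cpu@0 {
		device_type = "cpu";
		compatible = "arm,cortex-a7", "arm,armv7";
		reg = <0x0>;
		/* CPU-level state only; never repeated below */
		cpu-idle-states = <&CPU_SLEEP_0>;
		power-domains = <&cluster_pd>;
	};

	cluster_pd: cluster-pd {
		#power-domain-cells = <0>;
		/* cluster-level state only; never listed in cpu-idle-states */
		domain-idle-states = <&CLUSTER_SLEEP_0>;
	};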

Cheers,
Brendan
Ulf Hansson Sept. 14, 2016, 11:37 a.m. UTC | #9
>>>
>> Yes, you are right, I disagree with the definition of a domain around a
>> device.

To fill in, I agree with Lina (and Kevin).

From my point of view, a domain by definition contains resources that
are shared among devices. Having one device per domain does, in general,
not make sense.

> OK, great.
>> However, as long as you don't force SoC's to define devices in
>> the CPU PM domain to have their own virtual domains, I have no problem.
>> You are welcome to define it the way you want for Juno or any other
>> platform.
> I don't think that's true; the bindings have to work the same way for
> all platforms. If for Juno we put CPU idle state phandles in a
> domain-idle-states property for per-CPU domains then, with the current
> implementation, the CPU-level idle states would be duplicated between
> cpuidle and the CPU PM domains.
>> I don't want that to be the forced and expected out of all
>> SoCs. All I am saying here is that the current implementation would
>> handle your case as well.
>
> The current implementation certainly does cover the work I want to
> do. The suggestion of per-device power domains for devices/CPUs with
> their own idle states is simply intended to minimise the binding design,
> since we'd no longer need cpu-idle-states or device-idle-states
> (the latter was proposed elsewhere).

I see your point, but IMHO that would oversimplify the description
of the hardware, and I don't think it's sufficient to cover all
existing cases.

>
> I am fine with the bindings as they are implemented currently so long
> as:
>
> - The binding doc makes clear how idle state phandles should be split
>   between cpu-idle-states and domain-idle-states. It should make it
>   obvious that no phandle should ever appear in both properties. It
>   would even be worth briefly going over the backward-compatibility
>   implications (e.g. what happens with old-kernel/new-DT and
>   new-kernel/old-DT combos if a platform has OSI and PC support and we
>   move cluster-level idle state phandles out of cpu-idle-states and into
>   domai-idle-states).
>
> - We have a reason against the definition of power domains as "a set of
>   devices bound by a common power (including idle) state", since that
>   definition would simplify the bindings. In my view, "nobody thinks
>   that's what a power domain is" _is_ a compelling reason, so if others
>   on the list get involved I'm convinced. I think I speak for Sudeep
>   here too.
>

From a CPU point of view, I think it may very well be considered as
any other device. Yes, we have treated them in a specific manner
regarding the idle state definitions we currently have - and we can
continue to do that.

Although, in the long run, I think we need something more flexible
that can be used for both domains and devices.

Kind regards
Uffe
Lina Iyer Sept. 14, 2016, 2:55 p.m. UTC | #10
On Wed, Sep 14 2016 at 04:18 -0600, Brendan Jackman wrote:
>
>On Tue, Sep 13 2016 at 20:38, Lina Iyer wrote:
>> On Tue, Sep 13 2016 at 11:50 -0600, Brendan Jackman wrote:
>>>
>>>On Mon, Sep 12 2016 at 18:09, Sudeep Holla wrote:
>>>> On 12/09/16 17:16, Lina Iyer wrote:
>>>>> On Mon, Sep 12 2016 at 09:19 -0600, Brendan Jackman wrote:
>>>>>>
>>>>>> Hi Lina,
>>>>>>
>>>>>> Sorry for the delay here, Sudeep and I were both been on holiday last
>>>>>> week.
>>>>>>
>>>>>> On Fri, Sep 02 2016 at 21:16, Lina Iyer wrote:
>>>>>>> On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
>>>>>> [...]
>>>>>>>> This version is *not very descriptive*. Also the discussion we had
>>>>>>>> on v3
>>>>>>>> version has not yet concluded IMO. So can I take that we agreed on what
>>>>>>>> was proposed there or not ?
>>>>>>>>
>>>>>>> Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
>>>>>>> for the new changes in the following patches. Let me know if that makes
>>>>>>> sense.
>>>>
>>>> Please add all possible use-cases in the bindings. Though one can refer
>>>> the usage examples, it might not cover all usage descriptions. It helps
>>>> preventing people from defining their own when they don't see examples.
>>>> Again DT bindings are like specifications, it should be descriptive
>>>> especially this kind of generic ones.
>>>>
>>>>>>
>>>>>> The not-yet-concluded discussion Sudeep is referring to is at [1].
>>>>>>
>>>>>> In that thread we initially proposed the idea of, instead of splitting
>>>>>> state phandles between cpu-idle-states and domain-idle-states, putting
>>>>>> CPUs in their own domains and using domain-idle-states for _all_
>>>>>> phandles, deprecating cpu-idle-states. I've brought this up in other
>>>>>> threads [2] but discussion keeps petering out, and neither this example
>>>>>> nor the 8916 dtsi in this patch series reflect the idea.
>>>>>>
>>>>> Brendan, while your idea is good and will work for CPUs, I do not expect
>>>>> other domains and possibly CPU domains on some architectures to follow
>>>>> this model. There is nothing that prevents you from doing this today,
>>>
>>>As I understand it your opposition to this approach is this:
>>>
>>>There may be devices/CPUs which have idle states which do not constitute
>>>"power off". If we put those  devices in their own power domain for the
>>>purpose of putting their (non-power-off) idle state phandles in
>>>domain-idle-states, we are "lying" because no true power domain exists
>>>there.
>>>
>>>Am I correct that that's your opposition?
>>>
>>>If so, it seems we essentially disagree on the definition of a power
>>>domain, i.e. you define it as a set of devices that are powered on/off
>>>together while I define it as a set of devices whose power states
>>>(including idle states, not just on/off) are tied together. I said
>>>something similar on another thread [1] which died out.
>>>
>>>Do you agree that this is basically where we disagree, or am I missing
>>>something else?
>>>
>>>[2] http://www.spinics.net/lists/devicetree/msg141050.html
>>>
>> Yes, you are right, I disagree with the definition of a domain around a
>> device.
>OK, great.
>> However, as long as you don't force SoC's to define devices in
>> the CPU PM domain to have their own virtual domains, I have no problem.
>> You are welcome to define it the way you want for Juno or any other
>> platform.
>I don't think that's true; the bindings have to work the same way for
>all platforms. If for Juno we put CPU idle state phandles in a
>domain-idle-states property for per-CPU domains then, with the current
>implementation, the CPU-level idle states would be duplicated between
>cpuidle and the CPU PM domains.

We don't have the code today. Your patches would add the functionality
of parsing domain idle states and attaching them to cpu-idle-states if
the firmware supports it and the mode is platform-coordinated. And that
functionality is an easy addition. Nobody is making this change to
platforms with PC to use the CPU PM domains yet.

What you are referring to is just a convergence of PC and OSI to use
the same domain hierarchy. This definition is not impacted by your
proposal. I have my own doubts about defining PC domains this way, but I
would leave that to you to submit the relevant RFC and bring forth the
discussion. (The definition of PC domain states is already immutable,
given how it is defined in DT today. You have to be careful when
breaking it up.)

>> I don't want that to be the forced and expected out of all
>> SoCs. All I am saying here is that the current implementation would
>> handle your case as well.
>
>The current implementation certainly does cover the work I want to
>do. The suggestion of per-device power domains for devices/CPUs with
>their own idle states is simply intended to minimise the binding design,
>since we'd no longer need cpu-idle-states or device-idle-states
>(the latter was proposed elsewhere).
>
>I am fine with the bindings as they are implemented currently so long
>as:
>
>- The binding doc makes clear how idle state phandles should be split
>  between cpu-idle-states and domain-idle-states. It should make it
>  obvious that no phandle should ever appear in both properties. It
>  would even be worth briefly going over the backward-compatibility
>  implications (e.g. what happens with old-kernel/new-DT and
>  new-kernel/old-DT combos if a platform has OSI and PC support and we
>  move cluster-level idle state phandles out of cpu-idle-states and into
>  domain-idle-states).
>
Since I have only been defining OSI-initiated PM domains, this is not a
problem. I have clearly marked the explanation as OSI-specific, for
now.

>- We have a reason against the definition of power domains as "a set of
>  devices bound by a common power (including idle) state", since that
>  definition would simplify the bindings. In my view, "nobody thinks
>  that's what a power domain is" _is_ a compelling reason, so if others
>  on the list get involved I'm convinced. I think I speak for Sudeep
>  here too.
>

Look outside the context of the CPU: a generic PM domain is a collective
of generic devices that share the same power island. A PM domain may
also have other domains as sub-domains. So it is exactly that.
A CPU is just a specialized device.
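That definition maps directly onto the parent/child shape already in the
binding document. A minimal DT sketch (controller and device names here
are hypothetical, not from the patch) of one power island containing a
sub-domain and a device:

```dts
parent_pd: power-controller@12340000 {
	compatible = "foo,power-controller";	/* hypothetical compatible */
	reg = <0x12340000 0x1000>;
	#power-domain-cells = <1>;
};

child_pd: power-controller@12341000 {
	compatible = "foo,power-controller";
	reg = <0x12341000 0x1000>;
	power-domains = <&parent_pd 0>;		/* sub-domain of parent */
	#power-domain-cells = <1>;
};

dma: dma-controller@12350000 {
	compatible = "foo,dma-controller";	/* a device in the island */
	reg = <0x12350000 0x1000>;
	power-domains = <&child_pd 0>;
};
```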

Hope this helps.

Thanks,
Lina
Kevin Hilman Sept. 16, 2016, 5:13 p.m. UTC | #11
Brendan Jackman <brendan.jackman@arm.com> writes:

> On Tue, Sep 13 2016 at 20:38, Lina Iyer wrote:
>> On Tue, Sep 13 2016 at 11:50 -0600, Brendan Jackman wrote:
>>>
>>>On Mon, Sep 12 2016 at 18:09, Sudeep Holla wrote:
>>>> On 12/09/16 17:16, Lina Iyer wrote:
>>>>> On Mon, Sep 12 2016 at 09:19 -0600, Brendan Jackman wrote:
>>>>>>
>>>>>> Hi Lina,
>>>>>>
>>>>>> Sorry for the delay here, Sudeep and I were both been on holiday last
>>>>>> week.
>>>>>>
>>>>>> On Fri, Sep 02 2016 at 21:16, Lina Iyer wrote:
>>>>>>> On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
>>>>>> [...]
>>>>>>>> This version is *not very descriptive*. Also the discussion we had
>>>>>>>> on v3
>>>>>>>> version has not yet concluded IMO. So can I take that we agreed on what
>>>>>>>> was proposed there or not ?
>>>>>>>>
>>>>>>> Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
>>>>>>> for the new changes in the following patches. Let me know if that makes
>>>>>>> sense.
>>>>
>>>> Please add all possible use-cases in the bindings. Though one can refer
>>>> the usage examples, it might not cover all usage descriptions. It helps
>>>> preventing people from defining their own when they don't see examples.
>>>> Again DT bindings are like specifications, it should be descriptive
>>>> especially this kind of generic ones.
>>>>
>>>>>>
>>>>>> The not-yet-concluded discussion Sudeep is referring to is at [1].
>>>>>>
>>>>>> In that thread we initially proposed the idea of, instead of splitting
>>>>>> state phandles between cpu-idle-states and domain-idle-states, putting
>>>>>> CPUs in their own domains and using domain-idle-states for _all_
>>>>>> phandles, deprecating cpu-idle-states. I've brought this up in other
>>>>>> threads [2] but discussion keeps petering out, and neither this example
>>>>>> nor the 8916 dtsi in this patch series reflect the idea.
>>>>>>
>>>>> Brendan, while your idea is good and will work for CPUs, I do not expect
>>>>> other domains and possibly CPU domains on some architectures to follow
>>>>> this model. There is nothing that prevents you from doing this today,
>>>
>>>As I understand it your opposition to this approach is this:
>>>
>>>There may be devices/CPUs which have idle states which do not constitute
>>>"power off". If we put those  devices in their own power domain for the
>>>purpose of putting their (non-power-off) idle state phandles in
>>>domain-idle-states, we are "lying" because no true power domain exists
>>>there.
>>>
>>>Am I correct that that's your opposition?
>>>
>>>If so, it seems we essentially disagree on the definition of a power
>>>domain, i.e. you define it as a set of devices that are powered on/off
>>>together while I define it as a set of devices whose power states
>>>(including idle states, not just on/off) are tied together. I said
>>>something similar on another thread [1] which died out.
>>>
>>>Do you agree that this is basically where we disagree, or am I missing
>>>something else?
>>>
>>>[2] http://www.spinics.net/lists/devicetree/msg141050.html
>>>
>> Yes, you are right, I disagree with the definition of a domain around a
>> device.
> OK, great.
>> However, as long as you don't force SoC's to define devices in
>> the CPU PM domain to have their own virtual domains, I have no problem.
>> You are welcome to define it the way you want for Juno or any other
>> platform.
> I don't think that's true; the bindings have to work the same way for
> all platforms. If for Juno we put CPU idle state phandles in a
> domain-idle-states property for per-CPU domains then, with the current
> implementation, the CPU-level idle states would be duplicated between
> cpuidle and the CPU PM domains.
>> I don't want that to be the forced and expected out of all
>> SoCs. All I am saying here is that the current implementation would
>> handle your case as well.
>
> The current implementation certainly does cover the work I want to
> do. The suggestion of per-device power domains for devices/CPUs with
> their own idle states is simply intended to minimise the binding design,
> since we'd no longer need cpu-idle-states or device-idle-states
> (the latter was proposed elsewhere).
>
> I am fine with the bindings as they are implemented currently so long
> as:
>
> - The binding doc makes clear how idle state phandles should be split
>   between cpu-idle-states and domain-idle-states. It should make it
>   obvious that no phandle should ever appear in both properties. It
>   would even be worth briefly going over the backward-compatibility
>   implications (e.g. what happens with old-kernel/new-DT and
>   new-kernel/old-DT combos if a platform has OSI and PC support and we
>   move cluster-level idle state phandles out of cpu-idle-states and into
>   domain-idle-states).
>
> - We have a reason against the definition of power domains as "a set of
>   devices bound by a common power (including idle) state", since that
>   definition would simplify the bindings. In my view, "nobody thinks
>   that's what a power domain is" _is_ a compelling reason, so if others
>   on the list get involved I'm convinced. I think I speak for Sudeep
>   here too.

I think we're having some terminology issues...

FWIW, the kernel terminology is actually "PM domain", not power domain.
This was intentional because the goal of the PM domain was to group
devices that share some PM features.  To be very specific to the kernel,
they use the same set of PM callbacks.  Today, this is most commonly
used to model power domains, where a group of devices share a power
rail, but it does not need to be limited to that.

That being said, I'm having a hard time understanding the root of the
disagreement.

It seems that you and Sudeep would like to use domain-idle-states to
replace/supersede cpu-idle-states with the primary goal (and benefit)
being that it simplifies the DT bindings.  Is that correct?

The objections have come in because that implies that CPUs
become their own domains, which may not be the case in hardware in the
sense that they share a power rail.

However, IMO, thinking of a CPU as its own "PM domain" may make some
sense based on the terminology above.

I think the other objection may be that using a genpd to model a domain
with only a single device in it may be overkill, and I agree with that.
But I'm not sure whether what you are proposing is that making CPUs use
domain-idle-states implies they necessarily have to use genpd.  Maybe
someone could clarify that?

Kevin
Sudeep Holla Sept. 16, 2016, 5:39 p.m. UTC | #12
Hi Kevin,

Thanks for looking at this and simplifying various discussions we had so
far. I was thinking of summarizing something very similar. I couldn't
due to lack of time.

On 16/09/16 18:13, Kevin Hilman wrote:

[...]

> I think we're having some terminology issues...
>
> FWIW, the kernel terminolgy is actually "PM domain", not power domain.
> This was intentional because the goal of the PM domain was to group
> devices that some PM features.  To be very specific to the kernel, they
> us the same set of PM callbacks.  Today, this is most commonly used to
> model power domains, where a group of devices share a power rail, but it
> does not need to be limited to that.
>

Agreed/Understood.

> That being said, I'm having a hard time understanding the root of the
> disagreement.
>

Yes. I tried to convey the same earlier, but have failed. The only
disagreement is about a small part of this DT bindings. We would like to
make it completely hierarchical up to CPU nodes. More comments on that
below.

> It seems that you and Sudeep would like to use domain-idle-states to
> replace/superceed cpu-idle-states with the primary goal (and benefit)
> being that it simplifies the DT bindings.  Is that correct?
>

Correct, we want to deprecate cpu-idle-states with the introduction of
these hierarchical PM bindings. Yes, IMO it simplifies things and avoids
any ABI break we might trigger if we fail to consider some use-case now.
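As a rough sketch of that fully hierarchical shape (all node names and
labels below are hypothetical, not taken from the series): the CPU node
points at a per-CPU domain, the per-CPU domain nests under a cluster
domain, and every idle state — CPU-level included — hangs off
domain-idle-states, with no cpu-idle-states property left:

```dts
/* Hypothetical sketch: hierarchy extends all the way to the CPU nodes.
 * CPU_SLEEP and CLUSTER_SLEEP are arm,idle-state nodes defined elsewhere. */
cpus {
	#address-cells = <1>;
	#size-cells = <0>;

	CPU0: cpu@0 {
		device_type = "cpu";
		compatible = "arm,cortex-a53";
		reg = <0x0>;
		power-domains = <&CPU_PD0>;	/* no cpu-idle-states here */
	};
};

CPU_PD0: cpu-pd0 {
	#power-domain-cells = <0>;
	power-domains = <&CLUSTER_PD>;		/* sub-domain of the cluster */
	domain-idle-states = <&CPU_SLEEP>;	/* CPU-level states */
};

CLUSTER_PD: cluster-pd {
	#power-domain-cells = <0>;
	domain-idle-states = <&CLUSTER_SLEEP>;	/* cluster-level states */
};
```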

> The objections have come in because that means that implies that CPUs
> become their own domains, which may not be the case in hardware in the
> sense that they share a power rail.
>

Agreed.

> However, IMO, thinking of a CPU as it's own "PM domain" may make some
> sense based on the terminology above.
>

Thanks for that; we do understand that it may not be 100% correct when
we strictly consider hardware terminology instead of the above. As long
as we see no issues with the above terminology, it should be fine.

> I think the other objection may be that using a genpd to model domain
> with only a single device in it may be overkill, and I agree with that.

I too agree with that. Just because we represent that in DT in that way
doesn't mean we need to create a genpd to model the domain. We can always
skip that if not required. That's pure implementation specifics and I
have tried to convey the same in my previous emails. I must say you have
summarized it very clearly in this email. Thanks again for that.

> But, I'm not sure if making CPUs use domain-idle-states implies that
> they necessarily have to use genpd is what you are proposing.  Maybe
> someone could clarify that?
>

No, I am not proposing anything around the implementation in this whole
discussion; I have constrained myself to the DT bindings so far.
That's the main reason I was opposed to mentions of OS- vs
platform-coordinated modes of CPU suspend in this discussion. IMO that's
completely out of scope of the DT binding we are defining here.

Hope that helps/clarifies the misunderstanding/disagreement.
Brendan Jackman Sept. 19, 2016, 3:09 p.m. UTC | #13
On Fri, Sep 16 2016 at 18:39, Sudeep Holla <sudeep.holla@arm.com> wrote:
> Hi Kevin,
>
> Thanks for looking at this and simplifying various discussions we had so
> far. I was thinking of summarizing something very similar. I couldn't
> due to lack of time.
>
> On 16/09/16 18:13, Kevin Hilman wrote:
>
> [...]
>
>> I think we're having some terminology issues...
>>
>> FWIW, the kernel terminolgy is actually "PM domain", not power domain.
>> This was intentional because the goal of the PM domain was to group
>> devices that some PM features.  To be very specific to the kernel, they
>> us the same set of PM callbacks.  Today, this is most commonly used to
>> model power domains, where a group of devices share a power rail, but it
>> does not need to be limited to that.
>>
>
> Agreed/Understood.
>
>> That being said, I'm having a hard time understanding the root of the
>> disagreement.
>>
>
> Yes. I tried to convey the same earlier, but have failed. The only
> disagreement is about a small part of this DT bindings. We would like to
> make it completely hierarchical up to CPU nodes. More comments on that
> below.
>
>> It seems that you and Sudeep would like to use domain-idle-states to
>> replace/superceed cpu-idle-states with the primary goal (and benefit)
>> being that it simplifies the DT bindings.  Is that correct?
>>
>
> Correct, we want to deprecate cpu-idle-states with the introduction of
> this hierarchical PM bindings. Yes IMO, it simplifies things and avoids
> any ABI break we might trigger if we miss to consider some use-case now.
>
>> The objections have come in because that means that implies that CPUs
>> become their own domains, which may not be the case in hardware in the
>> sense that they share a power rail.
>>
>
> Agreed.
>
>> However, IMO, thinking of a CPU as it's own "PM domain" may make some
>> sense based on the terminology above.
>>
>
> Thanks for that, we do understand that it may not be 100% correct when
> we strictly considers hardware terminologies instead of above ones.
> As along as we see no issues with the above terminologies it should be fine.
>
>> I think the other objection may be that using a genpd to model domain
>> with only a single device in it may be overkill, and I agree with that.
>
> I too agree with that. Just because we represent that in DT in that way
> doesn't mean we need to create a genpd to model domain. We can always
> skip that if not required. That's pure implementation specifics and I
> have tried to convey the same in my previous emails. I must say you have
> summarized it very clearly in this email. Thanks again for that.
>
>> But, I'm not sure if making CPUs use domain-idle-states implies that
>> they necessarily have to use genpd is what you are proposing.  Maybe
>> someone could clarify that?
>>
>
> No, I have not proposing anything around implementation in the whole
> discussion so far. I have constrained myself just to DT bindings so far.
> That's the main reason why I was opposed to mentions of OS vs platform
> co-ordinated modes of CPU suspend in this discussion. IMO that's
> completely out of scope of this DT binding we are defining here.
>
> Hope that helps/clarifies the misunderstanding/disagreement.

Indeed. My intention was that the proposal would result in the exact
same kernel behaviour as Lina's current patchset, i.e. there is one
genpd per cluster, and CPU-level idle states are still handled by
cpuidle.

The only change from the current patchset would be in initialisation
code: some coordination would need to be done to determine which idle
states go into cpuidle and which go into the genpds (whereas with the
current bindings, states from cpu-idle-states go into cpuidle and states
from domain-idle-states go into genpd). So you could say that this would
be a trade-off between binding simplicity and implementation simplicity.
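For contrast, the split under the bindings as currently implemented is
explicit in DT itself — a sketch with hypothetical labels, where
CPU-level state phandles stay in cpu-idle-states for cpuidle,
cluster-level ones go in domain-idle-states for genpd, and no phandle
appears in both properties:

```dts
CPU0: cpu@0 {
	device_type = "cpu";
	compatible = "arm,cortex-a53";
	reg = <0x0>;
	cpu-idle-states = <&CPU_SLEEP>;		/* consumed by cpuidle */
	power-domains = <&CLUSTER_PD>;
};

CLUSTER_PD: cluster-pd {
	#power-domain-cells = <0>;
	domain-idle-states = <&CLUSTER_SLEEP>;	/* consumed by genpd */
};
```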

Cheers,
Brendan
Lina Iyer Sept. 20, 2016, 4:17 p.m. UTC | #14
On Mon, Sep 19 2016 at 09:09 -0600, Brendan Jackman wrote:
>
>On Fri, Sep 16 2016 at 18:39, Sudeep Holla <sudeep.holla@arm.com> wrote:
>> Hi Kevin,
>>
>> Thanks for looking at this and simplifying various discussions we had so
>> far. I was thinking of summarizing something very similar. I couldn't
>> due to lack of time.
>>
>> On 16/09/16 18:13, Kevin Hilman wrote:
>>
>> [...]
>>
>>> I think we're having some terminology issues...
>>>
>>> FWIW, the kernel terminolgy is actually "PM domain", not power domain.
>>> This was intentional because the goal of the PM domain was to group
>>> devices that some PM features.  To be very specific to the kernel, they
>>> us the same set of PM callbacks.  Today, this is most commonly used to
>>> model power domains, where a group of devices share a power rail, but it
>>> does not need to be limited to that.
>>>
>>
>> Agreed/Understood.
>>
>>> That being said, I'm having a hard time understanding the root of the
>>> disagreement.
>>>
>>
>> Yes. I tried to convey the same earlier, but have failed. The only
>> disagreement is about a small part of this DT bindings. We would like to
>> make it completely hierarchical up to CPU nodes. More comments on that
>> below.
>>
>>> It seems that you and Sudeep would like to use domain-idle-states to
>>> replace/superceed cpu-idle-states with the primary goal (and benefit)
>>> being that it simplifies the DT bindings.  Is that correct?
>>>
>>
>> Correct, we want to deprecate cpu-idle-states with the introduction of
>> this hierarchical PM bindings. Yes IMO, it simplifies things and avoids
>> any ABI break we might trigger if we miss to consider some use-case now.
>>
>>> The objections have come in because that means that implies that CPUs
>>> become their own domains, which may not be the case in hardware in the
>>> sense that they share a power rail.
>>>
>>
>> Agreed.
>>
>>> However, IMO, thinking of a CPU as it's own "PM domain" may make some
>>> sense based on the terminology above.
>>>
>>
>> Thanks for that, we do understand that it may not be 100% correct when
>> we strictly considers hardware terminologies instead of above ones.
>> As along as we see no issues with the above terminologies it should be fine.
>>
>>> I think the other objection may be that using a genpd to model domain
>>> with only a single device in it may be overkill, and I agree with that.
>>
>> I too agree with that. Just because we represent that in DT in that way
>> doesn't mean we need to create a genpd to model domain. We can always
>> skip that if not required. That's pure implementation specifics and I
>> have tried to convey the same in my previous emails. I must say you have
>> summarized it very clearly in this email. Thanks again for that.
>>
>>> But, I'm not sure if making CPUs use domain-idle-states implies that
>>> they necessarily have to use genpd is what you are proposing.  Maybe
>>> someone could clarify that?
>>>
>>
>> No, I have not proposing anything around implementation in the whole
>> discussion so far. I have constrained myself just to DT bindings so far.
>> That's the main reason why I was opposed to mentions of OS vs platform
>> co-ordinated modes of CPU suspend in this discussion. IMO that's
>> completely out of scope of this DT binding we are defining here.
>>
Fair. But understand that the PM domain bindings do not impose any
hierarchy requirements. Domain idle states are defined by the
property domain-idle-states in the domain node. How the DT bindings are
organized is immaterial to the PM Domain core.

It is a different exercise altogether to look at CPU PSCI modes and
have a unified way of representing them in DT. The current set of
patches does not dictate where the domain idle states should be located
(pardon my example in the patch, which was not updated to reflect that). That
said, I do require that domains that are controlled by the PSCI f/w be
defined under the 'psci' node in DT, which is fair. All the domain needs
are phandles to the idle state definitions; how the nodes are arranged
in DT is not of consequence to the driver.
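A hedged sketch of that arrangement (labels hypothetical, loosely
modeled on the 8916 dtsi referenced earlier): the PSCI-controlled domain
lives under the psci node, and its domain-idle-states are just phandles
to arm,idle-state definitions that may sit anywhere else in the tree:

```dts
psci {
	compatible = "arm,psci-1.0";
	method = "smc";

	CPU_PD: cpu-pd {
		#power-domain-cells = <0>;
		/* Phandles to arm,idle-state nodes defined elsewhere,
		 * e.g. under /cpus/idle-states. */
		domain-idle-states = <&CLUSTER_RET &CLUSTER_PWRDN>;
	};
};
```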

In my mind providing a structure to CPU PM domains that can be used for
both OSI and PC is a separate effort. It may also fold in what Brendan
mentions below as part of that effort. The hierarchy that is presented in
[1] is inherent in the PM domain hierarchy and idle states don't have to
duplicate that information.

>> Hope that helps/clarifies the misunderstanding/disagreement.
>
>Indeed. My intention was that the proposal would result in the exact
>same kernel behaviour as Lina's current patchset, i.e. there is one
>genpd per cluster, and CPU-level idle states are still handled by
>cpuidle.
>
>The only change from the current patchset would be in initialisation
>code: some coordination would need to be done to determine which idle
>states go into cpuidle and which go into the genpds (whereas with the
>current bindings, states from cpu-idle-states go into cpuidle and states
>from domain-idle-states go into genpd). So you could say that this would
>be a trade-off between binding simplicity and implementation simplicity.
>
I would not oppose the idea of virtual domains around CPUs (I admit I am
not comfortable with the idea though), if that is the right thing to do.
But the scope of that work is extensive and should not be clubbed as
part of this proposal. It is an extensive code rework spanning cpuidle
drivers and PSCI and there are hooks in this code to help you achieve
that.

Thanks,
Lina

[1]. https://patchwork.kernel.org/patch/9264507/
Brendan Jackman Sept. 21, 2016, 9:48 a.m. UTC | #15
On Tue, Sep 20 2016 at 17:17, Lina Iyer <lina.iyer@linaro.org> wrote:
> On Mon, Sep 19 2016 at 09:09 -0600, Brendan Jackman wrote:
>>
>>On Fri, Sep 16 2016 at 18:39, Sudeep Holla <sudeep.holla@arm.com> wrote:
>>> Hi Kevin,
>>>
>>> Thanks for looking at this and simplifying various discussions we had so
>>> far. I was thinking of summarizing something very similar. I couldn't
>>> due to lack of time.
>>>
>>> On 16/09/16 18:13, Kevin Hilman wrote:
>>>
>>> [...]
>>>
>>>> I think we're having some terminology issues...
>>>>
>>>> FWIW, the kernel terminolgy is actually "PM domain", not power domain.
>>>> This was intentional because the goal of the PM domain was to group
>>>> devices that some PM features.  To be very specific to the kernel, they
>>>> us the same set of PM callbacks.  Today, this is most commonly used to
>>>> model power domains, where a group of devices share a power rail, but it
>>>> does not need to be limited to that.
>>>>
>>>
>>> Agreed/Understood.
>>>
>>>> That being said, I'm having a hard time understanding the root of the
>>>> disagreement.
>>>>
>>>
>>> Yes. I tried to convey the same earlier, but have failed. The only
>>> disagreement is about a small part of this DT bindings. We would like to
>>> make it completely hierarchical up to CPU nodes. More comments on that
>>> below.
>>>
>>>> It seems that you and Sudeep would like to use domain-idle-states to
>>>> replace/superceed cpu-idle-states with the primary goal (and benefit)
>>>> being that it simplifies the DT bindings.  Is that correct?
>>>>
>>>
>>> Correct, we want to deprecate cpu-idle-states with the introduction of
>>> this hierarchical PM bindings. Yes IMO, it simplifies things and avoids
>>> any ABI break we might trigger if we miss to consider some use-case now.
>>>
>>>> The objections have come in because that means that implies that CPUs
>>>> become their own domains, which may not be the case in hardware in the
>>>> sense that they share a power rail.
>>>>
>>>
>>> Agreed.
>>>
>>>> However, IMO, thinking of a CPU as it's own "PM domain" may make some
>>>> sense based on the terminology above.
>>>>
>>>
>>> Thanks for that, we do understand that it may not be 100% correct when
>>> we strictly considers hardware terminologies instead of above ones.
>>> As along as we see no issues with the above terminologies it should be fine.
>>>
>>>> I think the other objection may be that using a genpd to model domain
>>>> with only a single device in it may be overkill, and I agree with that.
>>>
>>> I too agree with that. Just because we represent that in DT in that way
>>> doesn't mean we need to create a genpd to model domain. We can always
>>> skip that if not required. That's pure implementation specifics and I
>>> have tried to convey the same in my previous emails. I must say you have
>>> summarized it very clearly in this email. Thanks again for that.
>>>
>>>> But, I'm not sure if making CPUs use domain-idle-states implies that
>>>> they necessarily have to use genpd is what you are proposing.  Maybe
>>>> someone could clarify that?
>>>>
>>>
>>> No, I have not proposing anything around implementation in the whole
>>> discussion so far. I have constrained myself just to DT bindings so far.
>>> That's the main reason why I was opposed to mentions of OS vs platform
>>> co-ordinated modes of CPU suspend in this discussion. IMO that's
>>> completely out of scope of this DT binding we are defining here.
>>>
> Fair. But understand the PM Domain bindings do not impose any
> requirements of hierarchy. Domain idle states are defined by the
> property domain-idle-states in the domain node. How the DT bindings are
> organized is immaterial to the PM Domain core.
>
> It is a different exercise all together to look at CPU PSCI modes and
> have a unified way of representing them in DT. The current set of
> patches does not dictate where the domain idle states be located (pardon
> my example in the patch, which was not updated to reflect that). That
> said, I do require that domains that are controlled by the PSCI f/w be
> defined under the 'psci' node in DT, which is fair. All the domain needs
> are phandles to the idle state definitions; how the nodes are arranged
> in DT is not of consequence to the driver.
>
> In my mind providing a structure to CPU PM domains that can be used for
> both OSI and PC is a separate effort.

Do you mean a structure in the kernel or in DT? If the former, I agree,
if the latter I strongly disagree. I think DT bindings should be totally
unaware of PSCI suspend modes.

> It may also club what Brendan
> mentions below as part of the effort. The hierarchy that is presented in
> [1] is inherent in the PM domain hierarchy and idle states don't have to
> duplicate that information.
>
>>> Hope that helps/clarifies the misunderstanding/disagreement.
>>
>>Indeed. My intention was that the proposal would result in the exact
>>same kernel behaviour as Lina's current patchset, i.e. there is one
>>genpd per cluster, and CPU-level idle states are still handled by
>>cpuidle.
>>
>>The only change from the current patchset would be in initialisation
>>code: some coordination would need to be done to determine which idle
>>states go into cpuidle and which go into the genpds (whereas with the
>>current bindings, states from cpu-idle-states go into cpuidle and states
>>from domain-idle-states go into genpd). So you could say that this would
>>be a trade-off between binding simplicity and implementation simplicity.
>>
> I would not oppose the idea of virtual domains around CPUs (I admit I am
> not comfortable with the idea though), if that is the right thing to do.
> But the scope of that work is extensive and should not be clubbed as
> part of this proposal. It is an extensive code rework spanning cpuidle
> drivers and PSCI and there are hooks in this code to help you achieve
> that.

If we want to take the per-CPU domains approach, we _have_ to do it as
part of this proposal; it implies a different set of semantics for the
cpu-idle-states/domain-idle-states properties. It would mean
cpu-idle-states is _superseded_ by domain-idle-states. Implementing one
solution (where cpu-idle-states and domain-idle-states are both taken
into consideration by the implementation) and later switching to the
alternative (where cpu-idle-states is ignored when a CPU PM domain tree
is present) wouldn't make sense from a backward-compatibility
perspective.
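To make the contrast concrete, here is a rough DTS sketch of what the
per-CPU domains alternative could look like. This is not part of the
posted patch; the per-CPU domain nodes, labels and state names are
hypothetical. The CPU-level idle state moves out of cpu-idle-states and
into a per-CPU domain, whose parent is the cluster domain:

```dts
	cpus {
		#address-cells = <1>;
		#size-cells = <0>;

		CPU0: cpu@0 {
			device_type = "cpu";
			compatible = "arm,cortex-a7", "arm,armv7";
			reg = <0x0>;
			/* no cpu-idle-states: CPU states live in the domain */
			power-domains = <&cpu0_pd>;
		};
	};

	pm-domains {
		cpu0_pd: cpu0-pd {		/* hypothetical per-CPU domain */
			#power-domain-cells = <0>;
			power-domains = <&cluster_pd>;	/* parent: cluster */
			domain-idle-states = <&CPU_SLEEP_0>;
		};

		cluster_pd: cluster-pd {
			#power-domain-cells = <0>;
			domain-idle-states = <&CLUSTER_SLEEP_0>;
		};

		CPU_SLEEP_0: cpu-sleep-0 {
			compatible = "arm,idle-state";
			entry-latency-us = <100>;
			exit-latency-us = <200>;
			min-residency-us = <1000>;
		};

		CLUSTER_SLEEP_0: cluster-sleep-0 {
			compatible = "arm,idle-state";
			entry-latency-us = <1000>;
			exit-latency-us = <2000>;
			min-residency-us = <10000>;
		};
	};
```

Under these semantics the implementation would walk the domain tree to
decide which states belong to cpuidle and which to genpd, instead of
reading them from two separate properties.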

You're right that implementing the alternative proposal in the Linux
kernel would mean quite a big rework. But, idealistically speaking,
Linux-specific implementation realities shouldn't be a factor in Device
Tree binding designs, right?

If people think that using both cpu-idle-states and domain-idle-states
is the pragmatic choice (or object fundamentally to the idea of devices
with idle states as being in their own PM domain) then that's fine IMO,
but it's a one-time decision and I think we should be clear about why
we're making it.

Cheers,
Brendan
Patch

diff --git a/Documentation/devicetree/bindings/power/power_domain.txt b/Documentation/devicetree/bindings/power/power_domain.txt
index 025b5e7..4960486 100644
--- a/Documentation/devicetree/bindings/power/power_domain.txt
+++ b/Documentation/devicetree/bindings/power/power_domain.txt
@@ -29,6 +29,10 @@  Optional properties:
    specified by this binding. More details about power domain specifier are
    available in the next section.
 
+- domain-idle-states : A phandle of an idle-state that shall be soaked into a
+                generic domain power state. The idle state definitions are
+                compatible with arm,idle-state specified in [1].
+
 Example:
 
 	power: power-controller@12340000 {
@@ -59,6 +63,57 @@  The nodes above define two power controllers: 'parent' and 'child'.
 Domains created by the 'child' power controller are subdomains of '0' power
 domain provided by the 'parent' power controller.
 
+Example 3: ARM v7 style CPU PM domains (Linux domain controller)
+
+	cpus {
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		CPU0: cpu@0 {
+			device_type = "cpu";
+			compatible = "arm,cortex-a7", "arm,armv7";
+			reg = <0x0>;
+			power-domains = <&a7_pd>;
+		};
+
+		CPU1: cpu@1 {
+			device_type = "cpu";
+			compatible = "arm,cortex-a15", "arm,armv7";
+			reg = <0x0>;
+			power-domains = <&a15_pd>;
+		};
+	};
+
+	pm-domains {
+		a15_pd: a15_pd {
+			/* will have A15 platform ARM_PD_METHOD_OF_DECLARE*/
+			compatible = "arm,cortex-a15";
+			#power-domain-cells = <0>;
+			domain-idle-states = <&CLUSTER_SLEEP_0>;
+		};
+
+		a7_pd: a7_pd {
+			/* will have a A7 platform ARM_PD_METHOD_OF_DECLARE*/
+			compatible = "arm,cortex-a7";
+			#power-domain-cells = <0>;
+			domain-idle-states = <&CLUSTER_SLEEP_0>, <&CLUSTER_SLEEP_1>;
+		};
+
+		CLUSTER_SLEEP_0: state0 {
+			compatible = "arm,idle-state";
+			entry-latency-us = <1000>;
+			exit-latency-us = <2000>;
+			min-residency-us = <10000>;
+		};
+
+		CLUSTER_SLEEP_1: state1 {
+			compatible = "arm,idle-state";
+			entry-latency-us = <5000>;
+			exit-latency-us = <5000>;
+			min-residency-us = <100000>;
+		};
+	};
+
 ==PM domain consumers==
 
 Required properties:
@@ -76,3 +131,5 @@  Example:
 The node above defines a typical PM domain consumer device, which is located
 inside a PM domain with index 0 of a power controller represented by a node
 with the label "power".
+
+[1]. Documentation/devicetree/bindings/arm/idle-states.txt