
[5/9] ARM: common: Introduce PM domains for CPUs/clusters

Message ID 1438731339-58317-6-git-send-email-lina.iyer@linaro.org (mailing list archive)
State New, archived

Commit Message

Lina Iyer Aug. 4, 2015, 11:35 p.m. UTC
Define and add Generic PM domains (genpd) for ARM CPU clusters. Many new
SoCs group CPUs as clusters. Clusters share common resources like GIC,
power rail, caches, VFP, Coresight etc. When all CPUs in the cluster are
idle, these shared resources may also be put in their idle state.

The idle time between the last CPU entering idle and a CPU resuming
execution is an opportunity for these shared resources to be powered
down. Generic PM domains provide a framework for defining such power
domains and attaching devices to them. When the devices in the domain
are idle at runtime, the domain is also suspended, and it is resumed
before the first of the devices resumes execution.

We define a generic PM domain for each cluster and attach CPU devices in
the cluster to that PM domain. The DT definitions for the SoC describe
this relationship. Genpd callbacks for power_on and power_off can then
be used to power up/down the shared resources for the domain.
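
As a rough sketch of the attach step (illustrative only; the function
name here is made up and is not the actual code in this patch), each CPU
device is bound to the genpd its DT node references:

#include <linux/cpu.h>
#include <linux/pm_domain.h>

static int __init arm_cpu_pd_attach_cpus(void)
{
        int cpu;

        for_each_possible_cpu(cpu) {
                struct device *cpu_dev = get_cpu_device(cpu);

                if (!cpu_dev)
                        continue;
                /* Bind the CPU device to the genpd its DT node points at */
                dev_pm_domain_attach(cpu_dev, true);
        }
        return 0;
}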

Cc: Stephen Boyd <sboyd@codeaurora.org>
Cc: Kevin Hilman <khilman@linaro.org>
Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 Documentation/arm/cpu-domains.txt                  |  49 ++++++
 .../devicetree/bindings/arm/cpudomains.txt         |  23 +++
 arch/arm/common/Makefile                           |   1 +
 arch/arm/common/domains.c                          | 166 +++++++++++++++++++++
 4 files changed, 239 insertions(+)
 create mode 100644 Documentation/arm/cpu-domains.txt
 create mode 100644 Documentation/devicetree/bindings/arm/cpudomains.txt
 create mode 100644 arch/arm/common/domains.c

Comments

Rob Herring Aug. 6, 2015, 3:14 a.m. UTC | #1
On Tue, Aug 4, 2015 at 6:35 PM, Lina Iyer <lina.iyer@linaro.org> wrote:
> Define and add Generic PM domains (genpd) for ARM CPU clusters. Many new
> SoCs group CPUs as clusters. Clusters share common resources like GIC,
> power rail, caches, VFP, Coresight etc. When all CPUs in the cluster are
> idle, these shared resources may also be put in their idle state.
>
> The idle time between the last CPU entering idle and a CPU resuming
> execution is an opportunity for these shared resources to be powered
> down. Generic PM domain provides a framework for defining such power
> domains and attach devices to the domain. When the devices in the domain
> are idle at runtime, the domain would also be suspended and resumed
> before the first of the devices resume execution.
>
> We define a generic PM domain for each cluster and attach CPU devices in
> the cluster to that PM domain. The DT definitions for the SoC describe
> this relationship. Genpd callbacks for power_on and power_off can then
> be used to power up/down the shared resources for the domain.

[...]

> +ARM CPU Power domains
> +
> +The device tree allows describing of CPU power domains in a SoC. In ARM SoC,
> +CPUs may be grouped as clusters. A cluster may have CPUs, GIC, Coresight,
> +caches, VFP and power controller and other peripheral hardware. Generally,
> +when the CPUs in the cluster are idle/suspended, the shared resources may also
> +be suspended and resumed before any of the CPUs resume execution.
> +
> +CPUs are the defined as the PM domain consumers and there is a PM domain
> +provider for the CPUs. Bindings for generic PM domains (genpd) is described in
> +[1].
> +
> +The ARM CPU PM domain follows the same binding convention as any generic PM
> +domain. Additional binding properties are -
> +
> +- compatible:
> +       Usage: required
> +       Value type: <string>
> +       Definition: Must also have
> +                       "arm,pd"
> +               inorder to initialize the genpd provider as ARM CPU PM domain.

A compatible string should represent a particular h/w block. If it is
generic, it should represent some sort of standard programming
interface (e.g, AHCI, EHCI, etc.). This doesn't seem to be either and
is rather just a mapping of what "driver" you want to use.

I would expect that identifying a cpu's or cluster's power domain
would be done by a phandle between the cpu/cluster node and power
domain node. But I've not really looked at the power domain bindings
so who knows.

Rob
Kevin Hilman Aug. 7, 2015, 11:45 p.m. UTC | #2
Rob Herring <robherring2@gmail.com> writes:

> On Tue, Aug 4, 2015 at 6:35 PM, Lina Iyer <lina.iyer@linaro.org> wrote:
>> Define and add Generic PM domains (genpd) for ARM CPU clusters. Many
>> new

@Lina: I know you inherited this from some proof-of-concept code from me, so
I'm partially to blame, but...

There's really nothing ARM-specific about this driver.

>> SoCs group CPUs as clusters. Clusters share common resources like GIC,
>> power rail, caches, VFP, Coresight etc. When all CPUs in the cluster are
>> idle, these shared resources may also be put in their idle state.
>>
>> The idle time between the last CPU entering idle and a CPU resuming
>> execution is an opportunity for these shared resources to be powered
>> down. Generic PM domain provides a framework for defining such power
>> domains and attach devices to the domain. When the devices in the domain
>> are idle at runtime, the domain would also be suspended and resumed
>> before the first of the devices resume execution.
>>
>> We define a generic PM domain for each cluster and attach CPU devices in
>> the cluster to that PM domain. The DT definitions for the SoC describe
>> this relationship. Genpd callbacks for power_on and power_off can then
>> be used to power up/down the shared resources for the domain.
>
> [...]
>
>> +ARM CPU Power domains
>> +
>> +The device tree allows describing of CPU power domains in a SoC. In ARM SoC,
>> +CPUs may be grouped as clusters. A cluster may have CPUs, GIC, Coresight,
>> +caches, VFP and power controller and other peripheral hardware. Generally,
>> +when the CPUs in the cluster are idle/suspended, the shared resources may also
>> +be suspended and resumed before any of the CPUs resume execution.
>> +
>> +CPUs are the defined as the PM domain consumers and there is a PM domain
>> +provider for the CPUs. Bindings for generic PM domains (genpd) is described in
>> +[1].
>> +
>> +The ARM CPU PM domain follows the same binding convention as any generic PM
>> +domain. Additional binding properties are -
>> +
>> +- compatible:
>> +       Usage: required
>> +       Value type: <string>
>> +       Definition: Must also have
>> +                       "arm,pd"
>> +               inorder to initialize the genpd provider as ARM CPU PM domain.
>
> A compatible string should represent a particular h/w block. If it is
> generic, it should represent some sort of standard programming
> interface (e.g, AHCI, EHCI, etc.). This doesn't seem to be either and
> is rather just a mapping of what "driver" you want to use.
>
> I would expect that identifying a cpu's or cluster's power domain
> would be done by a phandle between the cpu/cluster node and power
> domain node. 

That's correct, the CPU nodes (and other nodes in the cluster like GIC,
Coresight, etc.) would have phandles to the cluster power domain node.

But this series is meant to create the driver & binding for those cluster
power domain(s), so the question is how exactly to describe it.

What we're trying to get to is a binding to describe the power domain of a
generic CPU cluster, but of course the actual programming interface for
powering down the cluster will be platform specific.  In earlier RFC
versions, Lina had proposed ways for platforms to register some
low-level hooks with this generic driver for the platform-specific bits,
but if you have some other suggestions, we'd be all ears.

Kevin
Geert Uytterhoeven Aug. 11, 2015, 1:07 p.m. UTC | #3
On Sat, Aug 8, 2015 at 1:45 AM, Kevin Hilman <khilman@kernel.org> wrote:
> Rob Herring <robherring2@gmail.com> writes:
>> On Tue, Aug 4, 2015 at 6:35 PM, Lina Iyer <lina.iyer@linaro.org> wrote:
>>> +ARM CPU Power domains
>>> +
>>> +The device tree allows describing of CPU power domains in a SoC. In ARM SoC,
>>> +CPUs may be grouped as clusters. A cluster may have CPUs, GIC, Coresight,
>>> +caches, VFP and power controller and other peripheral hardware. Generally,
>>> +when the CPUs in the cluster are idle/suspended, the shared resources may also
>>> +be suspended and resumed before any of the CPUs resume execution.
>>> +
>>> +CPUs are the defined as the PM domain consumers and there is a PM domain
>>> +provider for the CPUs. Bindings for generic PM domains (genpd) is described in
>>> +[1].
>>> +
>>> +The ARM CPU PM domain follows the same binding convention as any generic PM
>>> +domain. Additional binding properties are -
>>> +
>>> +- compatible:
>>> +       Usage: required
>>> +       Value type: <string>
>>> +       Definition: Must also have
>>> +                       "arm,pd"
>>> +               inorder to initialize the genpd provider as ARM CPU PM domain.
>>
>> A compatible string should represent a particular h/w block. If it is
>> generic, it should represent some sort of standard programming
>> interface (e.g, AHCI, EHCI, etc.). This doesn't seem to be either and
>> is rather just a mapping of what "driver" you want to use.
>>
>> I would expect that identifying a cpu's or cluster's power domain
>> would be done by a phandle between the cpu/cluster node and power
>> domain node.
>
> That's correct, the CPU nodes (and other nodes in the cluster like GIC,
> Coresight, etc.) would have phandles to the cluster power domain node.

Indeed.

> But this series is meant to create the driver & binding for those cluster
> power domain(s), so the question is how exactly describe it.

I don't think I can add an "arm,pd" compatible property to e.g. a2sl
(for CA15-CPUx) and a3sm (for CA15-SCU) in arch/arm/boot/dts/r8a73a4.dtsi,
as these are just subdomains in a power domain hierarchy, all driven by a
single hardware block.

I can call e.g. a special registration method, or set up some ops pointer,
for the a2sl and a3sm subdomains from within the "renesas,sysc-rmobile"
driver.

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds
Lina Iyer Aug. 11, 2015, 3:58 p.m. UTC | #4
On Tue, Aug 11 2015 at 07:07 -0600, Geert Uytterhoeven wrote:
>On Sat, Aug 8, 2015 at 1:45 AM, Kevin Hilman <khilman@kernel.org> wrote:
>> Rob Herring <robherring2@gmail.com> writes:
>>> On Tue, Aug 4, 2015 at 6:35 PM, Lina Iyer <lina.iyer@linaro.org> wrote:
>>>> +ARM CPU Power domains
>>>> +
>>>> +The device tree allows describing of CPU power domains in a SoC. In ARM SoC,
>>>> +CPUs may be grouped as clusters. A cluster may have CPUs, GIC, Coresight,
>>>> +caches, VFP and power controller and other peripheral hardware. Generally,
>>>> +when the CPUs in the cluster are idle/suspended, the shared resources may also
>>>> +be suspended and resumed before any of the CPUs resume execution.
>>>> +
>>>> +CPUs are the defined as the PM domain consumers and there is a PM domain
>>>> +provider for the CPUs. Bindings for generic PM domains (genpd) is described in
>>>> +[1].
>>>> +
>>>> +The ARM CPU PM domain follows the same binding convention as any generic PM
>>>> +domain. Additional binding properties are -
>>>> +
>>>> +- compatible:
>>>> +       Usage: required
>>>> +       Value type: <string>
>>>> +       Definition: Must also have
>>>> +                       "arm,pd"
>>>> +               inorder to initialize the genpd provider as ARM CPU PM domain.
>>>
>>> A compatible string should represent a particular h/w block. If it is
>>> generic, it should represent some sort of standard programming
>>> interface (e.g, AHCI, EHCI, etc.). This doesn't seem to be either and
>>> is rather just a mapping of what "driver" you want to use.
>>>
>>> I would expect that identifying a cpu's or cluster's power domain
>>> would be done by a phandle between the cpu/cluster node and power
>>> domain node.
>>
>> That's correct, the CPU nodes (and other nodes in the cluster like GIC,
>> Coresight, etc.) would have phandles to the cluster power domain node.
>
>Indeed.
>
>> But this series is meant to create the driver & binding for those cluster
>> power domain(s), so the question is how exactly describe it.
>
>I don't think I can add an "arm,pd" compatible property to e.g. a2sl
>(for CA15-CPUx) and a3sm (for CA15-SCU) in arch/arm/boot/dts/r8a73a4.dtsi,
>as these are just subdomains in a power domain hierarchy, all driven by a
>single hardware block.
>
>I can call e.g. a special registration method, or set up some ops pointer,
>for the a2sl and a3sm subdomains from within the "renesas,sysc-rmobile"
>driver.
>
I was hoping the macro would help in such a case. But since your domain
cannot be defined as arm,pd (could you explain why? I seem to be missing
the obvious), would it help if I export a function, like the one below,
that the renesas,sysc-rmobile driver could call to set up the CPU PM
domain? There is a catch to it, though.

The problem is -

To be generic, and not have every driver write code for this generic
functionality, the common code would want to instantiate the
arm_pm_domain and therefore the embedded genpd. A pointer instead of an
actual object would mean maintaining a list and iterating through it
every time the domain is suspended/resumed. With an embedded object, I
can just use container_of() and get the arm_pm_domain. But we also want
to give the platform an option to define certain aspects of the CPU's
generic PM domain, like the flags, genpd callbacks etc., before the
genpd is registered with the framework.
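
Concretely, a minimal sketch of that embedding (the ops member here is
hypothetical at this point):

#include <linux/kernel.h>
#include <linux/pm_domain.h>

struct arm_pm_domain {
        struct generic_pm_domain genpd; /* embedded, not a pointer */
        struct of_arm_pd_ops *ops;      /* hypothetical platform ops */
};

/* Recover the wrapping arm_pm_domain from the genpd the framework hands us */
static inline struct arm_pm_domain *to_arm_pd(struct generic_pm_domain *d)
{
        return container_of(d, struct arm_pm_domain, genpd);
}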

Would such a function work for you? What does everyone think about the
@template?

struct generic_pm_domain *of_arm_cpu_domain(struct device_node *dn,
                struct of_arm_pd_ops *ops, struct generic_pm_domain *template)
{
        struct arm_pm_domain *pd;

        if (!of_device_is_available(dn))
                return NULL;

        /* This allocates the memory for pd and sets up the basic state */
        pd = setup_arm_pd(dn);
        if (!pd)
                return NULL;

        /* Copy the platform's template genpd over to the pd's genpd */
        memcpy(&pd->genpd, template, sizeof(*template));

        /*
         * Now set up the additional ops and flags and register with the
         * genpd framework
         */
        register_arm_pd(dn, pd);

        /*
         * Return the genpd back to the platform so it can be added
         * as a subdomain to other domains etc.
         */
        return &pd->genpd;
}
EXPORT_SYMBOL(of_arm_cpu_domain);

The catch is that platform drivers have to provide a template for the
genpd. The @template would never be registered, but would just be used
to create the common code's genpd.
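
For illustration, a hypothetical caller in a platform driver might look
like this (all names other than the genpd API are made up):

static int my_pd_power_down(struct generic_pm_domain *d)
{
        /* platform-specific power-off sequence would go here */
        return 0;
}

/* Never registered itself; only copied into the common code's genpd */
static struct generic_pm_domain my_template = {
        .name = "a3sm",
        .power_off = my_pd_power_down,
};

static struct of_arm_pd_ops my_pd_ops;  /* hypothetical ops, as above */

static void __init my_cpu_pd_setup(struct device_node *dn,
                                   struct generic_pm_domain *parent)
{
        struct generic_pm_domain *genpd;

        genpd = of_arm_cpu_domain(dn, &my_pd_ops, &my_template);
        if (genpd && parent)
                pm_genpd_add_subdomain(parent, genpd);
}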

Any other ideas?

Thanks,
Lina
Rob Herring Aug. 11, 2015, 8:12 p.m. UTC | #5
On Tue, Aug 11, 2015 at 10:58 AM, Lina Iyer <lina.iyer@linaro.org> wrote:
> On Tue, Aug 11 2015 at 07:07 -0600, Geert Uytterhoeven wrote:
>>
>> On Sat, Aug 8, 2015 at 1:45 AM, Kevin Hilman <khilman@kernel.org> wrote:
>>>
>>> Rob Herring <robherring2@gmail.com> writes:
>>>>
>>>> On Tue, Aug 4, 2015 at 6:35 PM, Lina Iyer <lina.iyer@linaro.org> wrote:
>>>>>
>>>>> +ARM CPU Power domains
>>>>> +
>>>>> +The device tree allows describing of CPU power domains in a SoC. In
>>>>> ARM SoC,
>>>>> +CPUs may be grouped as clusters. A cluster may have CPUs, GIC,
>>>>> Coresight,
>>>>> +caches, VFP and power controller and other peripheral hardware.
>>>>> Generally,
>>>>> +when the CPUs in the cluster are idle/suspended, the shared resources
>>>>> may also
>>>>> +be suspended and resumed before any of the CPUs resume execution.
>>>>> +
>>>>> +CPUs are the defined as the PM domain consumers and there is a PM
>>>>> domain
>>>>> +provider for the CPUs. Bindings for generic PM domains (genpd) is
>>>>> described in
>>>>> +[1].
>>>>> +
>>>>> +The ARM CPU PM domain follows the same binding convention as any
>>>>> generic PM
>>>>> +domain. Additional binding properties are -
>>>>> +
>>>>> +- compatible:
>>>>> +       Usage: required
>>>>> +       Value type: <string>
>>>>> +       Definition: Must also have
>>>>> +                       "arm,pd"
>>>>> +               inorder to initialize the genpd provider as ARM CPU PM
>>>>> domain.
>>>>
>>>>
>>>> A compatible string should represent a particular h/w block. If it is
>>>> generic, it should represent some sort of standard programming
>>>> interface (e.g, AHCI, EHCI, etc.). This doesn't seem to be either and
>>>> is rather just a mapping of what "driver" you want to use.
>>>>
>>>> I would expect that identifying a cpu's or cluster's power domain
>>>> would be done by a phandle between the cpu/cluster node and power
>>>> domain node.
>>>
>>>
>>> That's correct, the CPU nodes (and other nodes in the cluster like GIC,
>>> Coresight, etc.) would have phandles to the cluster power domain node.
>>
>>
>> Indeed.
>>
>>> But this series is meant to create the driver & binding for those cluster
>>> power domain(s), so the question is how exactly describe it.
>>
>>
>> I don't think I can add an "arm,pd" compatible property to e.g. a2sl
>> (for CA15-CPUx) and a3sm (for CA15-SCU) in arch/arm/boot/dts/r8a73a4.dtsi,
>> as these are just subdomains in a power domain hierarchy, all driven by a
>> single hardware block.
>>
>> I can call e.g. a special registration method, or set up some ops pointer,
>> for the a2sl and a3sm subdomains from within the "renesas,sysc-rmobile"
>> driver.
>>
> I was hoping the macro would help in such a case. But since your domain
> cannot be defined as arm,pd (could you explain why, I seem to missing
> the obvious) would it help if I export a function like that the
> renesas,sysc-rmobile driver could call and setup the CPU PM domain?
> There is catch to it though.
>
> The problem is -
>
> To be generic and not have every driver write code to do this generic
> functionality, the common code would want to instantiate the
> arm_pm_domain and therefore the embedded genpd. A pointer instead of
> actual object, would mean maintaining a list and iterating through it,
> everytime the domain is suspended/resumed. With an object, I could just
> do container_of and get the arm_pd. But, we also want to give the
> platform an option to define certain aspects of the CPU's generic PM
> domain like the flags, genpd callbacks etc, before the genpd is
> registered with the framework.

The problem here is what part of the hardware is generic? Is it
generic, yet ARM-specific (presumably not, as Kevin pointed out)?
I'm not exactly clear what the problem is, but it seems you are after
a common API/subsystem for managing power domains of CPUs/clusters. I
fail to see how the problem is different from any other subsystem
where you have a core providing a common API with hardware-specific
drivers registering their ops with the core.

>
> Would such a function work for you? What does everyone think about the
> @template?
>
> struct generic_pm_domain *of_arm_cpu_domain(struct device_node *dn,

This is missing a verb. What does it do?

I don't think I really get what you are trying to solve to comment
whether this looks good or not.

Rob

>                struct of_arm_pd_ops *ops, struct generic_pm_domain
> *template)
> {
>        struct arm_pm_domain *pd;
>
>        if (!of_device_is_available(dn))
>                return NULL;
>
>         /* This creates the memory for pd and setup basic stuff */
>        pd = setup_arm_pd(dn);
>
>         /* copy the platform's template genpd over to the pd's genpd */
>        memcpy(&pd.genpd, template, sizeof(*template));
>
>         /*
>          * Now set up the additional ops and flags and register with
>          * genpd framework
>          */
>        register_arm_pd(dn, pd);
>
>         /*
>          * Returning the genpd back to the platform so it can be added
>          * as subdomains to other domains etc.
>          */
>        return &pd.genpd;
> }
> EXPORT_SYMBOL(of_arm_cpu_domain);
>
> The catch is that platform drivers have to provide a template for the
> genpd. The @template would never be registered, but would just be used
> to create the common code's genpd.
>
> Any other ideas?
>
> Thanks,
> Lina
Lina Iyer Aug. 11, 2015, 10:29 p.m. UTC | #6
On Tue, Aug 11 2015 at 14:13 -0600, Rob Herring wrote:
>On Tue, Aug 11, 2015 at 10:58 AM, Lina Iyer <lina.iyer@linaro.org> wrote:
>> On Tue, Aug 11 2015 at 07:07 -0600, Geert Uytterhoeven wrote:
>>> On Sat, Aug 8, 2015 at 1:45 AM, Kevin Hilman <khilman@kernel.org> wrote:
>>>> Rob Herring <robherring2@gmail.com> writes:
>>>>> On Tue, Aug 4, 2015 at 6:35 PM, Lina Iyer <lina.iyer@linaro.org> wrote:
>>>>>>
>>>>>> +ARM CPU Power domains
>>>>>> +
>>>>>> +The device tree allows describing of CPU power domains in a SoC. In
>>>>>> ARM SoC,
>>>>>> +CPUs may be grouped as clusters. A cluster may have CPUs, GIC,
>>>>>> Coresight,
>>>>>> +caches, VFP and power controller and other peripheral hardware.
>>>>>> Generally,
>>>>>> +when the CPUs in the cluster are idle/suspended, the shared resources
>>>>>> may also
>>>>>> +be suspended and resumed before any of the CPUs resume execution.
>>>>>> +
>>>>>> +CPUs are the defined as the PM domain consumers and there is a PM
>>>>>> domain
>>>>>> +provider for the CPUs. Bindings for generic PM domains (genpd) is
>>>>>> described in
>>>>>> +[1].
>>>>>> +
>>>>>> +The ARM CPU PM domain follows the same binding convention as any
>>>>>> generic PM
>>>>>> +domain. Additional binding properties are -
>>>>>> +
>>>>>> +- compatible:
>>>>>> +       Usage: required
>>>>>> +       Value type: <string>
>>>>>> +       Definition: Must also have
>>>>>> +                       "arm,pd"
>>>>>> +               inorder to initialize the genpd provider as ARM CPU PM
>>>>>> domain.
>>>>>
>>>>>
>>>>> A compatible string should represent a particular h/w block. If it is
>>>>> generic, it should represent some sort of standard programming
>>>>> interface (e.g, AHCI, EHCI, etc.). This doesn't seem to be either and
>>>>> is rather just a mapping of what "driver" you want to use.
>>>>>
>>>>> I would expect that identifying a cpu's or cluster's power domain
>>>>> would be done by a phandle between the cpu/cluster node and power
>>>>> domain node.
>>>>
>>>>
>>>> That's correct, the CPU nodes (and other nodes in the cluster like GIC,
>>>> Coresight, etc.) would have phandles to the cluster power domain node.
>>>
>>>
>>> Indeed.
>>>
>>>> But this series is meant to create the driver & binding for those cluster
>>>> power domain(s), so the question is how exactly describe it.
>>>
>>>
>>> I don't think I can add an "arm,pd" compatible property to e.g. a2sl
>>> (for CA15-CPUx) and a3sm (for CA15-SCU) in arch/arm/boot/dts/r8a73a4.dtsi,
>>> as these are just subdomains in a power domain hierarchy, all driven by a
>>> single hardware block.
>>>
>>> I can call e.g. a special registration method, or set up some ops pointer,
>>> for the a2sl and a3sm subdomains from within the "renesas,sysc-rmobile"
>>> driver.
>>>
>> I was hoping the macro would help in such a case. But since your domain
>> cannot be defined as arm,pd (could you explain why, I seem to missing
>> the obvious) would it help if I export a function like that the
>> renesas,sysc-rmobile driver could call and setup the CPU PM domain?
>> There is catch to it though.
>>
>> The problem is -
>>
>> To be generic and not have every driver write code to do this generic
>> functionality, the common code would want to instantiate the
>> arm_pm_domain and therefore the embedded genpd. A pointer instead of
>> actual object, would mean maintaining a list and iterating through it,
>> everytime the domain is suspended/resumed. With an object, I could just
>> do container_of and get the arm_pd. But, we also want to give the
>> platform an option to define certain aspects of the CPU's generic PM
>> domain like the flags, genpd callbacks etc, before the genpd is
>> registered with the framework.
>
>The problem here is what part of the hardware is generic? It is
>generic, but yet ARM specific (presumably not as Kevin pointed out)?
>
The CPU is the generic device that we are currently interested in. It is
not ARM-specific, and I am open to any better compatible description for
such domain providers.

>I'm not exactly clear what the problem is, but it seems you are after
>a common API/subsystem for managing power domains of CPU/clusters.
>
Partly, yes. The common code just manages the common activities during
power on/off of the power domain.
The platform driver needs to be involved to actually power the power
domain hardware on/off.

>I fail to see how the problem is different than any other subsystem
>where you have a core providing common API with hardware specific
>drivers registering their ops with the core.
>
Platform drivers today use PM domains directly. They set up genpd
properties and callbacks (.power_on, .power_off) before registering with
the PM domain framework. For example, the Renesas driver does this -
 
struct generic_pm_domain *genpd;

genpd->flags = GENPD_FLAG_PM_CLK;
pm_genpd_init(genpd, gov ? : &simple_qos_governor, false);
genpd->dev_ops.active_wakeup    = rmobile_pd_active_wakeup;
genpd->power_off                = rmobile_pd_power_down;
genpd->power_on                 = rmobile_pd_power_up;
genpd->attach_dev               = rmobile_pd_attach_dev;
genpd->detach_dev               = rmobile_pd_detach_dev;

The common code also uses genpd and has some additional properties. It 
would also like to receive callbacks for .power_on and .power_off. On
the *same* genpd object -

genpd->flags |= GENPD_FLAG_IRQ_SAFE;
genpd->power_off = arm_pd_power_down;
genpd->power_on = arm_pd_power_up;

Most of the time the platform driver would just set the power_on and
power_off callbacks. In that case it could simply register platform ops
with the core during the OF_DECLARE_1() callback. The core would set up
genpd->power_xxx as arm_pd_power_xxx and relay the callback to the
platform using the ops registered with the core.
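
A minimal sketch of that relay (assuming an ops struct registered at
OF_DECLARE_1() time; the of_arm_pd_ops contents are still hypothetical):

static int arm_pd_power_off(struct generic_pm_domain *genpd)
{
        struct arm_pm_domain *pd =
                container_of(genpd, struct arm_pm_domain, genpd);
        int ret;

        /* Common activity: notify GIC, VFP, etc. of cluster power down */
        ret = cpu_cluster_pm_enter();
        if (ret)
                return ret;

        /* Relay to the platform callback registered with the core */
        if (pd->ops && pd->ops->power_off)
                ret = pd->ops->power_off(genpd);

        return ret;
}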

But in a case like Renesas, where the genpd object created by the
core needs to be modified by the platform driver before it is registered
with the PM domain framework, the complexity arises. This is where
OF_DECLARE_1() lacks an argument. The core code would like to pass the
genpd object for the platform code to amend, and when the callback
returns, use that genpd to register with the PM domain framework.

>>
>> Would such a function work for you? What does everyone think about the
>> @template?
>>
>> struct generic_pm_domain *of_arm_cpu_domain(struct device_node *dn,
>
>This is missing a verb. What does it do?
>
Sorry. /s/of_arm_cpu_domain/of_arm_init_cpu_domain/

>I don't think I really get what you are trying to solve to comment
>whether this looks good or not.
>
Let me know if this helps clarify.

Thanks,
Lina

>Rob
>
>>                struct of_arm_pd_ops *ops, struct generic_pm_domain
>> *template)
>> {
>>        struct arm_pm_domain *pd;
>>
>>        if (!of_device_is_available(dn))
>>                return NULL;
>>
>>         /* This creates the memory for pd and setup basic stuff */
>>        pd = setup_arm_pd(dn);
>>
>>         /* copy the platform's template genpd over to the pd's genpd */
>>        memcpy(&pd.genpd, template, sizeof(*template));
>>
>>         /*
>>          * Now set up the additional ops and flags and register with
>>          * genpd framework
>>          */
>>        register_arm_pd(dn, pd);
>>
>>         /*
>>          * Returning the genpd back to the platform so it can be added
>>          * as subdomains to other domains etc.
>>          */
>>        return &pd.genpd;
>> }
>> EXPORT_SYMBOL(of_arm_cpu_domain);
>>
>> The catch is that platform drivers have to provide a template for the
>> genpd. The @template would never be registered, but would just be used
>> to create the common code's genpd.
>>
>> Any other ideas?
>>
>> Thanks,
>> Lina
Lorenzo Pieralisi Aug. 13, 2015, 3:01 p.m. UTC | #7
On Thu, Aug 06, 2015 at 04:14:51AM +0100, Rob Herring wrote:
> On Tue, Aug 4, 2015 at 6:35 PM, Lina Iyer <lina.iyer@linaro.org> wrote:
> > Define and add Generic PM domains (genpd) for ARM CPU clusters. Many new
> > SoCs group CPUs as clusters. Clusters share common resources like GIC,
> > power rail, caches, VFP, Coresight etc. When all CPUs in the cluster are
> > idle, these shared resources may also be put in their idle state.
> >
> > The idle time between the last CPU entering idle and a CPU resuming
> > execution is an opportunity for these shared resources to be powered
> > down. Generic PM domain provides a framework for defining such power
> > domains and attach devices to the domain. When the devices in the domain
> > are idle at runtime, the domain would also be suspended and resumed
> > before the first of the devices resume execution.
> >
> > We define a generic PM domain for each cluster and attach CPU devices in
> > the cluster to that PM domain. The DT definitions for the SoC describe
> > this relationship. Genpd callbacks for power_on and power_off can then
> > be used to power up/down the shared resources for the domain.
> 
> [...]
> 
> > +ARM CPU Power domains
> > +
> > +The device tree allows describing of CPU power domains in a SoC. In ARM SoC,
> > +CPUs may be grouped as clusters. A cluster may have CPUs, GIC, Coresight,
> > +caches, VFP and power controller and other peripheral hardware. Generally,
> > +when the CPUs in the cluster are idle/suspended, the shared resources may also
> > +be suspended and resumed before any of the CPUs resume execution.
> > +
> > +CPUs are the defined as the PM domain consumers and there is a PM domain
> > +provider for the CPUs. Bindings for generic PM domains (genpd) is described in
> > +[1].
> > +
> > +The ARM CPU PM domain follows the same binding convention as any generic PM
> > +domain. Additional binding properties are -
> > +
> > +- compatible:
> > +       Usage: required
> > +       Value type: <string>
> > +       Definition: Must also have
> > +                       "arm,pd"
> > +               inorder to initialize the genpd provider as ARM CPU PM domain.
> 
> A compatible string should represent a particular h/w block. If it is
> generic, it should represent some sort of standard programming
> interface (e.g, AHCI, EHCI, etc.). This doesn't seem to be either and
> is rather just a mapping of what "driver" you want to use.
> 
> I would expect that identifying a cpu's or cluster's power domain
> would be done by a phandle between the cpu/cluster node and power
> domain node. But I've not really looked at the power domain bindings
> so who knows.

I would expect the same, meaning that a cpu node, like any other device
node, would have a phandle pointing at the respective HW power domain.

I do not really understand why we want a "generic" CPU power domain; what
purpose does it serve? Creating a collection of cpu devices that we
can call a "cluster"?

Thanks,
Lorenzo
Lina Iyer Aug. 13, 2015, 3:45 p.m. UTC | #8
On Thu, Aug 13 2015 at 09:01 -0600, Lorenzo Pieralisi wrote:
>On Thu, Aug 06, 2015 at 04:14:51AM +0100, Rob Herring wrote:
>> On Tue, Aug 4, 2015 at 6:35 PM, Lina Iyer <lina.iyer@linaro.org> wrote:
>> > Define and add Generic PM domains (genpd) for ARM CPU clusters. Many new
>> > SoCs group CPUs as clusters. Clusters share common resources like GIC,
>> > power rail, caches, VFP, Coresight etc. When all CPUs in the cluster are
>> > idle, these shared resources may also be put in their idle state.
>> >
>> > The idle time between the last CPU entering idle and a CPU resuming
>> > execution is an opportunity for these shared resources to be powered
>> > down. Generic PM domain provides a framework for defining such power
>> > domains and attach devices to the domain. When the devices in the domain
>> > are idle at runtime, the domain would also be suspended and resumed
>> > before the first of the devices resume execution.
>> >
>> > We define a generic PM domain for each cluster and attach CPU devices in
>> > the cluster to that PM domain. The DT definitions for the SoC describe
>> > this relationship. Genpd callbacks for power_on and power_off can then
>> > be used to power up/down the shared resources for the domain.
>>
>> [...]
>>
>> > +ARM CPU Power domains
>> > +
>> > +The device tree allows describing of CPU power domains in a SoC. In ARM SoC,
>> > +CPUs may be grouped as clusters. A cluster may have CPUs, GIC, Coresight,
>> > +caches, VFP and power controller and other peripheral hardware. Generally,
>> > +when the CPUs in the cluster are idle/suspended, the shared resources may also
>> > +be suspended and resumed before any of the CPUs resume execution.
>> > +
>> > +CPUs are the defined as the PM domain consumers and there is a PM domain
>> > +provider for the CPUs. Bindings for generic PM domains (genpd) is described in
>> > +[1].
>> > +
>> > +The ARM CPU PM domain follows the same binding convention as any generic PM
>> > +domain. Additional binding properties are -
>> > +
>> > +- compatible:
>> > +       Usage: required
>> > +       Value type: <string>
>> > +       Definition: Must also have
>> > +                       "arm,pd"
>> > +               inorder to initialize the genpd provider as ARM CPU PM domain.
>>
>> A compatible string should represent a particular h/w block. If it is
>> generic, it should represent some sort of standard programming
>> interface (e.g, AHCI, EHCI, etc.). This doesn't seem to be either and
>> is rather just a mapping of what "driver" you want to use.
>>
>> I would expect that identifying a cpu's or cluster's power domain
>> would be done by a phandle between the cpu/cluster node and power
>> domain node. But I've not really looked at the power domain bindings
>> so who knows.
>
>I would expect the same, meaning that a cpu node, like any other device
>node would have a phandle pointing at the respective HW power domain.
>
CPUs have phandles to their domains. That is how the relationship
between the domain provider (power-controller) and the consumer (CPU) is
established.
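
In code terms, the provider side is roughly the following (a sketch using
the genpd OF provider API; the names are illustrative):

#include <linux/of.h>
#include <linux/pm_domain.h>

static struct generic_pm_domain cluster_pd = {
        .name = "cluster-pd",
};

static int __init cluster_pd_register(struct device_node *dn)
{
        pm_genpd_init(&cluster_pd, NULL, false);

        /*
         * Make the node the CPUs' power-domains phandles point at
         * resolve to this genpd.
         */
        return of_genpd_add_provider_simple(dn, &cluster_pd);
}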

>I do not really understand why we want a "generic" CPU power domain, what
>purpose does it serve ? Creating a collection of cpu devices that we
>can call "cluster" ?
>
Nope, not for calling a cluster, a cluster :)

This compatible is used to define the generic behavior of the CPU domain
controller (in addition to the platform-specific behavior of the domain
power controller). The kernel activities for such a power controller are
generally the same, and would otherwise be repeated across platforms.
An analogy to this would be "arm,idle-state", which marks a DT node as
something that also depicts a generic cpuidle C-state.

Thanks,
Lina
Lorenzo Pieralisi Aug. 13, 2015, 3:52 p.m. UTC | #9
On Thu, Aug 13, 2015 at 04:45:03PM +0100, Lina Iyer wrote:
> On Thu, Aug 13 2015 at 09:01 -0600, Lorenzo Pieralisi wrote:
> >On Thu, Aug 06, 2015 at 04:14:51AM +0100, Rob Herring wrote:
> >> On Tue, Aug 4, 2015 at 6:35 PM, Lina Iyer <lina.iyer@linaro.org> wrote:
> >> > Define and add Generic PM domains (genpd) for ARM CPU clusters. Many new
> >> > SoCs group CPUs as clusters. Clusters share common resources like GIC,
> >> > power rail, caches, VFP, Coresight etc. When all CPUs in the cluster are
> >> > idle, these shared resources may also be put in their idle state.
> >> >
> >> > The idle time between the last CPU entering idle and a CPU resuming
> >> > execution is an opportunity for these shared resources to be powered
> >> > down. Generic PM domain provides a framework for defining such power
> >> > domains and attach devices to the domain. When the devices in the domain
> >> > are idle at runtime, the domain would also be suspended and resumed
> >> > before the first of the devices resume execution.
> >> >
> >> > We define a generic PM domain for each cluster and attach CPU devices in
> >> > the cluster to that PM domain. The DT definitions for the SoC describe
> >> > this relationship. Genpd callbacks for power_on and power_off can then
> >> > be used to power up/down the shared resources for the domain.
> >>
> >> [...]
> >>
> >> > +ARM CPU Power domains
> >> > +
> >> > +The device tree allows describing of CPU power domains in a SoC. In ARM SoC,
> >> > +CPUs may be grouped as clusters. A cluster may have CPUs, GIC, Coresight,
> >> > +caches, VFP and power controller and other peripheral hardware. Generally,
> >> > +when the CPUs in the cluster are idle/suspended, the shared resources may also
> >> > +be suspended and resumed before any of the CPUs resume execution.
> >> > +
> >> > +CPUs are the defined as the PM domain consumers and there is a PM domain
> >> > +provider for the CPUs. Bindings for generic PM domains (genpd) is described in
> >> > +[1].
> >> > +
> >> > +The ARM CPU PM domain follows the same binding convention as any generic PM
> >> > +domain. Additional binding properties are -
> >> > +
> >> > +- compatible:
> >> > +       Usage: required
> >> > +       Value type: <string>
> >> > +       Definition: Must also have
> >> > +                       "arm,pd"
> >> > +               inorder to initialize the genpd provider as ARM CPU PM domain.
> >>
> >> A compatible string should represent a particular h/w block. If it is
> >> generic, it should represent some sort of standard programming
> >> interface (e.g, AHCI, EHCI, etc.). This doesn't seem to be either and
> >> is rather just a mapping of what "driver" you want to use.
> >>
> >> I would expect that identifying a cpu's or cluster's power domain
> >> would be done by a phandle between the cpu/cluster node and power
> >> domain node. But I've not really looked at the power domain bindings
> >> so who knows.
> >
> >I would expect the same, meaning that a cpu node, like any other device
> >node would have a phandle pointing at the respective HW power domain.
> >
> CPUs have phandles to their domains. That is how the relationship
> between the domain provider (power-controller) and the consumer (CPU) is
> established.
> 
> >I do not really understand why we want a "generic" CPU power domain, what
> >purpose does it serve ? Creating a collection of cpu devices that we
> >can call "cluster" ?
> >
> Nope, not for calling a cluster, a cluster :)
> 
> This compatible is used to define a generic behavior of the CPU domain
> controller (in addition to the platform specific behavior of the domain
> power controller). The kernel activities for such power controller are
> generally the same which otherwise would be repeated across platforms.

What activities? CPU PM notifiers?

Thanks,
Lorenzo
Lina Iyer Aug. 13, 2015, 4:22 p.m. UTC | #10
On Thu, Aug 13 2015 at 09:52 -0600, Lorenzo Pieralisi wrote:
>On Thu, Aug 13, 2015 at 04:45:03PM +0100, Lina Iyer wrote:
>> On Thu, Aug 13 2015 at 09:01 -0600, Lorenzo Pieralisi wrote:
>> >On Thu, Aug 06, 2015 at 04:14:51AM +0100, Rob Herring wrote:
>> >> On Tue, Aug 4, 2015 at 6:35 PM, Lina Iyer <lina.iyer@linaro.org> wrote:
>> >> > Define and add Generic PM domains (genpd) for ARM CPU clusters. Many new
>> >> > SoCs group CPUs as clusters. Clusters share common resources like GIC,
>> >> > power rail, caches, VFP, Coresight etc. When all CPUs in the cluster are
>> >> > idle, these shared resources may also be put in their idle state.
>> >> >
>> >> > The idle time between the last CPU entering idle and a CPU resuming
>> >> > execution is an opportunity for these shared resources to be powered
>> >> > down. Generic PM domain provides a framework for defining such power
>> >> > domains and attach devices to the domain. When the devices in the domain
>> >> > are idle at runtime, the domain would also be suspended and resumed
>> >> > before the first of the devices resume execution.
>> >> >
>> >> > We define a generic PM domain for each cluster and attach CPU devices in
>> >> > the cluster to that PM domain. The DT definitions for the SoC describe
>> >> > this relationship. Genpd callbacks for power_on and power_off can then
>> >> > be used to power up/down the shared resources for the domain.
>> >>
>> >> [...]
>> >>
>> >> > +ARM CPU Power domains
>> >> > +
>> >> > +The device tree allows describing of CPU power domains in a SoC. In ARM SoC,
>> >> > +CPUs may be grouped as clusters. A cluster may have CPUs, GIC, Coresight,
>> >> > +caches, VFP and power controller and other peripheral hardware. Generally,
>> >> > +when the CPUs in the cluster are idle/suspended, the shared resources may also
>> >> > +be suspended and resumed before any of the CPUs resume execution.
>> >> > +
>> >> > +CPUs are the defined as the PM domain consumers and there is a PM domain
>> >> > +provider for the CPUs. Bindings for generic PM domains (genpd) is described in
>> >> > +[1].
>> >> > +
>> >> > +The ARM CPU PM domain follows the same binding convention as any generic PM
>> >> > +domain. Additional binding properties are -
>> >> > +
>> >> > +- compatible:
>> >> > +       Usage: required
>> >> > +       Value type: <string>
>> >> > +       Definition: Must also have
>> >> > +                       "arm,pd"
>> >> > +               inorder to initialize the genpd provider as ARM CPU PM domain.
>> >>
>> >> A compatible string should represent a particular h/w block. If it is
>> >> generic, it should represent some sort of standard programming
>> >> interface (e.g, AHCI, EHCI, etc.). This doesn't seem to be either and
>> >> is rather just a mapping of what "driver" you want to use.
>> >>
>> >> I would expect that identifying a cpu's or cluster's power domain
>> >> would be done by a phandle between the cpu/cluster node and power
>> >> domain node. But I've not really looked at the power domain bindings
>> >> so who knows.
>> >
>> >I would expect the same, meaning that a cpu node, like any other device
>> >node would have a phandle pointing at the respective HW power domain.
>> >
>> CPUs have phandles to their domains. That is how the relationship
>> between the domain provider (power-controller) and the consumer (CPU) is
>> established.
>>
>> >I do not really understand why we want a "generic" CPU power domain, what
>> >purpose does it serve ? Creating a collection of cpu devices that we
>> >can call "cluster" ?
>> >
>> Nope, not for calling a cluster, a cluster :)
>>
>> This compatible is used to define a generic behavior of the CPU domain
>> controller (in addition to the platform specific behavior of the domain
>> power controller). The kernel activities for such power controller are
>> generally the same which otherwise would be repeated across platforms.
>
>What activities ? CPU PM notifiers ?
>
Yes, for now. Maybe someday we can get rid of these notifiers and
invoke the subsystems from these callbacks directly. Kevin proposed
this idea. With the little exploration I have done, I don't have a good
way to do that yet.

I am imagining here (only imagining, at this time) that I could tie this
to last-man-down determination for cluster idle states and call into
cpuidle-PSCI to help compose the composite state id.

Thanks,
Lina
Sudeep Holla Aug. 13, 2015, 5:26 p.m. UTC | #11
On 13/08/15 16:45, Lina Iyer wrote:
> On Thu, Aug 13 2015 at 09:01 -0600, Lorenzo Pieralisi wrote:
>> On Thu, Aug 06, 2015 at 04:14:51AM +0100, Rob Herring wrote:
>>> On Tue, Aug 4, 2015 at 6:35 PM, Lina Iyer <lina.iyer@linaro.org> wrote:

[..]

>>>
>>>> +ARM CPU Power domains
>>>> +
>>>> +The device tree allows describing of CPU power domains in a SoC. In ARM SoC,
>>>> +CPUs may be grouped as clusters. A cluster may have CPUs, GIC, Coresight,
>>>> +caches, VFP and power controller and other peripheral hardware. Generally,
>>>> +when the CPUs in the cluster are idle/suspended, the shared resources may also
>>>> +be suspended and resumed before any of the CPUs resume execution.
>>>> +
>>>> +CPUs are the defined as the PM domain consumers and there is a PM domain
>>>> +provider for the CPUs. Bindings for generic PM domains (genpd) is described in
>>>> +[1].
>>>> +
>>>> +The ARM CPU PM domain follows the same binding convention as any generic PM
>>>> +domain. Additional binding properties are -
>>>> +
>>>> +- compatible:
>>>> +       Usage: required
>>>> +       Value type: <string>
>>>> +       Definition: Must also have
>>>> +                       "arm,pd"
>>>> +               inorder to initialize the genpd provider as ARM CPU PM domain.
>>>
>>> A compatible string should represent a particular h/w block. If it is
>>> generic, it should represent some sort of standard programming
>>> interface (e.g, AHCI, EHCI, etc.). This doesn't seem to be either and
>>> is rather just a mapping of what "driver" you want to use.
>>>
>>> I would expect that identifying a cpu's or cluster's power domain
>>> would be done by a phandle between the cpu/cluster node and power
>>> domain node. But I've not really looked at the power domain bindings
>>> so who knows.
>>
>> I would expect the same, meaning that a cpu node, like any other device
>> node would have a phandle pointing at the respective HW power domain.
>>
> CPUs have phandles to their domains. That is how the relationship
> between the domain provider (power-controller) and the consumer (CPU) is
> established.
>
>> I do not really understand why we want a "generic" CPU power domain, what
>> purpose does it serve ? Creating a collection of cpu devices that we
>> can call "cluster" ?
>>
> Nope, not for calling a cluster, a cluster :)
>
> This compatible is used to define a generic behavior of the CPU domain
> controller (in addition to the platform specific behavior of the domain
> power controller). The kernel activities for such power controller are
> generally the same which otherwise would be repeated across platforms.

Having gone through this series and the one using it[1], the only common
activity is the cluster PM notifiers. Other than that, it just
creates indirection for now. The scenario might change in the future, but
for now it seems unnecessary.

Also, if you look at the shmobile power controller driver, it covers all
the devices including CPUs, unlike the QCOM power controller, which
handles only CPUs. Yes, we could skip the genpd creation there just for
the CPUs, but IMO creating the power domains should be part of the power
controller driver.

You can add helper functions for all the ARM-specific code that can be
reused by the multiple power controller drivers handling CPU/cluster
power domains.
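
That is, something along these lines (a sketch; the helper names are
invented here):

#include <linux/cpu_pm.h>

/*
 * Common last-man work a power controller driver could call from its
 * own genpd .power_off callback, instead of a separate "arm,pd" layer.
 */
int cpu_cluster_pd_power_off(void)
{
        return cpu_cluster_pm_enter();
}

/* ...and the first-man counterpart, from its .power_on callback. */
int cpu_cluster_pd_power_on(void)
{
        return cpu_cluster_pm_exit();
}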

> An analogy to this would be the "arm,idle-state" that defines the DT
> node as something that also depicts a generic cpuidle C state.
>

I tend to disagree. In idle-states, the nodes define generic
properties and they can be parsed in a generic way. That's not the case
here. Each power controller binding differs.

Yes, the generic compatible might be useful to identify that this power
domain handles a CPU/cluster, but there will be more power controller
specific code than generic code.

Regards,
Sudeep

[1] http://www.spinics.net/lists/arm-kernel/msg437304.html
Lina Iyer Aug. 13, 2015, 7:27 p.m. UTC | #12
On Thu, Aug 13 2015 at 11:26 -0600, Sudeep Holla wrote:
>
>
>On 13/08/15 16:45, Lina Iyer wrote:
>>On Thu, Aug 13 2015 at 09:01 -0600, Lorenzo Pieralisi wrote:
>>>On Thu, Aug 06, 2015 at 04:14:51AM +0100, Rob Herring wrote:
>>>>On Tue, Aug 4, 2015 at 6:35 PM, Lina Iyer <lina.iyer@linaro.org> wrote:
>
>[..]
>
>>>>
>>>>>+ARM CPU Power domains
>>>>>+
>>>>>+The device tree allows describing of CPU power domains in a SoC. In ARM SoC,
>>>>>+CPUs may be grouped as clusters. A cluster may have CPUs, GIC, Coresight,
>>>>>+caches, VFP and power controller and other peripheral hardware. Generally,
>>>>>+when the CPUs in the cluster are idle/suspended, the shared resources may also
>>>>>+be suspended and resumed before any of the CPUs resume execution.
>>>>>+
>>>>>+CPUs are the defined as the PM domain consumers and there is a PM domain
>>>>>+provider for the CPUs. Bindings for generic PM domains (genpd) is described in
>>>>>+[1].
>>>>>+
>>>>>+The ARM CPU PM domain follows the same binding convention as any generic PM
>>>>>+domain. Additional binding properties are -
>>>>>+
>>>>>+- compatible:
>>>>>+       Usage: required
>>>>>+       Value type: <string>
>>>>>+       Definition: Must also have
>>>>>+                       "arm,pd"
>>>>>+               inorder to initialize the genpd provider as ARM CPU PM domain.
>>>>
>>>>A compatible string should represent a particular h/w block. If it is
>>>>generic, it should represent some sort of standard programming
>>>>interface (e.g, AHCI, EHCI, etc.). This doesn't seem to be either and
>>>>is rather just a mapping of what "driver" you want to use.
>>>>
>>>>I would expect that identifying a cpu's or cluster's power domain
>>>>would be done by a phandle between the cpu/cluster node and power
>>>>domain node. But I've not really looked at the power domain bindings
>>>>so who knows.
>>>
>>>I would expect the same, meaning that a cpu node, like any other device
>>>node would have a phandle pointing at the respective HW power domain.
>>>
>>CPUs have phandles to their domains. That is how the relationship
>>between the domain provider (power-controller) and the consumer (CPU) is
>>established.
>>
>>>I do not really understand why we want a "generic" CPU power domain, what
>>>purpose does it serve ? Creating a collection of cpu devices that we
>>>can call "cluster" ?
>>>
>>Nope, not for calling a cluster, a cluster :)
>>
>>This compatible is used to define a generic behavior of the CPU domain
>>controller (in addition to the platform specific behavior of the domain
>>power controller). The kernel activities for such power controller are
>>generally the same which otherwise would be repeated across platforms.
>
>Having gone through this series and the one using it[1], the only common
>activity is just cluster pm notifiers. Other than that it's just
>creating indirection for now. The scenario might change in future, but
>for now it seems unnecessary.
>
Not sure what seems unnecessary to you. Platforms do have to send
cluster PM notifications, and they have to duplicate the reference
counting. Also, the PM domain framework allows a hierarchy, which is
quite desirable for powering down parts of the SoC that have to be
powered on, or clocked high, only while a CPU is running.

Cluster PM notifications are just the one aspect of this that we
currently handle in this first submission. The patchset as a whole
provides a way to determine, in Linux, the last man down and the first
man up, and to carry out activities at those points. There are a bunch
of things done to save power when the last man goes down - turn off
debuggers, switch off PLLs, reduce bus clocks, flush caches, amongst the
few that I know of. Some of them are platform specific and some of them
aren't. These patches provide a way for both to be done easily. CPU
runtime PM and PM domains, as a framework, closely track what the
hardware does.

As mentioned in another mail in this thread, there is also an option to
determine the cluster flush state and use it in conjunction with PSCI to
do OS-initiated cluster power down.

>Also if you look at the shmobile power controller driver, it covers all
>the devices including CPUs unlike QCOM power controller which handles
>only CPU. Yes we can skip CPU genpd creation there only for CPU, IMO
>creating the power domains should be part of power controller driver.
>
>You can add helper functions for all the ARM specific code that can be
>reused by multiple power controller drivers handling CPU/Cluster power
>domain.
>
Sure, some architectures may desire that. I have that addressed in [2].

>>An analogy to this would be the "arm,idle-state" that defines the DT
>>node as something that also depicts a generic cpuidle C state.
>>
>
>I tend to disagree. In idle-states, the nodes define that generic
>properties and they can be parsed in a generic way. That's not the case
>here. Each power controller binding differs.
>
Yes, maybe we will have common elements, like the latency and residency
of powering a domain on/off, that a genpd governor can utilize in
determining whether it's worth powering off the domain or not.

>Yes the generic compatible might be useful to identify that this power
>domain handles CPU/Cluster, but there will be more power controller
>specific things compared to generic code.
>
Agreed, not debating that. The power controller is very SoC specific,
but the debuggers, GIC, caches, buses etc. are not. Many SoCs have
almost similar needs for this hardware supplemental to the CPUs, and the
trend seems to be toward generalizing many of these components.

Thanks,
Lina

>[1] http://www.spinics.net/lists/arm-kernel/msg437304.html
[2] http://www.spinics.net/lists/arm-kernel/msg438971.html
Kevin Hilman Aug. 14, 2015, 3:51 a.m. UTC | #13
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> writes:

> On Thu, Aug 13, 2015 at 04:45:03PM +0100, Lina Iyer wrote:
>> On Thu, Aug 13 2015 at 09:01 -0600, Lorenzo Pieralisi wrote:
>> >On Thu, Aug 06, 2015 at 04:14:51AM +0100, Rob Herring wrote:
>> >> On Tue, Aug 4, 2015 at 6:35 PM, Lina Iyer <lina.iyer@linaro.org> wrote:
>> >> > Define and add Generic PM domains (genpd) for ARM CPU clusters. Many new
>> >> > SoCs group CPUs as clusters. Clusters share common resources like GIC,
>> >> > power rail, caches, VFP, Coresight etc. When all CPUs in the cluster are
>> >> > idle, these shared resources may also be put in their idle state.
>> >> >
>> >> > The idle time between the last CPU entering idle and a CPU resuming
>> >> > execution is an opportunity for these shared resources to be powered
>> >> > down. Generic PM domain provides a framework for defining such power
>> >> > domains and attach devices to the domain. When the devices in the domain
>> >> > are idle at runtime, the domain would also be suspended and resumed
>> >> > before the first of the devices resume execution.
>> >> >
>> >> > We define a generic PM domain for each cluster and attach CPU devices in
>> >> > the cluster to that PM domain. The DT definitions for the SoC describe
>> >> > this relationship. Genpd callbacks for power_on and power_off can then
>> >> > be used to power up/down the shared resources for the domain.
>> >>
>> >> [...]
>> >>
>> >> > +ARM CPU Power domains
>> >> > +
>> >> > +The device tree allows describing of CPU power domains in a SoC. In ARM SoC,
>> >> > +CPUs may be grouped as clusters. A cluster may have CPUs, GIC, Coresight,
>> >> > +caches, VFP and power controller and other peripheral hardware. Generally,
>> >> > +when the CPUs in the cluster are idle/suspended, the shared resources may also
>> >> > +be suspended and resumed before any of the CPUs resume execution.
>> >> > +
>> >> > +CPUs are the defined as the PM domain consumers and there is a PM domain
>> >> > +provider for the CPUs. Bindings for generic PM domains (genpd) is described in
>> >> > +[1].
>> >> > +
>> >> > +The ARM CPU PM domain follows the same binding convention as any generic PM
>> >> > +domain. Additional binding properties are -
>> >> > +
>> >> > +- compatible:
>> >> > +       Usage: required
>> >> > +       Value type: <string>
>> >> > +       Definition: Must also have
>> >> > +                       "arm,pd"
>> >> > +               inorder to initialize the genpd provider as ARM CPU PM domain.
>> >>
>> >> A compatible string should represent a particular h/w block. If it is
>> >> generic, it should represent some sort of standard programming
>> >> interface (e.g, AHCI, EHCI, etc.). This doesn't seem to be either and
>> >> is rather just a mapping of what "driver" you want to use.
>> >>
>> >> I would expect that identifying a cpu's or cluster's power domain
>> >> would be done by a phandle between the cpu/cluster node and power
>> >> domain node. But I've not really looked at the power domain bindings
>> >> so who knows.
>> >
>> >I would expect the same, meaning that a cpu node, like any other device
>> >node would have a phandle pointing at the respective HW power domain.
>> >
>> CPUs have phandles to their domains. That is how the relationship
>> between the domain provider (power-controller) and the consumer (CPU) is
>> established.
>> 
>> >I do not really understand why we want a "generic" CPU power domain, what
>> >purpose does it serve ? Creating a collection of cpu devices that we
>> >can call "cluster" ?
>> >
>> Nope, not for calling a cluster, a cluster :)
>> 
>> This compatible is used to define a generic behavior of the CPU domain
>> controller (in addition to the platform specific behavior of the domain
>> power controller). The kernel activities for such power controller are
>> generally the same which otherwise would be repeated across platforms.
>
> What activities ? CPU PM notifiers ?

For today, yes.

However, you can think of CPU PM notifiers as the equivalent of runtime
PM hooks.  They're called when the "devices" are about to be powered off
(runtime suspended) or powered on (runtime resumed.)

However the CPU PM framework and notifiers are rather dumb compared to
runtime PM.  For example, runtime PM gives you usecounting, autosuspend,
control from userspace, statistics, etc. etc.  Also, IMO, CPU PM will 
not scale well for multiple clusters.

What if instead, we used runtime PM for the things that the CPU PM
notifiers manage (GIC, VFP, Coresight, etc.), and those drivers used
runtime PM callbacks to replace their CPU PM notifiers?  We'd then be in
a beautiful land where CPU "devices" (and the other connected logic) can
be modeled as devices using runtime PM just like every other device in
the system.
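
To make that concrete: a driver's CPU PM notifier body would move more
or less verbatim into its runtime PM callbacks. A rough, untested sketch
(the "foo" driver and its callbacks are invented for illustration):

#include <linux/device.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>

/* Hypothetical driver for some cluster-shared logic: what used to run
 * from the CPU_CLUSTER_PM_ENTER/EXIT notifiers becomes runtime PM
 * callbacks, invoked when the cluster genpd powers down/up. */
static int foo_runtime_suspend(struct device *dev)
{
	dev_dbg(dev, "cluster going down: save context\n");
	return 0;
}

static int foo_runtime_resume(struct device *dev)
{
	dev_dbg(dev, "cluster coming up: restore context\n");
	return 0;
}

/* wired up through the driver's .pm field */
static const struct dev_pm_ops foo_pm_ops = {
	SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
};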

Then take it up a level... what if we then could use genpd to model the
"cluster", made up of the CPUs and "connected" devices (GIC, VFP, etc.)
but also model the shared L2$ as a device which was using runtime PM.

Now we're in a place where we can use all the benefits of runtime PM,
plus the governor features of genpd to start doing a real, multi-CPU,
multi-cluster CPUidle that's flexible enough to model the various
dependencies in an SoC independent way, but generic enough to be able to
use common governors for last-man standing, cache flushing, etc. etc.
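
For the multi-cluster part, genpd already nests: each cluster domain
could be a subdomain of a SoC-wide domain, so the usecounting propagates
CPUs -> cluster -> SoC. Another untested sketch, with invented names:

#include <linux/init.h>
#include <linux/pm_domain.h>

static struct generic_pm_domain soc_pd = { .name = "soc" };
static struct generic_pm_domain cluster0_pd = { .name = "cluster0" };
static struct generic_pm_domain cluster1_pd = { .name = "cluster1" };

static int __init cluster_pd_sketch_init(void)
{
	pm_genpd_init(&soc_pd, &simple_qos_governor, false);
	pm_genpd_init(&cluster0_pd, &simple_qos_governor, false);
	pm_genpd_init(&cluster1_pd, &simple_qos_governor, false);

	/* the last cluster going down lets the SoC domain go down too */
	pm_genpd_add_subdomain(&soc_pd, &cluster0_pd);
	pm_genpd_add_subdomain(&soc_pd, &cluster1_pd);

	return 0;
}
device_initcall(cluster_pd_sketch_init);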

Kevin
Lina Iyer Aug. 14, 2015, 4:02 a.m. UTC | #14
On Thu, Aug 13 2015 at 21:51 -0600, Kevin Hilman wrote:
>Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> writes:
>
>[...]

- Off list

Nicely written response Kevin.
Thank you.

--Lina
Sudeep Holla Aug. 14, 2015, 9:52 a.m. UTC | #15
On 13/08/15 20:27, Lina Iyer wrote:
> On Thu, Aug 13 2015 at 11:26 -0600, Sudeep Holla wrote:
>>

[...]

>>
>> Having gone through this series and the one using it[1], the only common
>> activity is just cluster PM notifiers. Other than that it's just
>> creating indirection for now. The scenario might change in future, but
>> for now it seems unnecessary.
>>
> Not sure what seems unnecessary to you. Platforms do have to send
> cluster PM notifications, and they have to duplicate reference counting.
> Also, the PM domain framework allows hierarchy, which is quite desirable
> to power down parts of the SoC that are powered on or have to be clocked
> high while the CPU is running.
>

Agreed, no argument on using genpd for CPU PM and all the goodies genpd
provides; what I object to is the way this patch creates the genpd
domains. That needs to be part of your power controller driver.

> Cluster PM notifications are just one aspect of this that we currently
> handle in the first submission. The patchset as a whole provides a way
> to determine in Linux the last man down and the first man up and carry
> out activities. There are a bunch of things done to save power when the
> last man goes down - turn off debuggers, switch off PLLs, reduce bus
> clocks, flush caches, among a few that I know of. Some of them are
> platform specific and some of them aren't. These patches provide a way
> for both to be done easily. CPU runtime PM and PM domains, as a
> framework, closely track what the hardware does.
>

Again no argument, I just favour common interface functions. Since each
power controller/platform will have a specific sequence, it might be hard
to generalize that, but I may be wrong. OTOH, common interface functions
to handle those components might give some flexibility to the power
controllers.
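
Something like the below is what I have in mind; purely illustrative,
and the helper name is invented:

#include <linux/cpu_pm.h>
#include <linux/pm_domain.h>

/* The CPU/cluster-generic callbacks, shared across platforms */
static int arm_cpu_pd_power_off(struct generic_pm_domain *genpd)
{
	cpu_cluster_pm_enter();
	return 0;
}

static int arm_cpu_pd_power_on(struct generic_pm_domain *genpd)
{
	cpu_cluster_pm_exit();
	return 0;
}

/* Hypothetical helper: the power controller driver creates and owns
 * its genpd, then calls this for the generic CPU domain plumbing. */
void arm_cpu_pd_init(struct generic_pm_domain *genpd)
{
	genpd->power_off = arm_cpu_pd_power_off;
	genpd->power_on = arm_cpu_pd_power_on;
	genpd->flags |= GENPD_FLAG_IRQ_SAFE;
	pm_genpd_init(genpd, &simple_qos_governor, false);
}

A platform that needs its own sequencing can wrap ->power_off/->power_on
around the common bits instead of using them directly.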

> Mentioned in another mail in this thread is also an option to determine
> the cluster flush state and use it in conjunction with PSCI to do
> OS-initiated cluster power down.
>

I haven't explored that route yet; with platform co-ordination we don't
need much complexity in the kernel :). OS co-ordination is a different
story, as we need to consider the secure/non-secure world dimensions
there. We will have to consider the privileges/restrictions Linux has.

>> Also if you look at the shmobile power controller driver, it covers all
>> the devices including CPUs, unlike the QCOM power controller which
>> handles only CPUs. Yes, we can skip the CPU genpd creation there; IMO,
>> creating the power domains should be part of the power controller driver.
>>
>> You can add helper functions for all the ARM-specific code that can be
>> reused by multiple power controller drivers handling CPU/cluster power
>> domains.
>>
> Sure, some architectures may desire that. I have them addressed in [2].
>

I haven't looked at that yet, but will do soon.

>>> An analogy to this would be the "arm,idle-state" that defines the DT
>>> node as something that also depicts a generic cpuidle C state.
>>>
>>
>> I tend to disagree. In idle-states, the nodes define generic
>> properties and they can be parsed in a generic way. That's not the case
>> here. Each power controller binding differs.
>>
> Yes, maybe we will have common elements like latency and residency of
> powering on/off a domain that a genpd governor can utilize in
> determining if it's worth powering off the domain or not.
>

Makes sense, but as Rob pointed out, how generic those are on various
platforms is something we need to check against a few platforms before
we build the generic infrastructure.
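
E.g. the sort of generic check a genpd governor could make, if platforms
can really provide comparable numbers (hypothetical structure, invented
names, illustration only):

#include <linux/types.h>

/* Per-domain costs a platform would advertise */
struct cpu_pd_params {
	u64 power_off_latency_ns;
	u64 power_on_latency_ns;
	u64 min_residency_ns;	/* break-even time for this state */
};

static bool cpu_pd_power_down_ok(const struct cpu_pd_params *p,
				 u64 expected_idle_ns)
{
	/* entering and leaving the state must fit in the idle window */
	if (expected_idle_ns < p->power_off_latency_ns +
			       p->power_on_latency_ns)
		return false;
	/* and the stay must be long enough to actually save power */
	return expected_idle_ns >= p->min_residency_ns;
}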

>> Yes the generic compatible might be useful to identify that this power
>> domain handles CPU/Cluster, but there will be more power controller
>> specific things compared to generic code.
>>
> Agreed, not debating that. The power controller is very SoC specific,
> but not the debuggers, GIC, caches, buses etc. Many SoCs have almost
> similar needs for this hardware supplemental to the CPUs, and the trend
> seems to be to generalize many of these components.
>

Generalizing those components and using genpd is absolutely fine; it is
just the mechanics that I am debating.

Regards,
Sudeep
Lorenzo Pieralisi Aug. 14, 2015, 3:49 p.m. UTC | #16
On Fri, Aug 14, 2015 at 04:51:15AM +0100, Kevin Hilman wrote:
> Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> writes:
> 
> [...]
> >
> > What activities ? CPU PM notifiers ?
> 
> For today, yes.
> 
> However, you can think of CPU PM notifiers as the equivalent of runtime
> PM hooks.  They're called when the "devices" are about to be powered off
> (runtime suspended) or powered on (runtime resumed.)
> 
> However the CPU PM framework and notifiers are rather dumb compared to
> runtime PM.  For example, runtime PM gives you usecounting, autosuspend,
> control from userspace, statistics, etc. etc.  Also, IMO, CPU PM will 
> not scale well for multiple clusters.
> 
> What if instead, we used runtime PM for the things that the CPU PM
> notifiers manage (GIC, VFP, Coresight, etc.), and those drivers used
> runtime PM callbacks to replace their CPU PM notifiers?  We'd then be in
> a beautiful land where CPU "devices" (and the other connected logic) can
> be modeled as devices using runtime PM just like every other device in
> the system.

I would agree with that (even though I do not see how we can make
e.g. GIC, VFP and arch timers behave like devices from a runtime PM
standpoint); still, I do not see why we need a virtual power domain for
that: the CPU "devices" should be attached to the HW CPU power domain.

More below for systems relying on FW interfaces to handle CPU power
management.

> Then take it up a level... what if we then could use genpd to model the
> "cluster", made of of the CPUs and "connected" devices (GIC, VFP, etc.)
> but also modeled the shared L2$ as a device which was using runtime PM.

I have to understand what "modeled" means (do we create a struct device
on purpose for that ? Same goes for GIC and VFP).

But overall I get the gist of what you are saying, we just have to see
how this can be implemented within the genPD framework.

I suspect the "virtual" power domain you are introducing is there for
systems where the power controller is hidden from the kernel (i.e. PSCI),
where basically the CPU "devices" can't be attached to a power domain
simply because that power domain is not managed in the kernel but
by firmware.

> Now we're in a place where we can use all the benefits of runtime PM,
> plus the governor features of genpd to start doing a real, multi-CPU,
> multi-cluster CPUidle that's flexible enough to model the various
> dependencies in an SoC independent way, but generic enough to be able to
> use common governors for last-man standing, cache flushing, etc. etc.

I do not disagree (even though I think that last man standing is pushing
this concept a bit over the top); I am just concerned about the points
raised above, though most of them should be reasonably simple to solve.

Thanks,
Lorenzo
Kevin Hilman Aug. 14, 2015, 7:11 p.m. UTC | #17
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> writes:

> On Fri, Aug 14, 2015 at 04:51:15AM +0100, Kevin Hilman wrote:

[...]

>> However, you can think of CPU PM notifiers as the equivalent of runtime
>> PM hooks.  They're called when the "devices" are about to be powered off
>> (runtime suspended) or powered on (runtime resumed.)
>> 
>> However the CPU PM framework and notifiers are rather dumb compared to
>> runtime PM.  For example, runtime PM gives you usecounting, autosuspend,
>> control from userspace, statistics, etc. etc.  Also, IMO, CPU PM will 
>> not scale well for multiple clusters.
>> 
>> What if instead, we used runtime PM for the things that the CPU PM
>> notifiers manage (GIC, VFP, Coresight, etc.), and those drivers used
>> runtime PM callbacks to replace their CPU PM notifiers?  We'd then be in
>> a beautiful land where CPU "devices" (and the other connected logic) can
>> be modeled as devices using runtime PM just like every other device in
>> the system.
>
> I would agree with that (even though I do not see how we can make
> e.g. GIC, VFP and arch timers behave like devices from a runtime PM
> standpoint);

Sure, that might be a stretch due to the implementation details, but
conceptually it models the hardware well and I'd like to explore runtime
PM for all of these "devices", though it's not the highest priority.

> still, I do not see why we need a virtual power domain for
> that: the CPU "devices" should be attached to the HW CPU power domain.
>
> More below for systems relying on FW interfaces to handle CPU power
> management.
>
>> Then take it up a level... what if we then could use genpd to model the
>> "cluster", made of of the CPUs and "connected" devices (GIC, VFP, etc.)
>> but also modeled the shared L2$ as a device which was using runtime PM.
>
> I have to understand what "modeled" means (do we create a struct device
> on purpose for that ? Same goes for GIC and VFP).

Not necessarily a struct device for the cluster, but for the CPUs (which
already have one) and possibly GIC, VFP, timers, etc.  With that in
place, the cluster would just be modeled by a genpd (which is what Lina's
series is doing).

> But overall I get the gist of what you are saying, we just have to see
> how this can be implemented within the genPD framework.
>
> I suspect the "virtual" power domain you are introducing is there for
> systems where the power controller is hidden from the kernel (i.e. PSCI),
> where basically the CPU "devices" can't be attached to a power domain
> simply because that power domain is not managed in the kernel but
> by firmware.

The main idea behind a "virtual" power domain was to collect the common
parts of cluster management, possibly governors etc.  However, maybe
it's better just to have a set of functions that the "real" hw power domain
drivers could use for the common parts.  That might get rid of the need
to describe this in DT, which I think is what Rob is suggesting also.

>> Now we're in a place where we can use all the benefits of runtime PM,
>> plus the governor features of genpd to start doing a real, multi-CPU,
>> multi-cluster CPUidle that's flexible enough to model the various
>> dependencies in an SoC independent way, but generic enough to be able to
>> use common governors for last-man standing, cache flushing, etc. etc.
>
> I do not disagree (even though I think that last man standing is pushing
> this concept a bit over the top); I am just concerned about the points
> raised above, though most of them should be reasonably simple to solve.

Good, hopefully we can have a good discussion about this at Plumbers
next week, as the issues above and those raised in Lina's series are the
main issues I want to raise in my part of the EAS/PM track[1].

See you there!

Kevin

[1] https://linuxplumbersconf.org/2015/ocw/events/LPC2015/tracks/501
Patch

diff --git a/Documentation/arm/cpu-domains.txt b/Documentation/arm/cpu-domains.txt
new file mode 100644
index 0000000..3e535b7
--- /dev/null
+++ b/Documentation/arm/cpu-domains.txt
@@ -0,0 +1,49 @@ 
+CPU Clusters and PM domain
+
+Newer ARM CPUs are grouped in a SoC as clusters. A cluster, in addition to
+the CPUs, may have caches, GIC, VFP and an architecture-specific power
+controller to power the cluster. A cluster may also be nested in another
+cluster, the hierarchy of which is depicted in the device tree. The CPUIdle
+framework enables the CPUs to determine the sleep time and enter a low power
+state to save power during periods of idle. CPUs in a cluster may enter and
+exit idle state independently. During the time when all the CPUs are in idle
+state, the cluster can safely be in idle state as well. When the last of the
+CPUs is powered off as a result of idle, the cluster may also be powered
+down, but the domain must be powered on before the first of the CPUs in the
+cluster resumes execution.
+
+ARM SoCs can power down the CPU and resume execution in a few microseconds,
+and the domain that powers the CPU cluster also has comparable idle latencies.
+The ARM CPU WFI signal is used as a hardware trigger for the cluster hardware
+to enter its idle state. The hardware can be programmed in advance to put the
+cluster in the desired idle state befitting the wakeup latency requested by
+the CPUs. When all the CPUs in a cluster have executed their WFI instruction,
+the state machine for the power controller may put the cluster components in
+their power down or idle state. Generally, the domains would power on with the
+hardware sensing the CPU's interrupts. The domains may however, need to be
+reconfigured by the CPU to remain active, until the last CPU is ready to enter
+idle again. To power down a cluster, it is generally required to power down
+all the CPUs. The caches would also need to be flushed. The hardware state of
+some of the components may need to be saved and restored when powered back on.
+SoC vendors may also have hardware-specific configuration that must be done
+before the cluster can be powered off. When the cluster is powered off,
+notifications may be sent out to other SoC components to scale down or even
+power off their resources.
+
+Power management domains represent the relationship of devices and their
+power controllers. They are represented in the DT as domain consumers and
+providers. A device may have a domain provider, and a domain provider may
+support multiple domain consumers. Domains, like clusters, may also be nested
+inside one another. A domain that has no active consumer may be powered off,
+and any resuming consumer would trigger the domain back to active. Parent
+domains may be powered off when the child domains are powered off. An ARM CPU
+cluster can be fashioned as a PM domain. When the CPU devices are powered off,
+the PM domain may be powered off.
+
+The code in Generic PM domains handles the hierarchy of devices, domains and
+the reference counting of objects leading to last man down and first man up.
+The ARM CPU domains common code defines PM domains for each CPU cluster and
+attaches the domains' CPU devices to them as specified in the DT. This happens
+automatically at kernel init, when the domain is specified as compatible with
+"arm,pd". Powering on/off the common cluster hardware would also be done when
+the PM domain is runtime suspended or resumed.
diff --git a/Documentation/devicetree/bindings/arm/cpudomains.txt b/Documentation/devicetree/bindings/arm/cpudomains.txt
new file mode 100644
index 0000000..d945861
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/cpudomains.txt
@@ -0,0 +1,23 @@ 
+ARM CPU Power domains
+
+The device tree allows describing CPU power domains in a SoC. In an ARM SoC,
+CPUs may be grouped as clusters. A cluster may have CPUs, GIC, Coresight,
+caches, VFP, a power controller and other peripheral hardware. Generally,
+when the CPUs in the cluster are idle/suspended, the shared resources may also
+be suspended and resumed before any of the CPUs resume execution.
+
+CPUs are defined as the PM domain consumers and there is a PM domain
+provider for the CPUs. Bindings for generic PM domains (genpd) are described
+in [1].
+
+The ARM CPU PM domain follows the same binding convention as any generic PM
+domain. Additional binding properties are -
+
+- compatible:
+	Usage: required
+	Value type: <string>
+	Definition: Must also have
+			"arm,pd"
+		in order to initialize the genpd provider as an ARM CPU PM domain.
+
+[1]. Documentation/devicetree/bindings/power/power_domain.txt
diff --git a/arch/arm/common/Makefile b/arch/arm/common/Makefile
index 6ee5959..e2e2c63 100644
--- a/arch/arm/common/Makefile
+++ b/arch/arm/common/Makefile
@@ -18,3 +18,4 @@  AFLAGS_vlock.o			:= -march=armv7-a
 obj-$(CONFIG_TI_PRIV_EDMA)	+= edma.o
 obj-$(CONFIG_BL_SWITCHER)	+= bL_switcher.o
 obj-$(CONFIG_BL_SWITCHER_DUMMY_IF) += bL_switcher_dummy_if.o
+obj-$(CONFIG_PM_GENERIC_DOMAINS) += domains.o
diff --git a/arch/arm/common/domains.c b/arch/arm/common/domains.c
new file mode 100644
index 0000000..15981e9
--- /dev/null
+++ b/arch/arm/common/domains.c
@@ -0,0 +1,166 @@ 
+/*
+ * ARM CPU Generic PM Domain.
+ *
+ * Copyright (C) 2015 Linaro Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/cpu_pm.h>
+#include <linux/device.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/pm_domain.h>
+#include <linux/pm_runtime.h>
+#include <linux/slab.h>
+
+#define ARM_PD_NAME_MAX 36
+
+struct arm_pm_domain {
+	struct generic_pm_domain genpd;
+};
+
+static inline
+struct arm_pm_domain *to_arm_pd(struct generic_pm_domain *d)
+{
+	return container_of(d, struct arm_pm_domain, genpd);
+}
+
+static int arm_pd_power_down(struct generic_pm_domain *genpd)
+{
+	/*
+	 * Notify CPU PM domain power down
+	 * TODO: Call the notifiers directly from here.
+	 */
+	cpu_cluster_pm_enter();
+
+	return 0;
+}
+
+static int arm_pd_power_up(struct generic_pm_domain *genpd)
+{
+	/* Notify CPU PM domain power up */
+	cpu_cluster_pm_exit();
+
+	return 0;
+}
+
+static void __init run_cpu(void *unused)
+{
+	struct device *cpu_dev = get_cpu_device(smp_processor_id());
+
+	/* We are running, increment the usage count */
+	pm_runtime_get_noresume(cpu_dev);
+}
+
+static int __init arm_domain_cpu_init(void)
+{
+	int cpuid, ret;
+
+	/* Find any CPU nodes with a phandle to this power domain */
+	for_each_possible_cpu(cpuid) {
+		struct device *cpu_dev;
+		struct of_phandle_args pd_args;
+
+		cpu_dev = get_cpu_device(cpuid);
+		if (!cpu_dev) {
+			pr_warn("%s: Unable to get device for CPU%d\n",
+					__func__, cpuid);
+			return -ENODEV;
+		}
+
+		/*
+		 * We are only interested in CPUs that can be attached to
+		 * PM domains that are arm,pd compatible.
+		 */
+		ret = of_parse_phandle_with_args(cpu_dev->of_node,
+				"power-domains", "#power-domain-cells",
+				0, &pd_args);
+		if (ret) {
+			dev_dbg(cpu_dev,
+				"%s: Did not find a valid PM domain\n",
+					__func__);
+			continue;
+		}
+
+		if (!of_device_is_compatible(pd_args.np, "arm,pd")) {
+			dev_dbg(cpu_dev, "%s: does not have an ARM PD\n",
+					__func__);
+			continue;
+		}
+
+		if (cpu_online(cpuid)) {
+			pm_runtime_set_active(cpu_dev);
+			/*
+			 * Execute the below on that 'cpu' to ensure that the
+			 * reference counting is correct. It's possible that
+			 * while this code is executing, the 'cpu' may be
+			 * powered down, but we may incorrectly increment the
+			 * usage. By executing the get_cpu on the 'cpu',
+			 * we can ensure that the 'cpu' and its usage count are
+			 * matched.
+			 */
+			smp_call_function_single(cpuid, run_cpu, NULL, true);
+		} else {
+			pm_runtime_set_suspended(cpu_dev);
+		}
+		pm_runtime_irq_safe(cpu_dev);
+		pm_runtime_enable(cpu_dev);
+
+		/*
+		 * We attempt to attach the device to genpd again. We would
+		 * have failed in our earlier attempt to attach to the domain
+		 * provider as the CPU device would not have been IRQ safe,
+		 * while the domain is defined as IRQ safe. IRQ safe domains
+		 * can only have IRQ safe devices.
+		 */
+		ret = genpd_dev_pm_attach(cpu_dev);
+		if (ret) {
+			dev_warn(cpu_dev,
+				"%s: Unable to attach to power-domain: %d\n",
+				__func__, ret);
+			pm_runtime_disable(cpu_dev);
+		}
+	}
+
+	return 0;
+}
+
+static int __init arm_domain_init(void)
+{
+	struct device_node *np;
+	int count = 0;
+
+	for_each_compatible_node(np, NULL, "arm,pd") {
+		struct arm_pm_domain *pd;
+
+		if (!of_device_is_available(np))
+			continue;
+
+		pd = kzalloc(sizeof(*pd), GFP_KERNEL);
+		if (!pd)
+			return -ENOMEM;
+
+		pd->genpd.name = kstrndup(np->name, ARM_PD_NAME_MAX, GFP_KERNEL);
+		pd->genpd.power_off = arm_pd_power_down;
+		pd->genpd.power_on = arm_pd_power_up;
+		pd->genpd.flags |= GENPD_FLAG_IRQ_SAFE;
+
+		pr_debug("adding %s as generic power domain.\n", np->full_name);
+		pm_genpd_init(&pd->genpd, &simple_qos_governor, false);
+		of_genpd_add_provider_simple(np, &pd->genpd);
+
+		count++;
+	}
+
+	/* We have ARM PD(s), attach CPUs to their domain */
+	if (count)
+		return arm_domain_cpu_init();
+
+	return 0;
+}
+device_initcall(arm_domain_init);