| Message ID | 1455310238-8963-8-git-send-email-lina.iyer@linaro.org (mailing list archive) |
|---|---|
| State | RFC |
| Delegated to: | Andy Gross |
On 02/12, Lina Iyer wrote:
> @@ -52,6 +55,76 @@ struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
>  	return res;
>  }
>
> +static bool cpu_pd_down_ok(struct dev_pm_domain *pd)
> +{
> +	struct generic_pm_domain *genpd = pd_to_genpd(pd);
> +	struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
> +	int qos = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
> +	u64 sleep_ns;
> +	ktime_t earliest, next_wakeup;
> +	int cpu;
> +	int i;
> +
> +	/* Reset the last set genpd state, default to index 0 */
> +	genpd->state_idx = 0;
> +
> +	/* We dont want to power down, if QoS is 0 */
> +	if (!qos)
> +		return false;
> +
> +	/*
> +	 * Find the sleep time for the cluster.
> +	 * The time between now and the first wake up of any CPU that
> +	 * are in this domain hierarchy is the time available for the
> +	 * domain to be idle.
> +	 */
> +	earliest = ktime_set(KTIME_SEC_MAX, 0);
> +	for_each_cpu_and(cpu, cpu_pd->cpus, cpu_online_mask) {

We're not worried about hotplug happening in parallel because
preemption is disabled here?

> +		next_wakeup = tick_nohz_get_next_wakeup(cpu);
> +		if (earliest.tv64 > next_wakeup.tv64)

	if (ktime_before(next_wakeup, earliest))

> +			earliest = next_wakeup;
> +	}
> +
> +	sleep_ns = ktime_to_ns(ktime_sub(earliest, ktime_get()));
> +	if (sleep_ns <= 0)
> +		return false;
> +
> +	/*
> +	 * Find the deepest sleep state that satisfies the residency
> +	 * requirement and the QoS constraint
> +	 */
> +	for (i = genpd->state_count - 1; i >= 0; i--) {
> +		u64 state_sleep_ns;
> +
> +		state_sleep_ns = genpd->states[i].power_off_latency_ns +
> +			genpd->states[i].power_on_latency_ns +
> +			genpd->states[i].residency_ns;
> +
> +		/*
> +		 * If we cant sleep to save power in the state, move on

s/cant/can't/

> +		 * to the next lower idle state.
> +		 */
> +		if (state_sleep_ns > sleep_ns)
> +			continue;
> +
> +		/*
> +		 * We also dont want to sleep more than we should to

s/dont/don't/

> +		 * gaurantee QoS.
> +		 */
> +		if (state_sleep_ns < (qos * NSEC_PER_USEC))

Maybe we should make qos into qos_ns? Presumably the compiler would
hoist out the multiplication here, but it doesn't hurt to do it
explicitly.

> +			break;
> +	}
> +
> +	if (i >= 0)
> +		genpd->state_idx = i;
> +
> +	return (i >= 0) ? true : false;

Just return i >= 0?
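The review points above can be checked outside the kernel. The sketch below is a userspace model (not the kernel code) of the patch's state-selection loop with Stephen's two suggestions applied: the QoS value converted to nanoseconds once up front, and the ternary replaced by returning the comparison directly. The struct, helper name, and the numbers in the usage example are illustrative, not from any real SoC.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NSEC_PER_USEC 1000ULL

/* Mirrors the fields the governor reads from genpd->states[]. */
struct pd_state {
	uint64_t power_off_latency_ns;
	uint64_t power_on_latency_ns;
	uint64_t residency_ns;
};

/*
 * Walk states from deepest (highest index) to shallowest, mirroring the
 * patch's control flow: skip states whose total cost exceeds the
 * available sleep time, and stop at the first one that fits under the
 * QoS bound. Stores the chosen index in *idx and reports success.
 */
static bool pick_state(const struct pd_state *states, int count,
		       uint64_t sleep_ns, uint64_t qos_us, int *idx)
{
	uint64_t qos_ns = qos_us * NSEC_PER_USEC;	/* hoisted once */
	int i;

	for (i = count - 1; i >= 0; i--) {
		uint64_t state_sleep_ns = states[i].power_off_latency_ns +
					  states[i].power_on_latency_ns +
					  states[i].residency_ns;

		if (state_sleep_ns > sleep_ns)
			continue;	/* can't amortize this state */
		if (state_sleep_ns < qos_ns)
			break;		/* fits; select this state */
	}

	if (i >= 0)
		*idx = i;
	return i >= 0;		/* instead of (i >= 0) ? true : false */
}
```

With a 2 ms idle window and a 2000 us QoS bound, a hypothetical deep state costing 1.2 ms in total is selected; shrink the window to 0.5 ms and the shallow 12 us state wins instead.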
On Fri, Feb 26 2016 at 12:33 -0700, Stephen Boyd wrote:
>On 02/12, Lina Iyer wrote:
>> @@ -52,6 +55,76 @@ struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
>>  	return res;
>>  }
>>
>> +static bool cpu_pd_down_ok(struct dev_pm_domain *pd)
>> +{
>> +	struct generic_pm_domain *genpd = pd_to_genpd(pd);
>> +	struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
>> +	int qos = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
>> +	u64 sleep_ns;
>> +	ktime_t earliest, next_wakeup;
>> +	int cpu;
>> +	int i;
>> +
>> +	/* Reset the last set genpd state, default to index 0 */
>> +	genpd->state_idx = 0;
>> +
>> +	/* We dont want to power down, if QoS is 0 */
>> +	if (!qos)
>> +		return false;
>> +
>> +	/*
>> +	 * Find the sleep time for the cluster.
>> +	 * The time between now and the first wake up of any CPU that
>> +	 * are in this domain hierarchy is the time available for the
>> +	 * domain to be idle.
>> +	 */
>> +	earliest = ktime_set(KTIME_SEC_MAX, 0);
>> +	for_each_cpu_and(cpu, cpu_pd->cpus, cpu_online_mask) {
>
>We're not worried about hotplug happening in parallel because
>preemption is disabled here?
>
Nope. Hotplug on the same domain or in its hierarchy will be waiting on
the domain lock to be released before becoming online. Any other domain
is not of concern for this domain governor.

If a core was hotplugged out while this is happening, then we may risk
making a premature wake up decision, which would happen either way if
we lock hotplug here.

>> +		next_wakeup = tick_nohz_get_next_wakeup(cpu);
>> +		if (earliest.tv64 > next_wakeup.tv64)
>
> if (ktime_before(next_wakeup, earliest))
>
>> +			earliest = next_wakeup;
>> +	}
>> +
>> +	sleep_ns = ktime_to_ns(ktime_sub(earliest, ktime_get()));
>> +	if (sleep_ns <= 0)
>> +		return false;
>> +
>> +	/*
>> +	 * Find the deepest sleep state that satisfies the residency
>> +	 * requirement and the QoS constraint
>> +	 */
>> +	for (i = genpd->state_count - 1; i >= 0; i--) {
>> +		u64 state_sleep_ns;
>> +
>> +		state_sleep_ns = genpd->states[i].power_off_latency_ns +
>> +			genpd->states[i].power_on_latency_ns +
>> +			genpd->states[i].residency_ns;
>> +
>> +		/*
>> +		 * If we cant sleep to save power in the state, move on
>
>s/cant/can't/
>
argh. Fixed.

>> +		 * to the next lower idle state.
>> +		 */
>> +		if (state_sleep_ns > sleep_ns)
>> +			continue;
>> +
>> +		/*
>> +		 * We also dont want to sleep more than we should to
>
>s/dont/don't/
>
Done

>> +		 * gaurantee QoS.
>> +		 */
>> +		if (state_sleep_ns < (qos * NSEC_PER_USEC))
>
>Maybe we should make qos into qos_ns? Presumably the compiler
>would hoist out the multiplication here, but it doesn't hurt to
>do it explicitly.
>
Okay

>> +			break;
>> +	}
>> +
>> +	if (i >= 0)
>> +		genpd->state_idx = i;
>> +
>> +	return (i >= 0) ? true : false;
>
>Just return i >= 0?
>
Ok

Thanks,
Lina

>--
>Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
>a Linux Foundation Collaborative Project
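The `ktime_before()` change Lina agreed to above is a simple min-reduction over per-CPU wakeup times. A userspace sketch of just that reduction, with plain `int64_t` nanosecond values standing in for `ktime_t` and a stand-in helper (`ktime_before_ns` is our own name, not a kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* ktime_before(a, b) in the kernel returns true when a < b; this is a
 * plain-integer stand-in so the reduction can be tested in userspace. */
static bool ktime_before_ns(int64_t a, int64_t b)
{
	return a < b;
}

/* Earliest next wakeup across a set of CPUs; INT64_MAX plays the role
 * of ktime_set(KTIME_SEC_MAX, 0) as the "no earlier wakeup" sentinel. */
static int64_t earliest_wakeup_ns(const int64_t *next_wakeup, int ncpus)
{
	int64_t earliest = INT64_MAX;
	int cpu;

	for (cpu = 0; cpu < ncpus; cpu++) {
		if (ktime_before_ns(next_wakeup[cpu], earliest))
			earliest = next_wakeup[cpu];
	}
	return earliest;
}
```

The helper-based comparison reads the same either way but, unlike poking at `.tv64`, keeps working if the representation of `ktime_t` changes.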
On 03/01/2016 11:32 AM, Lina Iyer wrote:
> On Fri, Feb 26 2016 at 12:33 -0700, Stephen Boyd wrote:
>> On 02/12, Lina Iyer wrote:
>>> @@ -52,6 +55,76 @@ struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
>>>  	return res;
>>>  }
>>>
>>> +static bool cpu_pd_down_ok(struct dev_pm_domain *pd)
>>> +{
>>> +	struct generic_pm_domain *genpd = pd_to_genpd(pd);
>>> +	struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
>>> +	int qos = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
>>> +	u64 sleep_ns;
>>> +	ktime_t earliest, next_wakeup;
>>> +	int cpu;
>>> +	int i;
>>> +
>>> +	/* Reset the last set genpd state, default to index 0 */
>>> +	genpd->state_idx = 0;
>>> +
>>> +	/* We dont want to power down, if QoS is 0 */
>>> +	if (!qos)
>>> +		return false;
>>> +
>>> +	/*
>>> +	 * Find the sleep time for the cluster.
>>> +	 * The time between now and the first wake up of any CPU that
>>> +	 * are in this domain hierarchy is the time available for the
>>> +	 * domain to be idle.
>>> +	 */
>>> +	earliest = ktime_set(KTIME_SEC_MAX, 0);
>>> +	for_each_cpu_and(cpu, cpu_pd->cpus, cpu_online_mask) {
>>
>> We're not worried about hotplug happening in parallel because
>> preemption is disabled here?
>>
> Nope. Hotplug on the same domain or in its hierarchy will be waiting on
> the domain lock to be released before becoming online. Any other domain
> is not of concern for this domain governor.
>
> If a core was hotplugged out while this is happening, then we may risk
> making a premature wake up decision, which would happen either way if
> we lock hotplug here.

Ok please make this into a comment in the code.
diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
index c99710c..7069411 100644
--- a/drivers/base/power/cpu_domains.c
+++ b/drivers/base/power/cpu_domains.c
@@ -17,9 +17,12 @@
 #include <linux/list.h>
 #include <linux/of.h>
 #include <linux/pm_domain.h>
+#include <linux/pm_qos.h>
+#include <linux/pm_runtime.h>
 #include <linux/rculist.h>
 #include <linux/rcupdate.h>
 #include <linux/slab.h>
+#include <linux/tick.h>
 
 #define CPU_PD_NAME_MAX 36
 
@@ -52,6 +55,76 @@ struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
 	return res;
 }
 
+static bool cpu_pd_down_ok(struct dev_pm_domain *pd)
+{
+	struct generic_pm_domain *genpd = pd_to_genpd(pd);
+	struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
+	int qos = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
+	u64 sleep_ns;
+	ktime_t earliest, next_wakeup;
+	int cpu;
+	int i;
+
+	/* Reset the last set genpd state, default to index 0 */
+	genpd->state_idx = 0;
+
+	/* We dont want to power down, if QoS is 0 */
+	if (!qos)
+		return false;
+
+	/*
+	 * Find the sleep time for the cluster.
+	 * The time between now and the first wake up of any CPU that
+	 * are in this domain hierarchy is the time available for the
+	 * domain to be idle.
+	 */
+	earliest = ktime_set(KTIME_SEC_MAX, 0);
+	for_each_cpu_and(cpu, cpu_pd->cpus, cpu_online_mask) {
+		next_wakeup = tick_nohz_get_next_wakeup(cpu);
+		if (earliest.tv64 > next_wakeup.tv64)
+			earliest = next_wakeup;
+	}
+
+	sleep_ns = ktime_to_ns(ktime_sub(earliest, ktime_get()));
+	if (sleep_ns <= 0)
+		return false;
+
+	/*
+	 * Find the deepest sleep state that satisfies the residency
+	 * requirement and the QoS constraint
+	 */
+	for (i = genpd->state_count - 1; i >= 0; i--) {
+		u64 state_sleep_ns;
+
+		state_sleep_ns = genpd->states[i].power_off_latency_ns +
+			genpd->states[i].power_on_latency_ns +
+			genpd->states[i].residency_ns;
+
+		/*
+		 * If we cant sleep to save power in the state, move on
+		 * to the next lower idle state.
+		 */
+		if (state_sleep_ns > sleep_ns)
+			continue;
+
+		/*
+		 * We also dont want to sleep more than we should to
+		 * gaurantee QoS.
+		 */
+		if (state_sleep_ns < (qos * NSEC_PER_USEC))
+			break;
+	}
+
+	if (i >= 0)
+		genpd->state_idx = i;
+
+	return (i >= 0) ? true : false;
+}
+
+static struct dev_power_governor cpu_pd_gov = {
+	.power_down_ok = cpu_pd_down_ok,
+};
+
 static int cpu_pd_attach_cpu(struct cpu_pm_domain *cpu_pd, int cpu)
 {
 	int ret;
@@ -143,7 +216,7 @@ static struct generic_pm_domain *of_init_cpu_pm_domain(struct device_node *dn,
 	/* Register the CPU genpd */
 	pr_debug("adding %s as CPU PM domain.\n", pd->genpd->name);
 
-	ret = of_pm_genpd_init(dn, pd->genpd, &simple_qos_governor, false);
+	ret = of_pm_genpd_init(dn, pd->genpd, &cpu_pd_gov, false);
 	if (ret) {
 		pr_err("Unable to initialize domain %s\n", dn->full_name);
 		goto fail;
A PM domain comprising CPUs may be powered off when all the CPUs in the
domain are powered down. Powering down a CPU domain is generally an
expensive operation, so the power-performance trade-offs should be
considered. The time between the last CPU powering down and the first
CPU powering up in a domain is the time available for the domain to
sleep. Ideally, the sleep time of the domain should fulfill the
residency requirement of the domain's idle state.

To do this effectively, read the time before the wakeup of the
cluster's CPUs and ensure that the domain's idle state sleep time
guarantees the QoS requirements of each of the CPUs (the PM QoS
CPU_DMA_LATENCY) and the state's residency.

Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- bug fix

 drivers/base/power/cpu_domains.c | 75 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 74 insertions(+), 1 deletion(-)
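The residency argument in the changelog reduces to one inequality: powering the domain down only pays off if the window between now and the first CPU wakeup covers the entry latency, the exit latency, and the state's break-even residency. A hedged userspace sketch of that check, with a made-up function name and illustrative numbers:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * True if the window between now and the first CPU wakeup is long
 * enough to amortize entering and leaving the domain idle state.
 */
static bool power_down_pays_off(int64_t now_ns, int64_t first_wakeup_ns,
				int64_t off_latency_ns,
				int64_t on_latency_ns,
				int64_t residency_ns)
{
	if (first_wakeup_ns <= now_ns)
		return false;	/* a CPU is already due to wake up */

	return (first_wakeup_ns - now_ns) >=
	       off_latency_ns + on_latency_ns + residency_ns;
}
```

For example, a 2 ms idle window amortizes a state costing 0.1 ms to enter, 0.1 ms to exit, and needing 1 ms of residency, while a 1 ms window does not.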