Message ID | 87d517c24630494afd9ba5769c2e2b10ee1d3f5d.1608010334.git.viresh.kumar@linaro.org (mailing list archive)
---|---
State | New, archived |
Series | [V3,1/3] arm64: topology: Avoid the have_policy check
On Tuesday 15 Dec 2020 at 11:04:15 (+0530), Viresh Kumar wrote:
> This patch does a couple of optimizations in init_amu_fie(), like early
> exits from paths where we don't need to continue any further, avoid the
> enable/disable dance, moving the calls to
> topology_scale_freq_invariant() just when we need them, instead of at
> the top of the routine, and avoiding calling it for the third time.
>
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
> V3:
> - Skipped the enable/disable dance.
> - No need to call topology_scale_freq_invariant() multiple times.
>
>  arch/arm64/kernel/topology.c | 27 ++++++++++++++-------------
>  1 file changed, 14 insertions(+), 13 deletions(-)
>
> diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
> index ebadc73449f9..57267d694495 100644
> --- a/arch/arm64/kernel/topology.c
> +++ b/arch/arm64/kernel/topology.c
> @@ -221,8 +221,8 @@ static DEFINE_STATIC_KEY_FALSE(amu_fie_key);
>
>  static int __init init_amu_fie(void)
>  {
> -	bool invariance_status = topology_scale_freq_invariant();
>  	cpumask_var_t valid_cpus;
> +	bool invariant;
>  	int ret = 0;
>  	int cpu;
>
> @@ -249,18 +249,19 @@ static int __init init_amu_fie(void)
>  	if (cpumask_equal(valid_cpus, cpu_present_mask))
>  		cpumask_copy(amu_fie_cpus, cpu_present_mask);
>
> -	if (!cpumask_empty(amu_fie_cpus)) {
> -		pr_info("CPUs[%*pbl]: counters will be used for FIE.",
> -			cpumask_pr_args(amu_fie_cpus));
> -		static_branch_enable(&amu_fie_key);
> -	}
> +	if (cpumask_empty(amu_fie_cpus))
> +		goto free_valid_mask;
>
> -	/*
> -	 * If the system is not fully invariant after AMU init, disable
> -	 * partial use of counters for frequency invariance.
> -	 */
> -	if (!topology_scale_freq_invariant())
> -		static_branch_disable(&amu_fie_key);
> +	invariant = topology_scale_freq_invariant();
> +
> +	/* We aren't fully invariant yet */
> +	if (!invariant && !cpumask_equal(amu_fie_cpus, cpu_present_mask))
> +		goto free_valid_mask;
> +
> +	static_branch_enable(&amu_fie_key);
> +
> +	pr_info("CPUs[%*pbl]: counters will be used for FIE.",
> +		cpumask_pr_args(amu_fie_cpus));
>
>  	/*
>  	 * Task scheduler behavior depends on frequency invariance support,
> @@ -268,7 +269,7 @@ static int __init init_amu_fie(void)
>  	 * a result of counter initialisation and use, retrigger the build of
>  	 * scheduling domains to ensure the information is propagated properly.
>  	 */
> -	if (invariance_status != topology_scale_freq_invariant())
> +	if (!invariant)
>  		rebuild_sched_domains_energy();
>
>  free_valid_mask:
> --
> 2.25.0.rc1.19.g042ed3e048af
>

Looks good!

Reviewed-by: Ionela Voinescu <ionela.voinescu@arm.com>
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index ebadc73449f9..57267d694495 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -221,8 +221,8 @@ static DEFINE_STATIC_KEY_FALSE(amu_fie_key);
 
 static int __init init_amu_fie(void)
 {
-	bool invariance_status = topology_scale_freq_invariant();
 	cpumask_var_t valid_cpus;
+	bool invariant;
 	int ret = 0;
 	int cpu;
 
@@ -249,18 +249,19 @@ static int __init init_amu_fie(void)
 	if (cpumask_equal(valid_cpus, cpu_present_mask))
 		cpumask_copy(amu_fie_cpus, cpu_present_mask);
 
-	if (!cpumask_empty(amu_fie_cpus)) {
-		pr_info("CPUs[%*pbl]: counters will be used for FIE.",
-			cpumask_pr_args(amu_fie_cpus));
-		static_branch_enable(&amu_fie_key);
-	}
+	if (cpumask_empty(amu_fie_cpus))
+		goto free_valid_mask;
 
-	/*
-	 * If the system is not fully invariant after AMU init, disable
-	 * partial use of counters for frequency invariance.
-	 */
-	if (!topology_scale_freq_invariant())
-		static_branch_disable(&amu_fie_key);
+	invariant = topology_scale_freq_invariant();
+
+	/* We aren't fully invariant yet */
+	if (!invariant && !cpumask_equal(amu_fie_cpus, cpu_present_mask))
+		goto free_valid_mask;
+
+	static_branch_enable(&amu_fie_key);
+
+	pr_info("CPUs[%*pbl]: counters will be used for FIE.",
+		cpumask_pr_args(amu_fie_cpus));
 
 	/*
 	 * Task scheduler behavior depends on frequency invariance support,
@@ -268,7 +269,7 @@ static int __init init_amu_fie(void)
 	 * a result of counter initialisation and use, retrigger the build of
 	 * scheduling domains to ensure the information is propagated properly.
 	 */
-	if (invariance_status != topology_scale_freq_invariant())
+	if (!invariant)
 		rebuild_sched_domains_energy();
 
 free_valid_mask:
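For context on the "enable/disable dance" this removes: amu_fie_key is a static key, and flipping it with static_branch_enable()/static_branch_disable() patches every branch site in the kernel text, so enabling it and then conditionally disabling it again during init does the comparatively expensive patching twice. Below is a minimal sketch of the pattern using the generic jump-label API rather than the real arm64 helpers; example_key, example_feature_enabled(), conditions_hold() and example_init() are made-up names for illustration only.

#include <linux/init.h>
#include <linux/jump_label.h>
#include <linux/types.h>

/* Key starts disabled; branch sites compile to the not-taken path. */
static DEFINE_STATIC_KEY_FALSE(example_key);

static bool example_feature_enabled(void)
{
	/* Hot-path check: resolved by runtime code patching, not a memory load. */
	return static_branch_unlikely(&example_key);
}

/* Hypothetical stand-in for the real enablement conditions. */
static bool conditions_hold(void)
{
	return true;
}

static int __init example_init(void)
{
	/*
	 * Decide first, then flip the key at most once. Enabling and then
	 * conditionally disabling would patch the branch sites twice.
	 */
	if (conditions_hold())
		static_branch_enable(&example_key);

	return 0;
}
early_initcall(example_init);

After this patch, init_amu_fie() follows the same shape: it bails out early when the conditions cannot hold and calls static_branch_enable() at most once.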
This patch does a couple of optimizations in init_amu_fie(), like early
exits from paths where we don't need to continue any further, avoid the
enable/disable dance, moving the calls to
topology_scale_freq_invariant() just when we need them, instead of at
the top of the routine, and avoiding calling it for the third time.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
V3:
- Skipped the enable/disable dance.
- No need to call topology_scale_freq_invariant() multiple times.

 arch/arm64/kernel/topology.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)
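For a quick read of the resulting flow, here is a condensed sketch of init_amu_fie() after the patch, distilled from the hunks above. The valid_cpus/amu_fie_cpus setup is elided, and the cleanup under free_valid_mask is assumed to be the usual free_cpumask_var() pattern, since it falls outside the hunks shown.

static int __init init_amu_fie(void)
{
	cpumask_var_t valid_cpus;
	bool invariant;
	int ret = 0;

	/* ... allocate valid_cpus and populate amu_fie_cpus (unchanged, elided) ... */

	if (cpumask_empty(amu_fie_cpus))
		goto free_valid_mask;		/* nothing to enable, bail early */

	invariant = topology_scale_freq_invariant();	/* read exactly once */

	/* Not fully invariant yet and AMUs don't cover every present CPU: bail. */
	if (!invariant && !cpumask_equal(amu_fie_cpus, cpu_present_mask))
		goto free_valid_mask;

	static_branch_enable(&amu_fie_key);	/* enabled once, never undone here */

	pr_info("CPUs[%*pbl]: counters will be used for FIE.",
		cpumask_pr_args(amu_fie_cpus));

	/* Invariance just flipped from false to true: rebuild sched domains. */
	if (!invariant)
		rebuild_sched_domains_energy();

free_valid_mask:
	free_cpumask_var(valid_cpus);		/* assumed cleanup, outside the hunks */

	return ret;
}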