[RFC] PM: domains: Decrement subdomain counter after powering it off

Message ID 20220524210721.2072317-1-vladimir.zapolskiy@linaro.org (mailing list archive)
State RFC, archived
Series [RFC] PM: domains: Decrement subdomain counter after powering it off

Commit Message

Vladimir Zapolskiy May 24, 2022, 9:07 p.m. UTC
Throughout the code, the counter of powered-on subdomains is incremented
before an attempt to power a subdomain on, which makes perfect sense.
The decrement of the counter on power off, however, is not symmetric:
it is likewise done before powering the subdomain off, rather than
after it.

As a result of a race between power-off operations, a parent power
domain may be misinformed about the actual power states of its
subdomains, and thus the parent can be powered off before its child
domains, while the expected result is to bail out with a busy state.

Signed-off-by: Vladimir Zapolskiy <vladimir.zapolskiy@linaro.org>
---
I found this problem while debugging simultaneous and dependent parent
and child power domain operations: a parent domain can be turned off
right before its last still-enabled child. While in my particular case
this does not lead to any noticeable issues, the change seems necessary
to have anyway.
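
For reference, the two paths compared above look roughly as follows
(paraphrased from drivers/base/power/domain.c; a simplified sketch, not
a verbatim quote of the tree):

	/* genpd_power_on(): the parent's subdomain counter is bumped
	 * before the parent is powered on. */
	genpd_sd_counter_inc(link->parent);
	genpd_lock_nested(link->parent, depth + 1);
	ret = genpd_power_on(link->parent, depth + 1);
	genpd_unlock(link->parent);

	/* genpd_power_off(), before this patch: the counter is dropped
	 * before the parent is asked to power off. */
	genpd_sd_counter_dec(link->parent);
	genpd_lock_nested(link->parent, depth + 1);
	genpd_power_off(link->parent, false, depth + 1);
	genpd_unlock(link->parent);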

Sending with the RFC tag and without a Fixes tag; I would appreciate
feedback, thanks.

 drivers/base/power/domain.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

Comments

Ulf Hansson June 9, 2022, 11:06 a.m. UTC | #1
On Tue, 24 May 2022 at 23:07, Vladimir Zapolskiy
<vladimir.zapolskiy@linaro.org> wrote:
>
> Throughout the code, the counter of powered-on subdomains is
> incremented before an attempt to power a subdomain on, which makes
> perfect sense. The decrement of the counter on power off, however, is
> not symmetric: it is likewise done before powering the subdomain off,
> rather than after it.
>
> As a result of a race between power-off operations, a parent power
> domain may be misinformed about the actual power states of its
> subdomains, and thus the parent can be powered off before its child
> domains, while the expected result is to bail out with a busy state.

Thanks for the nice description of the problem. However, I would be
surprised if you have actually managed to observe this in practice.
Anyway, let's look at the code.

>
> Signed-off-by: Vladimir Zapolskiy <vladimir.zapolskiy@linaro.org>
> ---
> I found this problem while debugging simultaneous and dependent parent
> and child power domain operations: a parent domain can be turned off
> right before its last still-enabled child. While in my particular case
> this does not lead to any noticeable issues, the change seems
> necessary to have anyway.
>
> Sending with the RFC tag and without a Fixes tag; I would appreciate
> feedback, thanks.
>
>  drivers/base/power/domain.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> index 739e52cd4aba..ee66c54d87b1 100644
> --- a/drivers/base/power/domain.c
> +++ b/drivers/base/power/domain.c
> @@ -690,10 +690,10 @@ static int genpd_power_off(struct generic_pm_domain *genpd, bool one_dev_on,
>         genpd->states[genpd->state_idx].usage++;
>
>         list_for_each_entry(link, &genpd->child_links, child_node) {
> -               genpd_sd_counter_dec(link->parent);
>                 genpd_lock_nested(link->parent, depth + 1);
>                 genpd_power_off(link->parent, false, depth + 1);
>                 genpd_unlock(link->parent);
> +               genpd_sd_counter_dec(link->parent);

Moving the decrement to this point will always prevent the parent from
being powered off.

Moreover, the current genpd (the child) has already been powered off at
this point, so it's certainly safe to power off its parents here
(unless the parent has other children that are still powered on, of
course). So, I am not sure I understand the issue correctly.
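
(For readers without the tree at hand: near the top of
genpd_power_off() there is a guard along these lines -- paraphrased,
not a verbatim quote -- which is why a parent whose sd_count is still
elevated can never be turned off:)

	/* genpd_power_off(), paraphrased: a domain that still has
	 * powered-on subdomains refuses to turn off. */
	if (genpd_is_always_on(genpd) ||
	    genpd_is_rpm_always_on(genpd) ||
	    atomic_read(&genpd->sd_count) > 0)
		return -EBUSY;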

>         }
>
>         return 0;
> @@ -748,10 +748,10 @@ static int genpd_power_on(struct generic_pm_domain *genpd, unsigned int depth)
>         list_for_each_entry_continue_reverse(link,
>                                         &genpd->child_links,
>                                         child_node) {
> -               genpd_sd_counter_dec(link->parent);
>                 genpd_lock_nested(link->parent, depth + 1);
>                 genpd_power_off(link->parent, false, depth + 1);
>                 genpd_unlock(link->parent);
> +               genpd_sd_counter_dec(link->parent);

Same comment as above. As this is the error path in genpd_power_on(),
the current genpd (the child) failed to be powered on, so it's safe to
power off the parent.

Moving the decrement of parent->sd_count to this point will prevent the
parent from being powered off by the genpd_power_off() call a few lines
above.
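
(Again paraphrasing the surrounding code rather than quoting it:
earlier in genpd_power_on() each parent's counter is incremented before
the recursive power-on, and on failure the code jumps into the unwind
loop quoted above, so the decrement there pairs with an increment that
has already happened:)

	list_for_each_entry(link, &genpd->child_links, child_node) {
		genpd_sd_counter_inc(link->parent);

		genpd_lock_nested(link->parent, depth + 1);
		ret = genpd_power_on(link->parent, depth + 1);
		genpd_unlock(link->parent);

		if (ret) {
			genpd_sd_counter_dec(link->parent);
			goto err;	/* enters the unwind loop above */
		}
	}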

>         }
>
>         return ret;
> @@ -1096,8 +1096,6 @@ static void genpd_sync_power_off(struct generic_pm_domain *genpd, bool use_lock,
>         genpd->status = GENPD_STATE_OFF;
>
>         list_for_each_entry(link, &genpd->child_links, child_node) {
> -               genpd_sd_counter_dec(link->parent);
> -
>                 if (use_lock)
>                         genpd_lock_nested(link->parent, depth + 1);
>
> @@ -1105,6 +1103,8 @@ static void genpd_sync_power_off(struct generic_pm_domain *genpd, bool use_lock,
>
>                 if (use_lock)
>                         genpd_unlock(link->parent);
> +
> +               genpd_sd_counter_dec(link->parent);
>         }
>  }
>
> --
> 2.33.0
>

Kind regards
Uffe

Patch

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 739e52cd4aba..ee66c54d87b1 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -690,10 +690,10 @@  static int genpd_power_off(struct generic_pm_domain *genpd, bool one_dev_on,
 	genpd->states[genpd->state_idx].usage++;
 
 	list_for_each_entry(link, &genpd->child_links, child_node) {
-		genpd_sd_counter_dec(link->parent);
 		genpd_lock_nested(link->parent, depth + 1);
 		genpd_power_off(link->parent, false, depth + 1);
 		genpd_unlock(link->parent);
+		genpd_sd_counter_dec(link->parent);
 	}
 
 	return 0;
@@ -748,10 +748,10 @@  static int genpd_power_on(struct generic_pm_domain *genpd, unsigned int depth)
 	list_for_each_entry_continue_reverse(link,
 					&genpd->child_links,
 					child_node) {
-		genpd_sd_counter_dec(link->parent);
 		genpd_lock_nested(link->parent, depth + 1);
 		genpd_power_off(link->parent, false, depth + 1);
 		genpd_unlock(link->parent);
+		genpd_sd_counter_dec(link->parent);
 	}
 
 	return ret;
@@ -1096,8 +1096,6 @@  static void genpd_sync_power_off(struct generic_pm_domain *genpd, bool use_lock,
 	genpd->status = GENPD_STATE_OFF;
 
 	list_for_each_entry(link, &genpd->child_links, child_node) {
-		genpd_sd_counter_dec(link->parent);
-
 		if (use_lock)
 			genpd_lock_nested(link->parent, depth + 1);
 
@@ -1105,6 +1103,8 @@  static void genpd_sync_power_off(struct generic_pm_domain *genpd, bool use_lock,
 
 		if (use_lock)
 			genpd_unlock(link->parent);
+
+		genpd_sd_counter_dec(link->parent);
 	}
 }