Message ID | 20240527142557.321610-2-ulf.hansson@linaro.org (mailing list archive) |
---|---|
State | Handled Elsewhere, archived |
Series | pmdomain/cpuidle-psci: Support s2idle/s2ram on PREEMPT_RT |
On 5/27/2024 7:25 AM, Ulf Hansson wrote:
> To allow a genpd provider for a CPU PM domain to enter a domain-idle-state
> during s2idle on a PREEMPT_RT based configuration, we can't use the regular
> spinlock, as they are turned into sleepable locks on PREEMPT_RT.
>
> To address this problem, let's convert into using the raw spinlock, but
> only for genpd providers that have the GENPD_FLAG_CPU_DOMAIN bit set. In
> this way, the lock can still be acquired/released in atomic context, which
> is needed in the idle-path for PREEMPT_RT.
>
> [...]
>
> @@ -2079,7 +2121,10 @@ static void genpd_free_data(struct generic_pm_domain *genpd)
>
>  static void genpd_lock_init(struct generic_pm_domain *genpd)
>  {
> -	if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
> +	if (genpd->flags & GENPD_FLAG_CPU_DOMAIN) {
> +		raw_spin_lock_init(&genpd->raw_slock);
> +		genpd->lock_ops = &genpd_raw_spin_ops;
> +	} else if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {

Hi Ulf, though you are targeting only CPU domains for now, I wonder if
FLAG_IRQ_SAFE will be a better choice? The description of the flag says
it is safe for atomic context which won't be the case for PREEMPT_RT?
On Tue, 28 May 2024 at 21:56, Nikunj Kela <quic_nkela@quicinc.com> wrote:
>
> On 5/27/2024 7:25 AM, Ulf Hansson wrote:
> > [...]
>
> Hi Ulf, though you are targeting only CPU domains for now, I wonder if
> FLAG_IRQ_SAFE will be a better choice? The description of the flag says
> it is safe for atomic context which won't be the case for PREEMPT_RT?

You have a point!

However, we also need to limit the use of raw spinlocks, from
PREEMPT_RT point of view. In other words, just because a genpd
provider is capable of executing its callbacks in atomic context,
doesn't always mean that it should use raw spinlocks too.

[...]

Kind regards
Uffe
On 5/30/2024 1:15 AM, Ulf Hansson wrote:
> On Tue, 28 May 2024 at 21:56, Nikunj Kela <quic_nkela@quicinc.com> wrote:
>> On 5/27/2024 7:25 AM, Ulf Hansson wrote:
>>> [...]
>>
>> Hi Ulf, though you are targeting only CPU domains for now, I wonder if
>> FLAG_IRQ_SAFE will be a better choice? The description of the flag says
>> it is safe for atomic context which won't be the case for PREEMPT_RT?
> You have a point!
>
> However, we also need to limit the use of raw spinlocks, from
> PREEMPT_RT point of view. In other words, just because a genpd
> provider is capable of executing its callbacks in atomic context,
> doesn't always mean that it should use raw spinlocks too.

Got it! Thanks. Maybe in future, if there is a need, a new GENPD FLAG
for RT, something like GENPD_FLAG_IRQ_SAFE_RT, can be added to address this.

> [...]
>
> Kind regards
> Uffe
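[Editorial illustration, not part of the thread: a minimal sketch of how such a flag could slot into genpd_lock_init() next to the GENPD_FLAG_CPU_DOMAIN check from this patch. GENPD_FLAG_IRQ_SAFE_RT is purely hypothetical; the name is only the one floated in the discussion above and does not exist in the kernel.]

	/*
	 * Sketch only: GENPD_FLAG_IRQ_SAFE_RT is a hypothetical flag. It would
	 * let a non-CPU genpd provider explicitly opt in to the raw spinlock
	 * when its callbacks must stay atomic-safe on PREEMPT_RT as well.
	 */
	static void genpd_lock_init(struct generic_pm_domain *genpd)
	{
		if (genpd->flags & (GENPD_FLAG_CPU_DOMAIN | GENPD_FLAG_IRQ_SAFE_RT)) {
			raw_spin_lock_init(&genpd->raw_slock);
			genpd->lock_ops = &genpd_raw_spin_ops;
		} else if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
			spin_lock_init(&genpd->slock);
			genpd->lock_ops = &genpd_spin_ops;
		} else {
			mutex_init(&genpd->mlock);
			genpd->lock_ops = &genpd_mtx_ops;
		}
	}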
On Thu, 30 May 2024 at 16:23, Nikunj Kela <quic_nkela@quicinc.com> wrote:
>
> On 5/30/2024 1:15 AM, Ulf Hansson wrote:
> > [...]
> >
> > However, we also need to limit the use of raw spinlocks, from
> > PREEMPT_RT point of view. In other words, just because a genpd
> > provider is capable of executing its callbacks in atomic context,
> > doesn't always mean that it should use raw spinlocks too.
>
> Got it! Thanks. Maybe in future, if there is a need, a new GENPD FLAG
> for RT, something like GENPD_FLAG_IRQ_SAFE_RT, can be added to address this.

Yes, I agree, something along those lines would make sense.

BTW, did you manage to get some time to test the series on your end?

Kind regards
Uffe
On 6/3/2024 6:58 AM, Ulf Hansson wrote:
> On Thu, 30 May 2024 at 16:23, Nikunj Kela <quic_nkela@quicinc.com> wrote:
>> [...]
>>
>> Got it! Thanks. Maybe in future, if there is a need, a new GENPD FLAG
>> for RT, something like GENPD_FLAG_IRQ_SAFE_RT, can be added to address this.
> Yes, I agree, something along those lines would make sense.
>
> BTW, did you manage to get some time to test the series on your end?

I haven't been able to spend time testing it but I have requested Maulik
to test it and update you. Thanks

> Kind regards
> Uffe
diff --git a/drivers/pmdomain/core.c b/drivers/pmdomain/core.c
index 623d15b68707..072e6bdb6ee6 100644
--- a/drivers/pmdomain/core.c
+++ b/drivers/pmdomain/core.c
@@ -117,6 +117,48 @@ static const struct genpd_lock_ops genpd_spin_ops = {
 	.unlock = genpd_unlock_spin,
 };
 
+static void genpd_lock_raw_spin(struct generic_pm_domain *genpd)
+	__acquires(&genpd->raw_slock)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&genpd->raw_slock, flags);
+	genpd->raw_lock_flags = flags;
+}
+
+static void genpd_lock_nested_raw_spin(struct generic_pm_domain *genpd,
+				       int depth)
+	__acquires(&genpd->raw_slock)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave_nested(&genpd->raw_slock, flags, depth);
+	genpd->raw_lock_flags = flags;
+}
+
+static int genpd_lock_interruptible_raw_spin(struct generic_pm_domain *genpd)
+	__acquires(&genpd->raw_slock)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&genpd->raw_slock, flags);
+	genpd->raw_lock_flags = flags;
+	return 0;
+}
+
+static void genpd_unlock_raw_spin(struct generic_pm_domain *genpd)
+	__releases(&genpd->raw_slock)
+{
+	raw_spin_unlock_irqrestore(&genpd->raw_slock, genpd->raw_lock_flags);
+}
+
+static const struct genpd_lock_ops genpd_raw_spin_ops = {
+	.lock = genpd_lock_raw_spin,
+	.lock_nested = genpd_lock_nested_raw_spin,
+	.lock_interruptible = genpd_lock_interruptible_raw_spin,
+	.unlock = genpd_unlock_raw_spin,
+};
+
 #define genpd_lock(p)			p->lock_ops->lock(p)
 #define genpd_lock_nested(p, d)		p->lock_ops->lock_nested(p, d)
 #define genpd_lock_interruptible(p)	p->lock_ops->lock_interruptible(p)
@@ -2079,7 +2121,10 @@ static void genpd_free_data(struct generic_pm_domain *genpd)
 
 static void genpd_lock_init(struct generic_pm_domain *genpd)
 {
-	if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
+	if (genpd->flags & GENPD_FLAG_CPU_DOMAIN) {
+		raw_spin_lock_init(&genpd->raw_slock);
+		genpd->lock_ops = &genpd_raw_spin_ops;
+	} else if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
 		spin_lock_init(&genpd->slock);
 		genpd->lock_ops = &genpd_spin_ops;
 	} else {
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index f24546a3d3db..0d7fc47de3bc 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -194,8 +194,11 @@ struct generic_pm_domain {
 			spinlock_t slock;
 			unsigned long lock_flags;
 		};
+		struct {
+			raw_spinlock_t raw_slock;
+			unsigned long raw_lock_flags;
+		};
 	};
-
 };
 
 static inline struct generic_pm_domain *pd_to_genpd(struct dev_pm_domain *pd)
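[Editorial illustration, not part of the patch: a minimal provider sketch showing how the new raw spinlock ops get selected. With GENPD_FLAG_CPU_DOMAIN set in genpd->flags before pm_genpd_init(), genpd_lock_init() above picks genpd_raw_spin_ops. The provider name and the empty callbacks are placeholders, not taken from the series.]

	#include <linux/pm_domain.h>

	/*
	 * Placeholder callbacks; a real CPU PM domain provider (e.g. the
	 * cpuidle-psci PM domain code) would call into firmware here. They
	 * may run under the raw spinlock, so they must not sleep.
	 */
	static int example_cpu_pd_power_off(struct generic_pm_domain *pd)
	{
		return 0;
	}

	static int example_cpu_pd_power_on(struct generic_pm_domain *pd)
	{
		return 0;
	}

	static struct generic_pm_domain example_cpu_pd = {
		.name = "example-cpu-pd",
		.flags = GENPD_FLAG_IRQ_SAFE | GENPD_FLAG_CPU_DOMAIN,
		.power_off = example_cpu_pd_power_off,
		.power_on = example_cpu_pd_power_on,
	};

	static int example_cpu_pd_setup(void)
	{
		/*
		 * genpd_lock_init() runs from the pm_genpd_init() path and,
		 * with GENPD_FLAG_CPU_DOMAIN set, selects genpd_raw_spin_ops.
		 */
		return pm_genpd_init(&example_cpu_pd, NULL, false);
	}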
To allow a genpd provider for a CPU PM domain to enter a domain-idle-state
during s2idle on a PREEMPT_RT based configuration, we can't use regular
spinlocks, as they are turned into sleepable locks on PREEMPT_RT.

To address this problem, let's convert into using the raw spinlock, but
only for genpd providers that have the GENPD_FLAG_CPU_DOMAIN bit set. In
this way, the lock can still be acquired/released in atomic context, which
is needed in the idle-path for PREEMPT_RT.

Do note that the genpd power-on/off notifiers may also be fired during
s2idle, but these are already prepared for PREEMPT_RT as they are based on
the raw notifiers. However, consumers of them may need to adapt accordingly
to work properly on PREEMPT_RT.

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---

Changes in v2:
	- None.

---
 drivers/pmdomain/core.c   | 47 ++++++++++++++++++++++++++++++++++++++-
 include/linux/pm_domain.h |  5 ++++-
 2 files changed, 50 insertions(+), 2 deletions(-)
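[Editorial illustration of the notifier note above, with illustrative names: a consumer registering a genpd power-on/off notifier via dev_pm_genpd_add_notifier(). Since these notifiers are raw notifiers, on PREEMPT_RT the callback can be invoked in atomic context and must not sleep or take sleepable locks.]

	#include <linux/device.h>
	#include <linux/notifier.h>
	#include <linux/pm_domain.h>

	static int example_pd_notifier_cb(struct notifier_block *nb,
					  unsigned long action, void *data)
	{
		switch (action) {
		case GENPD_NOTIFY_PRE_OFF:
			/* Save hardware state using only atomic-safe operations. */
			break;
		case GENPD_NOTIFY_ON:
			/* Restore state; may still be atomic context on PREEMPT_RT. */
			break;
		default:
			break;
		}
		return NOTIFY_OK;
	}

	static struct notifier_block example_pd_nb = {
		.notifier_call = example_pd_notifier_cb,
	};

	static int example_consumer_probe(struct device *dev)
	{
		/* The device must already be attached to the genpd provider. */
		return dev_pm_genpd_add_notifier(dev, &example_pd_nb);
	}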