Message ID: 1472737551-15272-2-git-send-email-m.szyprowski@samsung.com (mailing list archive)
State: Not Applicable
Hi Marek,
[auto build test WARNING on clk/clk-next]
[also build test WARNING on v4.8-rc4 next-20160825]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
[Suggest to use git(>=2.9.0) format-patch --base=<commit> (or --base=auto for convenience) to record what (public, well-known) commit your patch series was built on]
[Check https://git-scm.com/docs/git-format-patch for more information]
url: https://github.com/0day-ci/linux/commits/Marek-Szyprowski/Add-runtime-PM-support-for-clocks-on-Exynos-SoC-example/20160901-215042
base: https://git.kernel.org/pub/scm/linux/kernel/git/clk/linux.git clk-next
coccinelle warnings: (new ones prefixed by >>)
>> drivers/clk/clk.c:123:9-10: WARNING: return of 0/1 in function 'clk_pm_runtime_suspended' with return type bool
Please review and possibly fold the followup patch.
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
--
To unsubscribe from this list: send the line "unsubscribe linux-samsung-soc" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On 09/01, Marek Szyprowski wrote:
> Registers for some clocks might be located in the SOC area, which are under the
> power domain. To enable access to those registers respective domain has to be
> turned on. Additionally, registers for such clocks will usually loose its
> contents when power domain is turned off, so additional saving and restoring of
> them might be needed in the clock controller driver.
>
> This patch adds basic infrastructure in the clocks core to allow implementing
> driver for such clocks under power domains. Clock provider can supply a
> struct device pointer, which is the used by clock core for tracking and managing
> clock's controller runtime pm state. Each clk_prepare() operation
> will first call pm_runtime_get_sync() on the supplied device, while
> clk_unprepare() will do pm_runtime_put() at the end.
>
> Additional calls to pm_runtime_get/put functions are required to ensure that any
> register access (like calculating/chaning clock rates) will be done with clock
> controller in active runtime state.
>
> Special handling of the case when runtime pm is disabled for clock controller's
> device is needed to let this feature work properly also during system sleep
> suspend/resume operations (runtime pm is first disabled before entering sleep
> state's, but controller is usually still operational until its suspend pm
> callback is called).
>
> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

My "knee jerk" concern is that we're going to take a runtime PM lock underneath the prepare lock. That seems like a situation where we could hit a lock inversion if the runtime PM callbacks themselves acquire the prepare lock by calling clk APIs? But this concern is false right? We release the runtime PM lock before calling the PM callback, so we shouldn't hit any deadlock and lockdep won't complain?
Hi Stephen,

On 2016-09-08 02:19, Stephen Boyd wrote:
> On 09/01, Marek Szyprowski wrote:
[...]
> My "knee jerk" concern is that we're going to take a runtime PM
> lock underneath the prepare lock. That seems like a situation
> where we could hit a lock inversion if the runtime PM callbacks
> themselves acquire the prepare lock by calling clk APIs? But this
> concern is false right? We release the runtime PM lock before
> calling the PM callback, so we shouldn't hit any deadlock and
> lockdep won't complain?

Runtime PM uses fine-grained locking based on per-device locks, so there should be no problem with the global clock prepare lock. The only lock interaction is between the clock controller device's rpm lock and the global clock prepare lock, and it is always done with the same access pattern. I've tested it extensively (also with lockdep) with various use cases and found no problems.

Best regards
On 09/12, Marek Szyprowski wrote:
> Hi Stephen,
>
> On 2016-09-08 02:19, Stephen Boyd wrote:
> >On 09/01, Marek Szyprowski wrote:
[...]
> Runtime PM uses fine grained locking based on per-device locks, so there
> should be no problem with global clock prepare lock. The only lock
> interaction is between clock controller device's rpm lock and clocks global
> prepare lock, but it always done with the same access pattern.
[...]

Great! So you have runtime PM callbacks that are calling clk_prepare/unprepare?
On 8 September 2016 at 02:19, Stephen Boyd <sboyd@codeaurora.org> wrote:
> On 09/01, Marek Szyprowski wrote:
[...]
> My "knee jerk" concern is that we're going to take a runtime PM
> lock underneath the prepare lock. That seems like a situation
> where we could hit a lock inversion if the runtime PM callbacks
> themselves acquire the prepare lock by calling clk APIs? But this
> concern is false right? We release the runtime PM lock before
> calling the PM callback, so we shouldn't hit any deadlock and
> lockdep won't complain?

Your assumption is correct! Before the runtime PM core invokes a runtime PM callback it will unlock the dev->power.lock spinlock. When the callback returns it will re-lock the spinlock.

Kind regards
Uffe
On 1 September 2016 at 15:45, Marek Szyprowski <m.szyprowski@samsung.com> wrote:
> Registers for some clocks might be located in the SOC area, which are under the
> power domain. To enable access to those registers respective domain has to be
> turned on. Additionally, registers for such clocks will usually loose its
> contents when power domain is turned off, so additional saving and restoring of
> them might be needed in the clock controller driver.

This is indeed correct, I can confirm that the UX500 SoC's PRCC clock controllers also need to be managed like this.

> This patch adds basic infrastructure in the clocks core to allow implementing
> driver for such clocks under power domains. Clock provider can supply a
> struct device pointer, which is the used by clock core for tracking and managing
> clock's controller runtime pm state. Each clk_prepare() operation
> will first call pm_runtime_get_sync() on the supplied device, while
> clk_unprepare() will do pm_runtime_put() at the end.

This makes sense!

> Additional calls to pm_runtime_get/put functions are required to ensure that any
> register access (like calculating/chaning clock rates) will be done with clock

/s/chaning/changing

> controller in active runtime state.

/s/active runtime/runtime resumed

> Special handling of the case when runtime pm is disabled for clock controller's
> device is needed to let this feature work properly also during system sleep
> suspend/resume operations (runtime pm is first disabled before entering sleep
> state's, but controller is usually still operational until its suspend pm
> callback is called).

This needs to be clarified. I agree we need to cover system PM as well, but let's try to be a bit more precise about it.

> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

I would also like to extend the change log to describe a little bit of how a clk provider should interact with this new and nice feature. Something like:

*) It needs to provide a struct device to the core when registering the provider.
**) It needs to enable runtime PM.
***) It needs to make sure the runtime PM status of the controller device reflects the HW state.

> ---
>  drivers/clk/clk.c | 84 +++++++++++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 76 insertions(+), 8 deletions(-)
>
[...]
> +/*** runtime pm ***/
> +static int clk_pm_runtime_get(struct clk_core *core)
> +{
> +	int ret = 0;
> +
> +	if (!core->dev)
> +		return 0;
> +
> +	if (pm_runtime_enabled(core->dev)) {

Why do you need to check for this?

> +		ret = pm_runtime_get_sync(core->dev);
> +	} else {
> +		if (!pm_runtime_status_suspended(core->dev))
> +			pm_runtime_get_noresume(core->dev);

This looks weird. I guess it's related to the system PM case somehow?

> +	}
> +	return ret < 0 ? ret : 0;
> +}
> +
> +static void clk_pm_runtime_put(struct clk_core *core)
> +{

Similar comments as for clk_pm_runtime_get().

> +	if (!core->dev)
> +		return;
> +
> +	if (pm_runtime_enabled(core->dev))
> +		pm_runtime_put(core->dev);
> +	else
> +		pm_runtime_put_noidle(core->dev);
> +}
> +
[...]
>  static bool clk_core_is_prepared(struct clk_core *core)
>  {
> +	if (clk_pm_runtime_suspended(core))
> +		return false;
> +

This isn't safe, as even if the clock controller is runtime resumed at this point, that's *not* a guarantee that it stays runtime resumed while invoking the ->ops->is_prepared().

Instead you must call a pm_runtime_get_noresume() before you check the runtime PM status, as that should avoid the device from being runtime suspended. Then when the ->ops->is_prepared() has been invoked, we should call pm_runtime_put().

Although, I am not sure the above change becomes entirely correct as I think we are mixing the runtime PM status with the clock prepare status here. In other words, the next time the clock controller becomes runtime resumed, it may very well restore some register context which may prepare the clock, unless someone explicitly has unprepared it.

Of course, it all depends on how clk_core_is_prepared() is used by the clock framework.

>  static bool clk_core_is_enabled(struct clk_core *core)
>  {
> +	if (clk_pm_runtime_suspended(core))
> +		return false;
> +

Similar comment as for clk_core_is_prepared().

[...]

I believe we are also accessing the clock controller HW from the late_initcall_sync(clk_disable_unused) function. More precisely, in clk_disable_unused_subtree(), we probably need a pm_runtime_get_sync() before calling clk_core_is_enabled(). And then restore that with a pm_runtime_put() after the clock has been disabled. The similar is needed in clk_unprepare_unused_subtree().

Kind regards
Uffe
Hi Stephen,

On 2016-09-13 00:31, Stephen Boyd wrote:
> On 09/12, Marek Szyprowski wrote:
>> Hi Stephen,
>>
>> On 2016-09-08 02:19, Stephen Boyd wrote:
>>> On 09/01, Marek Szyprowski wrote:
[...]
>> Runtime PM uses fine grained locking based on per-device locks, so there
>> should be no problem with global clock prepare lock.
[...]
> Great! So you have runtime PM callbacks that are calling
> clk_prepare/unprepare?

Well, not really. The clock controller's runtime pm functions must not call clk_prepare/unprepare yet. I didn't get your question at first; I thought you were asking whether my change introduces any deadlock related to the prepare and dev->pm locks. My runtime pm functions don't make any calls to clk_prepare/unprepare. Although the global clock prepare lock is re-entrant from the same process, it would cause a deadlock if taken from runtime pm functions, because runtime pm functions might be called from a worker running on a different cpu/process. I hope that the work started by Krzysztof Kozlowski on splitting the prepare lock on a per-controller basis will remove this limitation, so one will be able to call clk_prepare/unprepare on clocks from other controllers even from the runtime pm functions.

Best regards
Hi Ulf, Thanks for looking into this patch! On 2016-09-13 10:49, Ulf Hansson wrote: > On 1 September 2016 at 15:45, Marek Szyprowski <m.szyprowski@samsung.com> wrote: >> Registers for some clocks might be located in the SOC area, which are under the >> power domain. To enable access to those registers respective domain has to be >> turned on. Additionally, registers for such clocks will usually loose its >> contents when power domain is turned off, so additional saving and restoring of >> them might be needed in the clock controller driver. > This is indeed correct, I can confirm that the UX500 SoC's PRCC clock > controllers also needs to be managed like this. > >> This patch adds basic infrastructure in the clocks core to allow implementing >> driver for such clocks under power domains. Clock provider can supply a >> struct device pointer, which is the used by clock core for tracking and managing >> clock's controller runtime pm state. Each clk_prepare() operation >> will first call pm_runtime_get_sync() on the supplied device, while >> clk_unprepare() will do pm_runtime_put() at the end. > This make sense! > >> Additional calls to pm_runtime_get/put functions are required to ensure that any >> register access (like calculating/chaning clock rates) will be done with clock > /s/chaning/changing > >> controller in active runtime state. > /s/active runtime/runtime resumed > >> Special handling of the case when runtime pm is disabled for clock controller's >> device is needed to let this feature work properly also during system sleep >> suspend/resume operations (runtime pm is first disabled before entering sleep >> state's, but controller is usually still operational until its suspend pm >> callback is called). > This needs to be clarified. I agree we need to cover system PM as > well, but let's try be a bit more precise about it. Right, I wasn't precise here. 
I've developed this code on older (v4.1 and v4.6) kernels, which had a code which disables runtime pm during system sleep transition time. Maybe I need to revisit it and consider your change merged to v4.8-rc1, which keeps runtime pm enabled during system sleep transitions. > >> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> > I would also like to extend the change log to describe a little bit of > how a clk provider should interact with this new and nice feature. > Something like: > > *) It needs to provide a struct device to the core when registering > the provider. > **) It needs to enable runtime PM. > ***) It needs to make sure the runtime PM status of the controller > device reflects the HW state. Right, this definitely has to be added. Thank you for reminding about such obvious things. >> --- >> drivers/clk/clk.c | 84 +++++++++++++++++++++++++++++++++++++++++++++++++------ >> 1 file changed, 76 insertions(+), 8 deletions(-) >> >> diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c >> index 820a939fb6bb..a1934e9b4e95 100644 >> --- a/drivers/clk/clk.c >> +++ b/drivers/clk/clk.c >> @@ -21,6 +21,7 @@ >> #include <linux/of.h> >> #include <linux/device.h> >> #include <linux/init.h> >> +#include <linux/pm_runtime.h> >> #include <linux/sched.h> >> #include <linux/clkdev.h> >> >> @@ -46,6 +47,7 @@ struct clk_core { >> const struct clk_ops *ops; >> struct clk_hw *hw; >> struct module *owner; >> + struct device *dev; >> struct clk_core *parent; >> const char **parent_names; >> struct clk_core **parents; >> @@ -87,6 +89,42 @@ struct clk { >> struct hlist_node clks_node; >> }; >> >> +/*** runtime pm ***/ >> +static int clk_pm_runtime_get(struct clk_core *core) >> +{ >> + int ret = 0; >> + >> + if (!core->dev) >> + return 0; >> + >> + if (pm_runtime_enabled(core->dev)) { > Why do you need to check for this? This was a workaround, which let it work during the system sleep transition state. 
> >> + ret = pm_runtime_get_sync(core->dev); >> + } else { >> + if (!pm_runtime_status_suspended(core->dev)) >> + pm_runtime_get_noresume(core->dev); > This looks weird. I guess it's related to the system PM case somehow? > >> + } >> + return ret < 0 ? ret : 0; >> +} >> + >> +static void clk_pm_runtime_put(struct clk_core *core) >> +{ > Similar comments as for clk_pm_runtime_get(). > >> + if (!core->dev) >> + return; >> + >> + if (pm_runtime_enabled(core->dev)) >> + pm_runtime_put(core->dev); >> + else >> + pm_runtime_put_noidle(core->dev); >> +} >> + >> +static bool clk_pm_runtime_suspended(struct clk_core *core) >> +{ >> + if (!core->dev) >> + return 0; >> + >> + return pm_runtime_suspended(core->dev); >> +} >> + >> /*** locking ***/ >> static void clk_prepare_lock(void) >> { >> @@ -150,6 +188,9 @@ static void clk_enable_unlock(unsigned long flags) >> >> static bool clk_core_is_prepared(struct clk_core *core) >> { >> + if (clk_pm_runtime_suspended(core)) >> + return false; >> + > This isn't safe, as even if the clock controller is runtime resumed at > this point, that's *not* a guarantee that is stays runtime resumed > while invoking the ->ops->is_prepared(). > > Instead you must call a pm_runtime_get_noresume() before you check the > runtime PM status, as that should avoid the device from being runtime > suspended. Then when the ->ops->is_prepared() has been invoked, we > should call pm_runtime_put(). > > Although, I am not sure the above change becomes entirely correct as I > think we are mixing the runtime PM status with the clock prepare > status here. In other words, the next time the clock controller > becomes runtime resumed, it may very well restore some register > context which may prepare the clock, unless someone explicitly has > unprepared it. > > Of course, it all depends on how clk_core_is_prepared() is used by the > clock framework. clk_core_is_prepared() is mainly used by disable_unused_tree_*. 
You are right that it mixes a bit clock prepared state with runtime pm active state of clock controller's, but I assumed here that clock cannot be prepared if runtime pm state of controller is suspended. Other approach here would be to call pm_runtime_get(), check status and then pm_runtime_put(). If you prefer such approach, I will change it. > >> /* >> * .is_prepared is optional for clocks that can prepare >> * fall back to software usage counter if it is missing >> @@ -162,6 +203,9 @@ static bool clk_core_is_prepared(struct clk_core *core) >> >> static bool clk_core_is_enabled(struct clk_core *core) >> { >> + if (clk_pm_runtime_suspended(core)) >> + return false; >> + > Similar comment as for clk_core_is_prepared(). > >> /* >> * .is_enabled is only mandatory for clocks that gate >> * fall back to software usage counter if .is_enabled is missing >> @@ -489,6 +533,8 @@ static void clk_core_unprepare(struct clk_core *core) >> if (core->ops->unprepare) >> core->ops->unprepare(core->hw); >> >> + clk_pm_runtime_put(core); >> + >> trace_clk_unprepare_complete(core); >> clk_core_unprepare(core->parent); >> } >> @@ -530,10 +576,14 @@ static int clk_core_prepare(struct clk_core *core) >> return 0; >> >> if (core->prepare_count == 0) { >> - ret = clk_core_prepare(core->parent); >> + ret = clk_pm_runtime_get(core); >> if (ret) >> return ret; >> >> + ret = clk_core_prepare(core->parent); >> + if (ret) >> + goto runtime_put; >> + >> trace_clk_prepare(core); >> >> if (core->ops->prepare) >> @@ -541,15 +591,18 @@ static int clk_core_prepare(struct clk_core *core) >> >> trace_clk_prepare_complete(core); >> >> - if (ret) { >> - clk_core_unprepare(core->parent); >> - return ret; >> - } >> + if (ret) >> + goto unprepare; >> } >> >> core->prepare_count++; >> >> return 0; >> +unprepare: >> + clk_core_unprepare(core->parent); >> +runtime_put: >> + clk_pm_runtime_put(core); >> + return ret; >> } >> >> static int clk_core_prepare_lock(struct clk_core *core) >> @@ -1563,6 +1616,7 @@ 
static int clk_core_set_rate_nolock(struct clk_core *core, >> { >> struct clk_core *top, *fail_clk; >> unsigned long rate = req_rate; >> + int ret = 0; >> >> if (!core) >> return 0; >> @@ -1579,21 +1633,28 @@ static int clk_core_set_rate_nolock(struct clk_core *core, >> if (!top) >> return -EINVAL; >> >> + ret = clk_pm_runtime_get(core); >> + if (ret) >> + return ret; >> + >> /* notify that we are about to change rates */ >> fail_clk = clk_propagate_rate_change(top, PRE_RATE_CHANGE); >> if (fail_clk) { >> pr_debug("%s: failed to set %s rate\n", __func__, >> fail_clk->name); >> clk_propagate_rate_change(top, ABORT_RATE_CHANGE); >> - return -EBUSY; >> + ret = -EBUSY; >> + goto err; >> } >> >> /* change the rates */ >> clk_change_rate(top); >> >> core->req_rate = req_rate; >> +err: >> + clk_pm_runtime_put(core); >> >> - return 0; >> + return ret; >> } >> >> /** >> @@ -1824,12 +1885,16 @@ static int clk_core_set_parent(struct clk_core *core, struct clk_core *parent) >> p_rate = parent->rate; >> } >> >> + ret = clk_pm_runtime_get(core); >> + if (ret) >> + goto out; >> + >> /* propagate PRE_RATE_CHANGE notifications */ >> ret = __clk_speculate_rates(core, p_rate); >> >> /* abort if a driver objects */ >> if (ret & NOTIFY_STOP_MASK) >> - goto out; >> + goto runtime_put; >> >> /* do the re-parent */ >> ret = __clk_set_parent(core, parent, p_index); >> @@ -1842,6 +1907,8 @@ static int clk_core_set_parent(struct clk_core *core, struct clk_core *parent) >> __clk_recalc_accuracies(core); >> } >> >> +runtime_put: >> + clk_pm_runtime_put(core); >> out: >> clk_prepare_unlock(); >> >> @@ -2546,6 +2613,7 @@ struct clk *clk_register(struct device *dev, struct clk_hw *hw) >> goto fail_name; >> } >> core->ops = hw->init->ops; >> + core->dev = dev; >> if (dev && dev->driver) >> core->owner = dev->driver->owner; >> core->hw = hw; >> -- >> 1.9.1 >> > I believe we are also accessing the clock controller HW from the > late_initcall_sync(clk_disable_unused) function. 
This was indirectly handled by the runtime PM state check in
is_prepared() and is_enabled().

> More precisely, in clk_disable_unused_subtree(), we probably need a
> pm_runtime_get_sync() before calling clk_core_is_enabled(). And then
> restore that with a pm_runtime_put() after the clock has been
> disabled.
> Something similar is needed in clk_unprepare_unused_subtree().

Best regards
[...]

>>
>> This needs to be clarified. I agree we need to cover system PM as
>> well, but let's try to be a bit more precise about it.
>
>
> Right, I wasn't precise here. I've developed this code on older (v4.1
> and v4.6) kernels, which had code that disables runtime PM during the
> system sleep transition. Maybe I need to revisit it and consider your
> change merged to v4.8-rc1, which keeps runtime PM enabled during
> system sleep transitions.

Right, I see.

>>> static bool clk_core_is_prepared(struct clk_core *core)
>>> {
>>> +	if (clk_pm_runtime_suspended(core))
>>> +		return false;
>>> +
>>
>> This isn't safe, as even if the clock controller is runtime resumed at
>> this point, that's *not* a guarantee that it stays runtime resumed
>> while invoking the ->ops->is_prepared().
>>
>> Instead you must call pm_runtime_get_noresume() before you check the
>> runtime PM status, as that should prevent the device from being
>> runtime suspended. Then when the ->ops->is_prepared() has been
>> invoked, we should call pm_runtime_put().
>>
>> Although, I am not sure the above change becomes entirely correct, as I
>> think we are mixing the runtime PM status with the clock prepare
>> status here. In other words, the next time the clock controller
>> becomes runtime resumed, it may very well restore some register
>> context which may prepare the clock, unless someone explicitly has
>> unprepared it.
>>
>> Of course, it all depends on how clk_core_is_prepared() is used by the
>> clock framework.
>
> clk_core_is_prepared() is mainly used by disable_unused_tree_*. You are
> right that it mixes the clock's prepared state with the clock
> controller's runtime PM state a bit, but I assumed here that a clock
> cannot be prepared if the controller's runtime PM state is suspended.
> Another approach would be to call pm_runtime_get(), check the status
> and then pm_runtime_put(). If you prefer that approach, I will change
> it.

Using pm_runtime_get|put() would work for the clk_core_is_prepared()
case, although perhaps not for the clk_core_is_enabled() case.

The reason is that I guess the clk_core_is_enabled() API may be called
from atomic context? Thus we would need to enable pm_runtime_irq_safe()
for the clock provider device, which I *really* would like to avoid.

[...]

>>
>> I believe we are also accessing the clock controller HW from the
>> late_initcall_sync(clk_disable_unused) function.
>
> This was indirectly handled by the runtime PM state check in
> is_prepared and is_enabled().

I see.

Although, I was thinking that you explicitly would like to
disable/unprepare unused clocks in this phase, so then it isn't
sufficient to rely on the runtime PM status to know whether the clock
is prepared/enabled.

Perhaps, this is the only case when you actually need a
pm_runtime_get|put() around the ->is_enabled|prepared()!?

>
>> More precisely, in clk_disable_unused_subtree(), we probably need a
>> pm_runtime_get_sync() before calling clk_core_is_enabled(). And then
>> restore that with a pm_runtime_put() after the clock has been
>> disabled.
>> Something similar is needed in clk_unprepare_unused_subtree().
>
> Best regards
> --
> Marek Szyprowski, PhD
> Samsung R&D Institute Poland

Kind regards
Uffe
--
To unsubscribe from this list: send the line "unsubscribe linux-samsung-soc" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Hi Ulf,

On 2016-09-13 17:03, Ulf Hansson wrote:
> [...]
>
>>> This needs to be clarified. I agree we need to cover system PM as
>>> well, but let's try to be a bit more precise about it.
>>
>> Right, I wasn't precise here. I've developed this code on older (v4.1
>> and v4.6) kernels, which had code that disables runtime PM during the
>> system sleep transition. Maybe I need to revisit it and consider your
>> change merged to v4.8-rc1, which keeps runtime PM enabled during
>> system sleep transitions.
> Right, I see.
>
>>>> static bool clk_core_is_prepared(struct clk_core *core)
>>>> {
>>>> +	if (clk_pm_runtime_suspended(core))
>>>> +		return false;
>>>> +
>>> This isn't safe, as even if the clock controller is runtime resumed at
>>> this point, that's *not* a guarantee that it stays runtime resumed
>>> while invoking the ->ops->is_prepared().
>>>
>>> Instead you must call pm_runtime_get_noresume() before you check the
>>> runtime PM status, as that should prevent the device from being
>>> runtime suspended. Then when the ->ops->is_prepared() has been
>>> invoked, we should call pm_runtime_put().
>>>
>>> Although, I am not sure the above change becomes entirely correct, as I
>>> think we are mixing the runtime PM status with the clock prepare
>>> status here. In other words, the next time the clock controller
>>> becomes runtime resumed, it may very well restore some register
>>> context which may prepare the clock, unless someone explicitly has
>>> unprepared it.
>>>
>>> Of course, it all depends on how clk_core_is_prepared() is used by the
>>> clock framework.
>>
>> clk_core_is_prepared() is mainly used by disable_unused_tree_*. You are
>> right that it mixes the clock's prepared state with the clock
>> controller's runtime PM state a bit, but I assumed here that a clock
>> cannot be prepared if the controller's runtime PM state is suspended.
>> Another approach would be to call pm_runtime_get(), check the status
>> and then pm_runtime_put(). If you prefer that approach, I will change
>> it.
> Using pm_runtime_get|put() would work for the clk_core_is_prepared()
> case, although perhaps not for the clk_core_is_enabled() case.
>
> The reason is that I guess the clk_core_is_enabled() API may be called
> from atomic context? Thus we would need to enable
> pm_runtime_irq_safe() for the clock provider device, which I *really*
> would like to avoid.

I've checked: clk_core_is_enabled() is only used for disabling unused
clock trees or for implementing the ->is_enabled() callback, which
serves the same purpose, so it should be safe to use the standard
pm_runtime_get/put there. There should be no other use cases for the
->is_enabled() method, as it is inherently racy anyway: another caller
might enable/disable the given clock in the meantime. I will remove the
clk_pm_runtime_suspended() usage then.

> [...]
>
>>> I believe we are also accessing the clock controller HW from the
>>> late_initcall_sync(clk_disable_unused) function.
>>
>> This was indirectly handled by the runtime PM state check in
>> is_prepared and is_enabled().
> I see.
>
> Although, I was thinking that you explicitly would like to
> disable/unprepare unused clocks in this phase, so then it isn't
> sufficient to rely on the runtime PM status to know whether the clock
> is prepared/enabled.
>
> Perhaps, this is the only case when you actually need a
> pm_runtime_get|put() around the ->is_enabled|prepared()!?

Right.

[...]

Best regards
On 09/13, Marek Szyprowski wrote:
> On 2016-09-13 00:31, Stephen Boyd wrote:
> >
> > Great! So you have runtime PM callbacks that are calling
> > clk_prepare/unprepare?
>
> Well, not really. The clock controller's runtime PM functions must not
> call clk_prepare/unprepare yet.
>
> I didn't get your question. I thought that you were asking whether my
> change introduces any deadlock between the prepare and dev->pm locks.
> My runtime PM functions don't make any calls to clk_prepare/unprepare.
> Although the global clock prepare lock is re-entrant from the same
> process, it would cause a deadlock if taken from runtime PM functions,
> because those might be called from a worker running on a different
> cpu/process.

I mean runtime PM callbacks of non-clk-controller drivers that call
clk_prepare/unprepare in them. For example, some i2c or spi device
driver that has clk operations in its runtime PM callbacks. That would
allow lockdep to see any potential deadlock because of aliasing lock
classes for the device power lock and the global prepare lock.
Hi Stephen,

On 2016-09-14 23:39, Stephen Boyd wrote:
> On 09/13, Marek Szyprowski wrote:
>> On 2016-09-13 00:31, Stephen Boyd wrote:
>>> Great! So you have runtime PM callbacks that are calling
>>> clk_prepare/unprepare?
>> Well, not really. The clock controller's runtime PM functions must not
>> call clk_prepare/unprepare yet.
>>
>> I didn't get your question. I thought that you were asking whether my
>> change introduces any deadlock between the prepare and dev->pm locks.
>> My runtime PM functions don't make any calls to clk_prepare/unprepare.
>> Although the global clock prepare lock is re-entrant from the same
>> process, it would cause a deadlock if taken from runtime PM functions,
>> because those might be called from a worker running on a different
>> cpu/process.
> I mean runtime PM callbacks of non-clk-controller drivers that call
> clk_prepare/unprepare in them. For example, some i2c or spi device
> driver that has clk operations in its runtime PM callbacks. That would
> allow lockdep to see any potential deadlock because of aliasing lock
> classes for the device power lock and the global prepare lock.

This works perfectly fine. Runtime PM callbacks are called with the
power lock released, so no deadlock related to dev->power.lock is
possible. See the __rpm_callback() function in
drivers/base/power/runtime.c.

Best regards
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 820a939fb6bb..a1934e9b4e95 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -21,6 +21,7 @@
 #include <linux/of.h>
 #include <linux/device.h>
 #include <linux/init.h>
+#include <linux/pm_runtime.h>
 #include <linux/sched.h>
 #include <linux/clkdev.h>

@@ -46,6 +47,7 @@ struct clk_core {
 	const struct clk_ops	*ops;
 	struct clk_hw		*hw;
 	struct module		*owner;
+	struct device		*dev;
 	struct clk_core		*parent;
 	const char		**parent_names;
 	struct clk_core		**parents;
@@ -87,6 +89,42 @@ struct clk {
 	struct hlist_node clks_node;
 };

+/*** runtime pm ***/
+static int clk_pm_runtime_get(struct clk_core *core)
+{
+	int ret = 0;
+
+	if (!core->dev)
+		return 0;
+
+	if (pm_runtime_enabled(core->dev)) {
+		ret = pm_runtime_get_sync(core->dev);
+	} else {
+		if (!pm_runtime_status_suspended(core->dev))
+			pm_runtime_get_noresume(core->dev);
+	}
+	return ret < 0 ? ret : 0;
+}
+
+static void clk_pm_runtime_put(struct clk_core *core)
+{
+	if (!core->dev)
+		return;
+
+	if (pm_runtime_enabled(core->dev))
+		pm_runtime_put(core->dev);
+	else
+		pm_runtime_put_noidle(core->dev);
+}
+
+static bool clk_pm_runtime_suspended(struct clk_core *core)
+{
+	if (!core->dev)
+		return 0;
+
+	return pm_runtime_suspended(core->dev);
+}
+
 /*** locking ***/
 static void clk_prepare_lock(void)
 {
@@ -150,6 +188,9 @@ static void clk_enable_unlock(unsigned long flags)

 static bool clk_core_is_prepared(struct clk_core *core)
 {
+	if (clk_pm_runtime_suspended(core))
+		return false;
+
 	/*
 	 * .is_prepared is optional for clocks that can prepare
 	 * fall back to software usage counter if it is missing
@@ -162,6 +203,9 @@ static bool clk_core_is_prepared(struct clk_core *core)

 static bool clk_core_is_enabled(struct clk_core *core)
 {
+	if (clk_pm_runtime_suspended(core))
+		return false;
+
 	/*
 	 * .is_enabled is only mandatory for clocks that gate
 	 * fall back to software usage counter if .is_enabled is missing
@@ -489,6 +533,8 @@ static void clk_core_unprepare(struct clk_core *core)
 	if (core->ops->unprepare)
 		core->ops->unprepare(core->hw);

+	clk_pm_runtime_put(core);
+
 	trace_clk_unprepare_complete(core);
 	clk_core_unprepare(core->parent);
 }
@@ -530,10 +576,14 @@ static int clk_core_prepare(struct clk_core *core)
 		return 0;

 	if (core->prepare_count == 0) {
-		ret = clk_core_prepare(core->parent);
+		ret = clk_pm_runtime_get(core);
 		if (ret)
 			return ret;

+		ret = clk_core_prepare(core->parent);
+		if (ret)
+			goto runtime_put;
+
 		trace_clk_prepare(core);

 		if (core->ops->prepare)
@@ -541,15 +591,18 @@ static int clk_core_prepare(struct clk_core *core)

 		trace_clk_prepare_complete(core);

-		if (ret) {
-			clk_core_unprepare(core->parent);
-			return ret;
-		}
+		if (ret)
+			goto unprepare;
 	}

 	core->prepare_count++;

 	return 0;
+unprepare:
+	clk_core_unprepare(core->parent);
+runtime_put:
+	clk_pm_runtime_put(core);
+	return ret;
 }

 static int clk_core_prepare_lock(struct clk_core *core)
@@ -1563,6 +1616,7 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
 {
 	struct clk_core *top, *fail_clk;
 	unsigned long rate = req_rate;
+	int ret = 0;

 	if (!core)
 		return 0;
@@ -1579,21 +1633,28 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
 	if (!top)
 		return -EINVAL;

+	ret = clk_pm_runtime_get(core);
+	if (ret)
+		return ret;
+
 	/* notify that we are about to change rates */
 	fail_clk = clk_propagate_rate_change(top, PRE_RATE_CHANGE);
 	if (fail_clk) {
 		pr_debug("%s: failed to set %s rate\n", __func__,
 				fail_clk->name);
 		clk_propagate_rate_change(top, ABORT_RATE_CHANGE);
-		return -EBUSY;
+		ret = -EBUSY;
+		goto err;
 	}

 	/* change the rates */
 	clk_change_rate(top);

 	core->req_rate = req_rate;
+err:
+	clk_pm_runtime_put(core);

-	return 0;
+	return ret;
 }

 /**
@@ -1824,12 +1885,16 @@ static int clk_core_set_parent(struct clk_core *core, struct clk_core *parent)
 		p_rate = parent->rate;
 	}

+	ret = clk_pm_runtime_get(core);
+	if (ret)
+		goto out;
+
 	/* propagate PRE_RATE_CHANGE notifications */
 	ret = __clk_speculate_rates(core, p_rate);

 	/* abort if a driver objects */
 	if (ret & NOTIFY_STOP_MASK)
-		goto out;
+		goto runtime_put;

 	/* do the re-parent */
 	ret = __clk_set_parent(core, parent, p_index);
@@ -1842,6 +1907,8 @@ static int clk_core_set_parent(struct clk_core *core, struct clk_core *parent)
 		__clk_recalc_accuracies(core);
 	}

+runtime_put:
+	clk_pm_runtime_put(core);
 out:
 	clk_prepare_unlock();

@@ -2546,6 +2613,7 @@ struct clk *clk_register(struct device *dev, struct clk_hw *hw)
 		goto fail_name;
 	}
 	core->ops = hw->init->ops;
+	core->dev = dev;
 	if (dev && dev->driver)
 		core->owner = dev->driver->owner;
 	core->hw = hw;
Registers for some clocks might be located in an SoC area which is under
a power domain. To enable access to those registers, the respective
domain has to be turned on. Additionally, registers for such clocks will
usually lose their contents when the power domain is turned off, so
additional saving and restoring of them might be needed in the clock
controller driver.

This patch adds basic infrastructure in the clocks core to allow
implementing drivers for such clocks under power domains. A clock
provider can supply a struct device pointer, which is then used by the
clock core for tracking and managing the clock controller's runtime PM
state. Each clk_prepare() operation will first call
pm_runtime_get_sync() on the supplied device, while clk_unprepare() will
do pm_runtime_put() at the end.

Additional calls to the pm_runtime_get/put functions are required to
ensure that any register access (like calculating/changing clock rates)
will be done with the clock controller in the runtime-active state.

Special handling of the case when runtime PM is disabled for the clock
controller's device is needed to let this feature also work properly
during system sleep suspend/resume operations (runtime PM is disabled
before entering a sleep state, but the controller is usually still
operational until its suspend PM callback is called).

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
 drivers/clk/clk.c | 84 +++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 76 insertions(+), 8 deletions(-)