Message ID | 84e6f34f-bcd7-9358-b45a-27f17084012@baylibre.com (mailing list archive) |
---|---|
State | Mainlined, archived |
Series | [v2] PM / clk: make PM clock layer compatible with clocks that must sleep |
On Thu, Jan 21, 2021 at 6:23 PM Nicolas Pitre <npitre@baylibre.com> wrote: > > The clock API splits its interface into sleepable ant atomic contexts: > > - clk_prepare/clk_unprepare for stuff that might sleep > > - clk_enable_clk_disable for anything that may be done in atomic context > > The code handling runtime PM for clocks only calls clk_disable() on > suspend requests, and clk_enable on resume requests. This means that > runtime PM with clock providers that only have the prepare/unprepare > methods implemented is basically useless. > > Many clock implementations can't accommodate atomic contexts. This is > often the case when communication with the clock happens through another > subsystem like I2C or SCMI. > > Let's make the clock PM code useful with such clocks by safely invoking > clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when > such clocks are registered with the PM layer then pm_runtime_irq_safe() > can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume() > may be invoked in atomic context. > > For clocks that do implement the enable and disable methods then > everything just works as before. > > Signed-off-by: Nicolas Pitre <npitre@baylibre.com> > > --- > > On Thu, 21 Jan 2021, Rafael J. Wysocki wrote: > > > So I'm going to drop this patch from linux-next until the issue is > > resolved, thanks! > > Here's the fixed version. Applied instead of the v1, thanks! > Changes from v1: > > - Moved clk_is_enabled_when_prepared() declaration under > CONFIG_HAVE_CLK_PREPARE and provided a dummy definition when that > config option is unset. > > diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c > index ced6863a16..a62fb0f9b1 100644 > --- a/drivers/base/power/clock_ops.c > +++ b/drivers/base/power/clock_ops.c > @@ -23,6 +23,7 @@ > enum pce_status { > PCE_STATUS_NONE = 0, > PCE_STATUS_ACQUIRED, > + PCE_STATUS_PREPARED, > PCE_STATUS_ENABLED, > PCE_STATUS_ERROR, > }; > @@ -32,8 +33,102 @@ struct pm_clock_entry { > char *con_id; > struct clk *clk; > enum pce_status status; > + bool enabled_when_prepared; > }; > > +/** > + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock > + * entry list. > + * @psd: pm_subsys_data instance corresponding to the PM clock entry list > + * and clk_op_might_sleep count to be modified. > + * > + * Get exclusive access before modifying the PM clock entry list and the > + * clock_op_might_sleep count to guard against concurrent modifications. > + * This also protects against a concurrent clock_op_might_sleep and PM clock > + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not > + * happen in atomic context, hence both the mutex and the spinlock must be > + * taken here. > + */ > +static void pm_clk_list_lock(struct pm_subsys_data *psd) > +{ > + mutex_lock(&psd->clock_mutex); > + spin_lock_irq(&psd->lock); > +} > + > +/** > + * pm_clk_list_unlock - counterpart to pm_clk_list_lock(). > + * @psd: the same pm_subsys_data instance previously passed to > + * pm_clk_list_lock(). > + */ > +static void pm_clk_list_unlock(struct pm_subsys_data *psd) > +{ > + spin_unlock_irq(&psd->lock); > + mutex_unlock(&psd->clock_mutex); > +} > + > +/** > + * pm_clk_op_lock - ensure exclusive access for performing clock operations. > + * @psd: pm_subsys_data instance corresponding to the PM clock entry list > + * and clk_op_might_sleep count being used. > + * @flags: stored irq flags. > + * @fn: string for the caller function's name. 
> + * > + * This is used by pm_clk_suspend() and pm_clk_resume() to guard > + * against concurrent modifications to the clock entry list and the > + * clock_op_might_sleep count. If clock_op_might_sleep is != 0 then > + * only the mutex can be locked and those functions can only be used in > + * non atomic context. If clock_op_might_sleep == 0 then these functions > + * may be used in any context and only the spinlock can be locked. > + * Returns -EINVAL if called in atomic context when clock ops might sleep. > + */ > +static int pm_clk_op_lock(struct pm_subsys_data *psd, unsigned long *flags, > + const char *fn) > +{ > + bool atomic_context = in_atomic() || irqs_disabled(); > + > +try_again: > + spin_lock_irqsave(&psd->lock, *flags); > + if (!psd->clock_op_might_sleep) > + return 0; > + > + /* bail out if in atomic context */ > + if (atomic_context) { > + pr_err("%s: atomic context with clock_ops_might_sleep = %d", > + fn, psd->clock_op_might_sleep); > + spin_unlock_irqrestore(&psd->lock, *flags); > + might_sleep(); > + return -EPERM; > + } > + > + /* we must switch to the mutex */ > + spin_unlock_irqrestore(&psd->lock, *flags); > + mutex_lock(&psd->clock_mutex); > + > + /* > + * There was a possibility for psd->clock_op_might_sleep > + * to become 0 above. Keep the mutex only if not the case. > + */ > + if (likely(psd->clock_op_might_sleep)) > + return 0; > + > + mutex_unlock(&psd->clock_mutex); > + goto try_again; > +} > + > +/** > + * pm_clk_op_unlock - counterpart to pm_clk_op_lock(). > + * @psd: the same pm_subsys_data instance previously passed to > + * pm_clk_op_lock(). > + * @flags: irq flags provided by pm_clk_op_lock(). > + */ > +static void pm_clk_op_unlock(struct pm_subsys_data *psd, unsigned long *flags) > +{ > + if (psd->clock_op_might_sleep) > + mutex_unlock(&psd->clock_mutex); > + else > + spin_unlock_irqrestore(&psd->lock, *flags); > +} > + > /** > * pm_clk_enable - Enable a clock, reporting any errors > * @dev: The device for the given clock > @@ -43,14 +138,21 @@ static inline void __pm_clk_enable(struct device *dev, struct pm_clock_entry *ce > { > int ret; > > - if (ce->status < PCE_STATUS_ERROR) { > + switch (ce->status) { > + case PCE_STATUS_ACQUIRED: > + ret = clk_prepare_enable(ce->clk); > + break; > + case PCE_STATUS_PREPARED: > ret = clk_enable(ce->clk); > - if (!ret) > - ce->status = PCE_STATUS_ENABLED; > - else > - dev_err(dev, "%s: failed to enable clk %p, error %d\n", > - __func__, ce->clk, ret); > + break; > + default: > + return; > } > + if (!ret) > + ce->status = PCE_STATUS_ENABLED; > + else > + dev_err(dev, "%s: failed to enable clk %p, error %d\n", > + __func__, ce->clk, ret); > } > > /** > @@ -64,17 +166,20 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce) > ce->clk = clk_get(dev, ce->con_id); > if (IS_ERR(ce->clk)) { > ce->status = PCE_STATUS_ERROR; > + return; > + } else if (clk_is_enabled_when_prepared(ce->clk)) { > + /* we defer preparing the clock in that case */ > + ce->status = PCE_STATUS_ACQUIRED; > + ce->enabled_when_prepared = true; > + } else if (clk_prepare(ce->clk)) { > + ce->status = PCE_STATUS_ERROR; > + dev_err(dev, "clk_prepare() failed\n"); > + return; > } else { > - if (clk_prepare(ce->clk)) { > - ce->status = PCE_STATUS_ERROR; > - dev_err(dev, "clk_prepare() failed\n"); > - } else { > - ce->status = PCE_STATUS_ACQUIRED; > - dev_dbg(dev, > - "Clock %pC con_id %s managed by runtime PM.\n", > - ce->clk, ce->con_id); > - } > + ce->status = PCE_STATUS_PREPARED; > } > + dev_dbg(dev, "Clock %pC con_id %s managed 
by runtime PM.\n", > + ce->clk, ce->con_id); > } > > static int __pm_clk_add(struct device *dev, const char *con_id, > @@ -106,9 +211,11 @@ static int __pm_clk_add(struct device *dev, const char *con_id, > > pm_clk_acquire(dev, ce); > > - spin_lock_irq(&psd->lock); > + pm_clk_list_lock(psd); > list_add_tail(&ce->node, &psd->clock_list); > - spin_unlock_irq(&psd->lock); > + if (ce->enabled_when_prepared) > + psd->clock_op_might_sleep++; > + pm_clk_list_unlock(psd); > return 0; > } > > @@ -239,14 +346,20 @@ static void __pm_clk_remove(struct pm_clock_entry *ce) > if (!ce) > return; > > - if (ce->status < PCE_STATUS_ERROR) { > - if (ce->status == PCE_STATUS_ENABLED) > - clk_disable(ce->clk); > - > - if (ce->status >= PCE_STATUS_ACQUIRED) { > - clk_unprepare(ce->clk); > + switch (ce->status) { > + case PCE_STATUS_ENABLED: > + clk_disable(ce->clk); > + fallthrough; > + case PCE_STATUS_PREPARED: > + clk_unprepare(ce->clk); > + fallthrough; > + case PCE_STATUS_ACQUIRED: > + case PCE_STATUS_ERROR: > + if (!IS_ERR(ce->clk)) > clk_put(ce->clk); > - } > + break; > + default: > + break; > } > > kfree(ce->con_id); > @@ -269,7 +382,7 @@ void pm_clk_remove(struct device *dev, const char *con_id) > if (!psd) > return; > > - spin_lock_irq(&psd->lock); > + pm_clk_list_lock(psd); > > list_for_each_entry(ce, &psd->clock_list, node) { > if (!con_id && !ce->con_id) > @@ -280,12 +393,14 @@ void pm_clk_remove(struct device *dev, const char *con_id) > goto remove; > } > > - spin_unlock_irq(&psd->lock); > + pm_clk_list_unlock(psd); > return; > > remove: > list_del(&ce->node); > - spin_unlock_irq(&psd->lock); > + if (ce->enabled_when_prepared) > + psd->clock_op_might_sleep--; > + pm_clk_list_unlock(psd); > > __pm_clk_remove(ce); > } > @@ -307,19 +422,21 @@ void pm_clk_remove_clk(struct device *dev, struct clk *clk) > if (!psd || !clk) > return; > > - spin_lock_irq(&psd->lock); > + pm_clk_list_lock(psd); > > list_for_each_entry(ce, &psd->clock_list, node) { > if (clk == ce->clk) > goto remove; > } > > - spin_unlock_irq(&psd->lock); > + pm_clk_list_unlock(psd); > return; > > remove: > list_del(&ce->node); > - spin_unlock_irq(&psd->lock); > + if (ce->enabled_when_prepared) > + psd->clock_op_might_sleep--; > + pm_clk_list_unlock(psd); > > __pm_clk_remove(ce); > } > @@ -330,13 +447,16 @@ EXPORT_SYMBOL_GPL(pm_clk_remove_clk); > * @dev: Device to initialize the list of PM clocks for. > * > * Initialize the lock and clock_list members of the device's pm_subsys_data > - * object. > + * object, set the count of clocks that might sleep to 0. 
> */ > void pm_clk_init(struct device *dev) > { > struct pm_subsys_data *psd = dev_to_psd(dev); > - if (psd) > + if (psd) { > INIT_LIST_HEAD(&psd->clock_list); > + mutex_init(&psd->clock_mutex); > + psd->clock_op_might_sleep = 0; > + } > } > EXPORT_SYMBOL_GPL(pm_clk_init); > > @@ -372,12 +492,13 @@ void pm_clk_destroy(struct device *dev) > > INIT_LIST_HEAD(&list); > > - spin_lock_irq(&psd->lock); > + pm_clk_list_lock(psd); > > list_for_each_entry_safe_reverse(ce, c, &psd->clock_list, node) > list_move(&ce->node, &list); > + psd->clock_op_might_sleep = 0; > > - spin_unlock_irq(&psd->lock); > + pm_clk_list_unlock(psd); > > dev_pm_put_subsys_data(dev); > > @@ -397,23 +518,30 @@ int pm_clk_suspend(struct device *dev) > struct pm_subsys_data *psd = dev_to_psd(dev); > struct pm_clock_entry *ce; > unsigned long flags; > + int ret; > > dev_dbg(dev, "%s()\n", __func__); > > if (!psd) > return 0; > > - spin_lock_irqsave(&psd->lock, flags); > + ret = pm_clk_op_lock(psd, &flags, __func__); > + if (ret) > + return ret; > > list_for_each_entry_reverse(ce, &psd->clock_list, node) { > - if (ce->status < PCE_STATUS_ERROR) { > - if (ce->status == PCE_STATUS_ENABLED) > + if (ce->status == PCE_STATUS_ENABLED) { > + if (ce->enabled_when_prepared) { > + clk_disable_unprepare(ce->clk); > + ce->status = PCE_STATUS_ACQUIRED; > + } else { > clk_disable(ce->clk); > - ce->status = PCE_STATUS_ACQUIRED; > + ce->status = PCE_STATUS_PREPARED; > + } > } > } > > - spin_unlock_irqrestore(&psd->lock, flags); > + pm_clk_op_unlock(psd, &flags); > > return 0; > } > @@ -428,18 +556,21 @@ int pm_clk_resume(struct device *dev) > struct pm_subsys_data *psd = dev_to_psd(dev); > struct pm_clock_entry *ce; > unsigned long flags; > + int ret; > > dev_dbg(dev, "%s()\n", __func__); > > if (!psd) > return 0; > > - spin_lock_irqsave(&psd->lock, flags); > + ret = pm_clk_op_lock(psd, &flags, __func__); > + if (ret) > + return ret; > > list_for_each_entry(ce, &psd->clock_list, node) > __pm_clk_enable(dev, ce); > > - spin_unlock_irqrestore(&psd->lock, flags); > + pm_clk_op_unlock(psd, &flags); > > return 0; > } > diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c > index 8c1d04db99..3d751ae5bc 100644 > --- a/drivers/clk/clk.c > +++ b/drivers/clk/clk.c > @@ -1164,6 +1164,27 @@ int clk_enable(struct clk *clk) > } > EXPORT_SYMBOL_GPL(clk_enable); > > +/** > + * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it. > + * @clk: clock source > + * > + * Returns true if clk_prepare() implicitly enables the clock, effectively > + * making clk_enable()/clk_disable() no-ops, false otherwise. > + * > + * This is of interest mainly to power management code where actually > + * disabling the clock also requires unpreparing it to have any material > + * effect. > + * > + * Regardless of the value returned here, the caller must always invoke > + * clk_enable() or clk_prepare_enable() and counterparts for usage counts > + * to be right. 
> + */ > +bool clk_is_enabled_when_prepared(struct clk *clk) > +{ > + return clk && !(clk->core->ops->enable && clk->core->ops->disable); > +} > +EXPORT_SYMBOL_GPL(clk_is_enabled_when_prepared); > + > static int clk_core_prepare_enable(struct clk_core *core) > { > int ret; > diff --git a/include/linux/clk.h b/include/linux/clk.h > index 31ff1bf1b7..a4a86aa8b1 100644 > --- a/include/linux/clk.h > +++ b/include/linux/clk.h > @@ -238,6 +238,7 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q) > > #endif > > +#ifdef CONFIG_HAVE_CLK_PREPARE > /** > * clk_prepare - prepare a clock source > * @clk: clock source > @@ -246,10 +247,26 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q) > * > * Must not be called from within atomic context. > */ > -#ifdef CONFIG_HAVE_CLK_PREPARE > int clk_prepare(struct clk *clk); > int __must_check clk_bulk_prepare(int num_clks, > const struct clk_bulk_data *clks); > + > +/** > + * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it. > + * @clk: clock source > + * > + * Returns true if clk_prepare() implicitly enables the clock, effectively > + * making clk_enable()/clk_disable() no-ops, false otherwise. > + * > + * This is of interest mainly to the power management code where actually > + * disabling the clock also requires unpreparing it to have any material > + * effect. > + * > + * Regardless of the value returned here, the caller must always invoke > + * clk_enable() or clk_prepare_enable() and counterparts for usage counts > + * to be right. > + */ > +bool clk_is_enabled_when_prepared(struct clk *clk); > #else > static inline int clk_prepare(struct clk *clk) > { > @@ -263,6 +280,11 @@ clk_bulk_prepare(int num_clks, const struct clk_bulk_data *clks) > might_sleep(); > return 0; > } > + > +static inline bool clk_is_enabled_when_prepared(struct clk *clk) > +{ > + return false; > +} > #endif > > /** > diff --git a/include/linux/pm.h b/include/linux/pm.h > index 47aca6bac1..482313a8cc 100644 > --- a/include/linux/pm.h > +++ b/include/linux/pm.h > @@ -537,6 +537,8 @@ struct pm_subsys_data { > spinlock_t lock; > unsigned int refcount; > #ifdef CONFIG_PM_CLK > + unsigned int clock_op_might_sleep; > + struct mutex clock_mutex; > struct list_head clock_list; > #endif > #ifdef CONFIG_PM_GENERIC_DOMAINS
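For context, a consumer driver that registers such a sleep-only clock with the PM clock layer would look roughly like the sketch below. The device and con_id names ("bus", my_i2c_dev) are invented for illustration; the essential point from the patch is that pm_runtime_irq_safe() must not be called for the device, since pm_clk_suspend()/pm_clk_resume() may now sleep.

/*
 * Hypothetical consumer of an I2C-backed clock managed by the PM clock
 * layer. Device and clock names are illustrative only.
 */
#include <linux/platform_device.h>
#include <linux/pm_clock.h>
#include <linux/pm_runtime.h>

static int my_i2c_dev_probe(struct platform_device *pdev)
{
	int ret;

	/* Create the device's PM clock list and add the "bus" clock to it. */
	ret = pm_clk_create(&pdev->dev);
	if (ret)
		return ret;

	ret = pm_clk_add(&pdev->dev, "bus");
	if (ret) {
		pm_clk_destroy(&pdev->dev);
		return ret;
	}

	/*
	 * This clock can only be gated/ungated from sleepable context, so
	 * pm_runtime_irq_safe(&pdev->dev) must NOT be called here, and
	 * runtime suspend/resume of the device may not happen in atomic
	 * context.
	 */
	pm_runtime_enable(&pdev->dev);
	return 0;
}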
On Thu, Jan 21, 2021 at 8:01 PM Rafael J. Wysocki <rafael@kernel.org> wrote: > > On Thu, Jan 21, 2021 at 6:23 PM Nicolas Pitre <npitre@baylibre.com> wrote: > > > > The clock API splits its interface into sleepable ant atomic contexts: > > > > - clk_prepare/clk_unprepare for stuff that might sleep > > > > - clk_enable_clk_disable for anything that may be done in atomic context > > > > The code handling runtime PM for clocks only calls clk_disable() on > > suspend requests, and clk_enable on resume requests. This means that > > runtime PM with clock providers that only have the prepare/unprepare > > methods implemented is basically useless. > > > > Many clock implementations can't accommodate atomic contexts. This is > > often the case when communication with the clock happens through another > > subsystem like I2C or SCMI. > > > > Let's make the clock PM code useful with such clocks by safely invoking > > clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when > > such clocks are registered with the PM layer then pm_runtime_irq_safe() > > can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume() > > may be invoked in atomic context. > > > > For clocks that do implement the enable and disable methods then > > everything just works as before. > > > > Signed-off-by: Nicolas Pitre <npitre@baylibre.com> > > > > --- > > > > On Thu, 21 Jan 2021, Rafael J. Wysocki wrote: > > > > > So I'm going to drop this patch from linux-next until the issue is > > > resolved, thanks! > > > > Here's the fixed version. > > Applied instead of the v1, thanks! > > > Changes from v1: > > > > - Moved clk_is_enabled_when_prepared() declaration under > > CONFIG_HAVE_CLK_PREPARE and provided a dummy definition when that > > config option is unset. > > > > diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c > > index ced6863a16..a62fb0f9b1 100644 > > --- a/drivers/base/power/clock_ops.c > > +++ b/drivers/base/power/clock_ops.c > > @@ -23,6 +23,7 @@ > > enum pce_status { > > PCE_STATUS_NONE = 0, > > PCE_STATUS_ACQUIRED, > > + PCE_STATUS_PREPARED, > > PCE_STATUS_ENABLED, > > PCE_STATUS_ERROR, > > }; > > @@ -32,8 +33,102 @@ struct pm_clock_entry { > > char *con_id; > > struct clk *clk; > > enum pce_status status; > > + bool enabled_when_prepared; > > }; > > > > +/** > > + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock > > + * entry list. > > + * @psd: pm_subsys_data instance corresponding to the PM clock entry list > > + * and clk_op_might_sleep count to be modified. > > + * > > + * Get exclusive access before modifying the PM clock entry list and the > > + * clock_op_might_sleep count to guard against concurrent modifications. > > + * This also protects against a concurrent clock_op_might_sleep and PM clock > > + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not > > + * happen in atomic context, hence both the mutex and the spinlock must be > > + * taken here. > > + */ > > +static void pm_clk_list_lock(struct pm_subsys_data *psd) > > +{ > > + mutex_lock(&psd->clock_mutex); > > + spin_lock_irq(&psd->lock); > > +} > > + > > +/** > > + * pm_clk_list_unlock - counterpart to pm_clk_list_lock(). > > + * @psd: the same pm_subsys_data instance previously passed to > > + * pm_clk_list_lock(). > > + */ > > +static void pm_clk_list_unlock(struct pm_subsys_data *psd) Locking annotations for sparse were missing here and above, so I've added them by hand. Please double check the result in my linux-next branch (just pushed). Thanks! 
> > +{ > > + spin_unlock_irq(&psd->lock); > > + mutex_unlock(&psd->clock_mutex); > > +}
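The annotations mentioned here are presumably sparse's __acquires()/__releases() context markers. A plausible form for the two list-locking helpers is sketched below; the version actually pushed to linux-next may differ.

/* Possible sparse context annotations for the list-locking helpers. */
static void pm_clk_list_lock(struct pm_subsys_data *psd)
	__acquires(&psd->lock)
{
	mutex_lock(&psd->clock_mutex);
	spin_lock_irq(&psd->lock);
}

static void pm_clk_list_unlock(struct pm_subsys_data *psd)
	__releases(&psd->lock)
{
	spin_unlock_irq(&psd->lock);
	mutex_unlock(&psd->clock_mutex);
}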
On Fri, 22 Jan 2021 at 20:39, Rafael J. Wysocki <rafael@kernel.org> wrote: > > On Thu, Jan 21, 2021 at 8:01 PM Rafael J. Wysocki <rafael@kernel.org> wrote: > > > > On Thu, Jan 21, 2021 at 6:23 PM Nicolas Pitre <npitre@baylibre.com> wrote: > > > > > > The clock API splits its interface into sleepable ant atomic contexts: > > > > > > - clk_prepare/clk_unprepare for stuff that might sleep > > > > > > - clk_enable_clk_disable for anything that may be done in atomic context > > > > > > The code handling runtime PM for clocks only calls clk_disable() on > > > suspend requests, and clk_enable on resume requests. This means that > > > runtime PM with clock providers that only have the prepare/unprepare > > > methods implemented is basically useless. > > > > > > Many clock implementations can't accommodate atomic contexts. This is > > > often the case when communication with the clock happens through another > > > subsystem like I2C or SCMI. > > > > > > Let's make the clock PM code useful with such clocks by safely invoking > > > clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when > > > such clocks are registered with the PM layer then pm_runtime_irq_safe() > > > can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume() > > > may be invoked in atomic context. > > > > > > For clocks that do implement the enable and disable methods then > > > everything just works as before. > > > > > > Signed-off-by: Nicolas Pitre <npitre@baylibre.com> > > > > > > --- > > > > > > On Thu, 21 Jan 2021, Rafael J. Wysocki wrote: > > > > > > > So I'm going to drop this patch from linux-next until the issue is > > > > resolved, thanks! > > > > > > Here's the fixed version. > > > > Applied instead of the v1, thanks! > > > > > Changes from v1: > > > > > > - Moved clk_is_enabled_when_prepared() declaration under > > > CONFIG_HAVE_CLK_PREPARE and provided a dummy definition when that > > > config option is unset. > > > > > > diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c > > > index ced6863a16..a62fb0f9b1 100644 > > > --- a/drivers/base/power/clock_ops.c > > > +++ b/drivers/base/power/clock_ops.c > > > @@ -23,6 +23,7 @@ > > > enum pce_status { > > > PCE_STATUS_NONE = 0, > > > PCE_STATUS_ACQUIRED, > > > + PCE_STATUS_PREPARED, > > > PCE_STATUS_ENABLED, > > > PCE_STATUS_ERROR, > > > }; > > > @@ -32,8 +33,102 @@ struct pm_clock_entry { > > > char *con_id; > > > struct clk *clk; > > > enum pce_status status; > > > + bool enabled_when_prepared; > > > }; > > > > > > +/** > > > + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock > > > + * entry list. > > > + * @psd: pm_subsys_data instance corresponding to the PM clock entry list > > > + * and clk_op_might_sleep count to be modified. > > > + * > > > + * Get exclusive access before modifying the PM clock entry list and the > > > + * clock_op_might_sleep count to guard against concurrent modifications. > > > + * This also protects against a concurrent clock_op_might_sleep and PM clock > > > + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not > > > + * happen in atomic context, hence both the mutex and the spinlock must be > > > + * taken here. > > > + */ > > > +static void pm_clk_list_lock(struct pm_subsys_data *psd) > > > +{ > > > + mutex_lock(&psd->clock_mutex); > > > + spin_lock_irq(&psd->lock); > > > +} > > > + > > > +/** > > > + * pm_clk_list_unlock - counterpart to pm_clk_list_lock(). 
> > > + * @psd: the same pm_subsys_data instance previously passed to > > > + * pm_clk_list_lock(). > > > + */ > > > +static void pm_clk_list_unlock(struct pm_subsys_data *psd) > > Locking annotations for sparse were missing here and above, so I've > added them by hand. > > Please double check the result in my linux-next branch (just pushed). May i request to add Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> - Naresh
On Fri, 22 Jan 2021, Rafael J. Wysocki wrote: > > > +/** > > > + * pm_clk_list_unlock - counterpart to pm_clk_list_lock(). > > > + * @psd: the same pm_subsys_data instance previously passed to > > > + * pm_clk_list_lock(). > > > + */ > > > +static void pm_clk_list_unlock(struct pm_subsys_data *psd) > > Locking annotations for sparse were missing here and above, so I've > added them by hand. Thanks. > Please double check the result in my linux-next branch (just pushed). There are still the following warnings: drivers/base/power/clock_ops.c:52:13: warning: context imbalance in 'pm_clk_list_lock' - wrong count at exit drivers/base/power/clock_ops.c:64:13: warning: context imbalance in 'pm_clk_list_unlock' - wrong count at exit I guess this can be silenced (still need to investigate how those annotations work). But I'm more worried about these: drivers/base/power/clock_ops.c:86:12: warning: context imbalance in 'pm_clk_op_lock' - different lock contexts for basic block drivers/base/power/clock_ops.c:131:39: warning: context imbalance in 'pm_clk_op_unlock' - unexpected unlock Those are special locking helpers indeed and I don't know if that can be dealt with. Nicolas
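The remaining warnings come from sparse's purely static context tracking: it cannot reason about a function that acquires or releases one of two different locks depending on a run-time value, which is exactly what pm_clk_op_lock()/pm_clk_op_unlock() do. A minimal, made-up illustration of the pattern it trips over (not kernel code, names invented):

#include <linux/mutex.h>
#include <linux/spinlock.h>

/*
 * Standalone illustration of why sparse reports "different lock contexts
 * for basic block": the lock that is taken depends on a run-time
 * condition, which static context tracking cannot represent.
 */
struct dual_locked {
	bool might_sleep;
	struct mutex m;
	spinlock_t s;
};

static void dual_lock(struct dual_locked *d)
{
	if (d->might_sleep)
		mutex_lock(&d->m);	/* sleepable path */
	else
		spin_lock(&d->s);	/* atomic path */
}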
On Fri, Jan 22, 2021 at 4:48 PM Naresh Kamboju <naresh.kamboju@linaro.org> wrote: > > On Fri, 22 Jan 2021 at 20:39, Rafael J. Wysocki <rafael@kernel.org> wrote: > > > > On Thu, Jan 21, 2021 at 8:01 PM Rafael J. Wysocki <rafael@kernel.org> wrote: > > > > > > On Thu, Jan 21, 2021 at 6:23 PM Nicolas Pitre <npitre@baylibre.com> wrote: > > > > > > > > The clock API splits its interface into sleepable ant atomic contexts: > > > > > > > > - clk_prepare/clk_unprepare for stuff that might sleep > > > > > > > > - clk_enable_clk_disable for anything that may be done in atomic context > > > > > > > > The code handling runtime PM for clocks only calls clk_disable() on > > > > suspend requests, and clk_enable on resume requests. This means that > > > > runtime PM with clock providers that only have the prepare/unprepare > > > > methods implemented is basically useless. > > > > > > > > Many clock implementations can't accommodate atomic contexts. This is > > > > often the case when communication with the clock happens through another > > > > subsystem like I2C or SCMI. > > > > > > > > Let's make the clock PM code useful with such clocks by safely invoking > > > > clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when > > > > such clocks are registered with the PM layer then pm_runtime_irq_safe() > > > > can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume() > > > > may be invoked in atomic context. > > > > > > > > For clocks that do implement the enable and disable methods then > > > > everything just works as before. > > > > > > > > Signed-off-by: Nicolas Pitre <npitre@baylibre.com> > > > > > > > > --- > > > > > > > > On Thu, 21 Jan 2021, Rafael J. Wysocki wrote: > > > > > > > > > So I'm going to drop this patch from linux-next until the issue is > > > > > resolved, thanks! > > > > > > > > Here's the fixed version. > > > > > > Applied instead of the v1, thanks! > > > > > > > Changes from v1: > > > > > > > > - Moved clk_is_enabled_when_prepared() declaration under > > > > CONFIG_HAVE_CLK_PREPARE and provided a dummy definition when that > > > > config option is unset. > > > > > > > > diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c > > > > index ced6863a16..a62fb0f9b1 100644 > > > > --- a/drivers/base/power/clock_ops.c > > > > +++ b/drivers/base/power/clock_ops.c > > > > @@ -23,6 +23,7 @@ > > > > enum pce_status { > > > > PCE_STATUS_NONE = 0, > > > > PCE_STATUS_ACQUIRED, > > > > + PCE_STATUS_PREPARED, > > > > PCE_STATUS_ENABLED, > > > > PCE_STATUS_ERROR, > > > > }; > > > > @@ -32,8 +33,102 @@ struct pm_clock_entry { > > > > char *con_id; > > > > struct clk *clk; > > > > enum pce_status status; > > > > + bool enabled_when_prepared; > > > > }; > > > > > > > > +/** > > > > + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock > > > > + * entry list. > > > > + * @psd: pm_subsys_data instance corresponding to the PM clock entry list > > > > + * and clk_op_might_sleep count to be modified. > > > > + * > > > > + * Get exclusive access before modifying the PM clock entry list and the > > > > + * clock_op_might_sleep count to guard against concurrent modifications. > > > > + * This also protects against a concurrent clock_op_might_sleep and PM clock > > > > + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not > > > > + * happen in atomic context, hence both the mutex and the spinlock must be > > > > + * taken here. 
> > > > + */ > > > > +static void pm_clk_list_lock(struct pm_subsys_data *psd) > > > > +{ > > > > + mutex_lock(&psd->clock_mutex); > > > > + spin_lock_irq(&psd->lock); > > > > +} > > > > + > > > > +/** > > > > + * pm_clk_list_unlock - counterpart to pm_clk_list_lock(). > > > > + * @psd: the same pm_subsys_data instance previously passed to > > > > + * pm_clk_list_lock(). > > > > + */ > > > > +static void pm_clk_list_unlock(struct pm_subsys_data *psd) > > > > Locking annotations for sparse were missing here and above, so I've > > added them by hand. > > > > Please double check the result in my linux-next branch (just pushed). > > May i request to add Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> If this had been a patch fixing a problem reported by you, there would have been a reason to add a Reported-by, In this case, it is just a new version of a patch taking your testing feedback into account. I can add a Tested-by for you to it if desired, though.
On Fri, 22 Jan 2021, Rafael J. Wysocki wrote: > On Fri, Jan 22, 2021 at 4:48 PM Naresh Kamboju > <naresh.kamboju@linaro.org> wrote: > > > > On Fri, 22 Jan 2021 at 20:39, Rafael J. Wysocki <rafael@kernel.org> wrote: > > > > > > On Thu, Jan 21, 2021 at 8:01 PM Rafael J. Wysocki <rafael@kernel.org> wrote: > > > > > > > > On Thu, Jan 21, 2021 at 6:23 PM Nicolas Pitre <npitre@baylibre.com> wrote: > > > > > > > > > > The clock API splits its interface into sleepable ant atomic contexts: > > > > > > > > > > - clk_prepare/clk_unprepare for stuff that might sleep > > > > > > > > > > - clk_enable_clk_disable for anything that may be done in atomic context > > > > > > > > > > The code handling runtime PM for clocks only calls clk_disable() on > > > > > suspend requests, and clk_enable on resume requests. This means that > > > > > runtime PM with clock providers that only have the prepare/unprepare > > > > > methods implemented is basically useless. > > > > > > > > > > Many clock implementations can't accommodate atomic contexts. This is > > > > > often the case when communication with the clock happens through another > > > > > subsystem like I2C or SCMI. > > > > > > > > > > Let's make the clock PM code useful with such clocks by safely invoking > > > > > clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when > > > > > such clocks are registered with the PM layer then pm_runtime_irq_safe() > > > > > can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume() > > > > > may be invoked in atomic context. > > > > > > > > > > For clocks that do implement the enable and disable methods then > > > > > everything just works as before. > > > > > > > > > > Signed-off-by: Nicolas Pitre <npitre@baylibre.com> > > > > > > > > > > --- > > > > > > > > > > On Thu, 21 Jan 2021, Rafael J. Wysocki wrote: > > > > > > > > > > > So I'm going to drop this patch from linux-next until the issue is > > > > > > resolved, thanks! > > > > > > > > > > Here's the fixed version. > > > > > > > > Applied instead of the v1, thanks! > > > > > > > > > Changes from v1: > > > > > > > > > > - Moved clk_is_enabled_when_prepared() declaration under > > > > > CONFIG_HAVE_CLK_PREPARE and provided a dummy definition when that > > > > > config option is unset. > > > > > > > > > > diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c > > > > > index ced6863a16..a62fb0f9b1 100644 > > > > > --- a/drivers/base/power/clock_ops.c > > > > > +++ b/drivers/base/power/clock_ops.c > > > > > @@ -23,6 +23,7 @@ > > > > > enum pce_status { > > > > > PCE_STATUS_NONE = 0, > > > > > PCE_STATUS_ACQUIRED, > > > > > + PCE_STATUS_PREPARED, > > > > > PCE_STATUS_ENABLED, > > > > > PCE_STATUS_ERROR, > > > > > }; > > > > > @@ -32,8 +33,102 @@ struct pm_clock_entry { > > > > > char *con_id; > > > > > struct clk *clk; > > > > > enum pce_status status; > > > > > + bool enabled_when_prepared; > > > > > }; > > > > > > > > > > +/** > > > > > + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock > > > > > + * entry list. > > > > > + * @psd: pm_subsys_data instance corresponding to the PM clock entry list > > > > > + * and clk_op_might_sleep count to be modified. > > > > > + * > > > > > + * Get exclusive access before modifying the PM clock entry list and the > > > > > + * clock_op_might_sleep count to guard against concurrent modifications. 
> > > > > + * This also protects against a concurrent clock_op_might_sleep and PM clock > > > > > + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not > > > > > + * happen in atomic context, hence both the mutex and the spinlock must be > > > > > + * taken here. > > > > > + */ > > > > > +static void pm_clk_list_lock(struct pm_subsys_data *psd) > > > > > +{ > > > > > + mutex_lock(&psd->clock_mutex); > > > > > + spin_lock_irq(&psd->lock); > > > > > +} > > > > > + > > > > > +/** > > > > > + * pm_clk_list_unlock - counterpart to pm_clk_list_lock(). > > > > > + * @psd: the same pm_subsys_data instance previously passed to > > > > > + * pm_clk_list_lock(). > > > > > + */ > > > > > +static void pm_clk_list_unlock(struct pm_subsys_data *psd) > > > > > > Locking annotations for sparse were missing here and above, so I've > > > added them by hand. > > > > > > Please double check the result in my linux-next branch (just pushed). > > > > May i request to add Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> > > If this had been a patch fixing a problem reported by you, there would > have been a reason to add a Reported-by, > > In this case, it is just a new version of a patch taking your testing > feedback into account. > > I can add a Tested-by for you to it if desired, though. It is probably fair to mention that Naresh reported the issue too. My bad, I should have added the tag myself in v2. Nicolas
On Fri, 22 Jan 2021, Nicolas Pitre wrote: > On Fri, 22 Jan 2021, Rafael J. Wysocki wrote: > > > On Fri, Jan 22, 2021 at 4:48 PM Naresh Kamboju > > <naresh.kamboju@linaro.org> wrote: > > > > > > On Fri, 22 Jan 2021 at 20:39, Rafael J. Wysocki <rafael@kernel.org> wrote: > > > > > > > > On Thu, Jan 21, 2021 at 8:01 PM Rafael J. Wysocki <rafael@kernel.org> wrote: > > > > > > > > > > On Thu, Jan 21, 2021 at 6:23 PM Nicolas Pitre <npitre@baylibre.com> wrote: > > > > > > > > > > > > The clock API splits its interface into sleepable ant atomic contexts: > > > > > > > > > > > > - clk_prepare/clk_unprepare for stuff that might sleep > > > > > > > > > > > > - clk_enable_clk_disable for anything that may be done in atomic context > > > > > > > > > > > > The code handling runtime PM for clocks only calls clk_disable() on > > > > > > suspend requests, and clk_enable on resume requests. This means that > > > > > > runtime PM with clock providers that only have the prepare/unprepare > > > > > > methods implemented is basically useless. > > > > > > > > > > > > Many clock implementations can't accommodate atomic contexts. This is > > > > > > often the case when communication with the clock happens through another > > > > > > subsystem like I2C or SCMI. > > > > > > > > > > > > Let's make the clock PM code useful with such clocks by safely invoking > > > > > > clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when > > > > > > such clocks are registered with the PM layer then pm_runtime_irq_safe() > > > > > > can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume() > > > > > > may be invoked in atomic context. > > > > > > > > > > > > For clocks that do implement the enable and disable methods then > > > > > > everything just works as before. > > > > > > > > > > > > Signed-off-by: Nicolas Pitre <npitre@baylibre.com> > > > > > > > > > > > > --- > > > > > > > > > > > > On Thu, 21 Jan 2021, Rafael J. Wysocki wrote: > > > > > > > > > > > > > So I'm going to drop this patch from linux-next until the issue is > > > > > > > resolved, thanks! > > > > > > > > > > > > Here's the fixed version. > > > > > > > > > > Applied instead of the v1, thanks! > > > > > > > > > > > Changes from v1: > > > > > > > > > > > > - Moved clk_is_enabled_when_prepared() declaration under > > > > > > CONFIG_HAVE_CLK_PREPARE and provided a dummy definition when that > > > > > > config option is unset. > > > > > > > > > > > > diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c > > > > > > index ced6863a16..a62fb0f9b1 100644 > > > > > > --- a/drivers/base/power/clock_ops.c > > > > > > +++ b/drivers/base/power/clock_ops.c > > > > > > @@ -23,6 +23,7 @@ > > > > > > enum pce_status { > > > > > > PCE_STATUS_NONE = 0, > > > > > > PCE_STATUS_ACQUIRED, > > > > > > + PCE_STATUS_PREPARED, > > > > > > PCE_STATUS_ENABLED, > > > > > > PCE_STATUS_ERROR, > > > > > > }; > > > > > > @@ -32,8 +33,102 @@ struct pm_clock_entry { > > > > > > char *con_id; > > > > > > struct clk *clk; > > > > > > enum pce_status status; > > > > > > + bool enabled_when_prepared; > > > > > > }; > > > > > > > > > > > > +/** > > > > > > + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock > > > > > > + * entry list. > > > > > > + * @psd: pm_subsys_data instance corresponding to the PM clock entry list > > > > > > + * and clk_op_might_sleep count to be modified. 
> > > > > > + * > > > > > > + * Get exclusive access before modifying the PM clock entry list and the > > > > > > + * clock_op_might_sleep count to guard against concurrent modifications. > > > > > > + * This also protects against a concurrent clock_op_might_sleep and PM clock > > > > > > + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not > > > > > > + * happen in atomic context, hence both the mutex and the spinlock must be > > > > > > + * taken here. > > > > > > + */ > > > > > > +static void pm_clk_list_lock(struct pm_subsys_data *psd) > > > > > > +{ > > > > > > + mutex_lock(&psd->clock_mutex); > > > > > > + spin_lock_irq(&psd->lock); > > > > > > +} > > > > > > + > > > > > > +/** > > > > > > + * pm_clk_list_unlock - counterpart to pm_clk_list_lock(). > > > > > > + * @psd: the same pm_subsys_data instance previously passed to > > > > > > + * pm_clk_list_lock(). > > > > > > + */ > > > > > > +static void pm_clk_list_unlock(struct pm_subsys_data *psd) > > > > > > > > Locking annotations for sparse were missing here and above, so I've > > > > added them by hand. > > > > > > > > Please double check the result in my linux-next branch (just pushed). > > > > > > May i request to add Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> > > > > If this had been a patch fixing a problem reported by you, there would > > have been a reason to add a Reported-by, > > > > In this case, it is just a new version of a patch taking your testing > > feedback into account. > > > > I can add a Tested-by for you to it if desired, though. > > It is probably fair to mention that Naresh reported the issue too. > My bad, I should have added the tag myself in v2. That being said, I agree that this isn't a fix but a whole new patch, so I agree that Tested-by is probably more appropriate. Nicolas
On Fri, Jan 22, 2021 at 4:59 PM Nicolas Pitre <nico@fluxnic.net> wrote: > > On Fri, 22 Jan 2021, Rafael J. Wysocki wrote: > > > On Fri, Jan 22, 2021 at 4:48 PM Naresh Kamboju > > <naresh.kamboju@linaro.org> wrote: > > > > > > On Fri, 22 Jan 2021 at 20:39, Rafael J. Wysocki <rafael@kernel.org> wrote: > > > > > > > > On Thu, Jan 21, 2021 at 8:01 PM Rafael J. Wysocki <rafael@kernel.org> wrote: > > > > > > > > > > On Thu, Jan 21, 2021 at 6:23 PM Nicolas Pitre <npitre@baylibre.com> wrote: > > > > > > > > > > > > The clock API splits its interface into sleepable ant atomic contexts: > > > > > > > > > > > > - clk_prepare/clk_unprepare for stuff that might sleep > > > > > > > > > > > > - clk_enable_clk_disable for anything that may be done in atomic context > > > > > > > > > > > > The code handling runtime PM for clocks only calls clk_disable() on > > > > > > suspend requests, and clk_enable on resume requests. This means that > > > > > > runtime PM with clock providers that only have the prepare/unprepare > > > > > > methods implemented is basically useless. > > > > > > > > > > > > Many clock implementations can't accommodate atomic contexts. This is > > > > > > often the case when communication with the clock happens through another > > > > > > subsystem like I2C or SCMI. > > > > > > > > > > > > Let's make the clock PM code useful with such clocks by safely invoking > > > > > > clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when > > > > > > such clocks are registered with the PM layer then pm_runtime_irq_safe() > > > > > > can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume() > > > > > > may be invoked in atomic context. > > > > > > > > > > > > For clocks that do implement the enable and disable methods then > > > > > > everything just works as before. > > > > > > > > > > > > Signed-off-by: Nicolas Pitre <npitre@baylibre.com> > > > > > > > > > > > > --- > > > > > > > > > > > > On Thu, 21 Jan 2021, Rafael J. Wysocki wrote: > > > > > > > > > > > > > So I'm going to drop this patch from linux-next until the issue is > > > > > > > resolved, thanks! > > > > > > > > > > > > Here's the fixed version. > > > > > > > > > > Applied instead of the v1, thanks! > > > > > > > > > > > Changes from v1: > > > > > > > > > > > > - Moved clk_is_enabled_when_prepared() declaration under > > > > > > CONFIG_HAVE_CLK_PREPARE and provided a dummy definition when that > > > > > > config option is unset. > > > > > > > > > > > > diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c > > > > > > index ced6863a16..a62fb0f9b1 100644 > > > > > > --- a/drivers/base/power/clock_ops.c > > > > > > +++ b/drivers/base/power/clock_ops.c > > > > > > @@ -23,6 +23,7 @@ > > > > > > enum pce_status { > > > > > > PCE_STATUS_NONE = 0, > > > > > > PCE_STATUS_ACQUIRED, > > > > > > + PCE_STATUS_PREPARED, > > > > > > PCE_STATUS_ENABLED, > > > > > > PCE_STATUS_ERROR, > > > > > > }; > > > > > > @@ -32,8 +33,102 @@ struct pm_clock_entry { > > > > > > char *con_id; > > > > > > struct clk *clk; > > > > > > enum pce_status status; > > > > > > + bool enabled_when_prepared; > > > > > > }; > > > > > > > > > > > > +/** > > > > > > + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock > > > > > > + * entry list. > > > > > > + * @psd: pm_subsys_data instance corresponding to the PM clock entry list > > > > > > + * and clk_op_might_sleep count to be modified. 
> > > > > > + * > > > > > > + * Get exclusive access before modifying the PM clock entry list and the > > > > > > + * clock_op_might_sleep count to guard against concurrent modifications. > > > > > > + * This also protects against a concurrent clock_op_might_sleep and PM clock > > > > > > + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not > > > > > > + * happen in atomic context, hence both the mutex and the spinlock must be > > > > > > + * taken here. > > > > > > + */ > > > > > > +static void pm_clk_list_lock(struct pm_subsys_data *psd) > > > > > > +{ > > > > > > + mutex_lock(&psd->clock_mutex); > > > > > > + spin_lock_irq(&psd->lock); > > > > > > +} > > > > > > + > > > > > > +/** > > > > > > + * pm_clk_list_unlock - counterpart to pm_clk_list_lock(). > > > > > > + * @psd: the same pm_subsys_data instance previously passed to > > > > > > + * pm_clk_list_lock(). > > > > > > + */ > > > > > > +static void pm_clk_list_unlock(struct pm_subsys_data *psd) > > > > > > > > Locking annotations for sparse were missing here and above, so I've > > > > added them by hand. > > > > > > > > Please double check the result in my linux-next branch (just pushed). > > > > > > May i request to add Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> > > > > If this had been a patch fixing a problem reported by you, there would > > have been a reason to add a Reported-by, > > > > In this case, it is just a new version of a patch taking your testing > > feedback into account. > > > > I can add a Tested-by for you to it if desired, though. > > It is probably fair to mention that Naresh reported the issue too. > My bad, I should have added the tag myself in v2. OK I'm assuming that there will be a v3 because of the sparse warnings, so please add the tags as needed when posting it.
diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c index ced6863a16..a62fb0f9b1 100644 --- a/drivers/base/power/clock_ops.c +++ b/drivers/base/power/clock_ops.c @@ -23,6 +23,7 @@ enum pce_status { PCE_STATUS_NONE = 0, PCE_STATUS_ACQUIRED, + PCE_STATUS_PREPARED, PCE_STATUS_ENABLED, PCE_STATUS_ERROR, }; @@ -32,8 +33,102 @@ struct pm_clock_entry { char *con_id; struct clk *clk; enum pce_status status; + bool enabled_when_prepared; }; +/** + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock + * entry list. + * @psd: pm_subsys_data instance corresponding to the PM clock entry list + * and clk_op_might_sleep count to be modified. + * + * Get exclusive access before modifying the PM clock entry list and the + * clock_op_might_sleep count to guard against concurrent modifications. + * This also protects against a concurrent clock_op_might_sleep and PM clock + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not + * happen in atomic context, hence both the mutex and the spinlock must be + * taken here. + */ +static void pm_clk_list_lock(struct pm_subsys_data *psd) +{ + mutex_lock(&psd->clock_mutex); + spin_lock_irq(&psd->lock); +} + +/** + * pm_clk_list_unlock - counterpart to pm_clk_list_lock(). + * @psd: the same pm_subsys_data instance previously passed to + * pm_clk_list_lock(). + */ +static void pm_clk_list_unlock(struct pm_subsys_data *psd) +{ + spin_unlock_irq(&psd->lock); + mutex_unlock(&psd->clock_mutex); +} + +/** + * pm_clk_op_lock - ensure exclusive access for performing clock operations. + * @psd: pm_subsys_data instance corresponding to the PM clock entry list + * and clk_op_might_sleep count being used. + * @flags: stored irq flags. + * @fn: string for the caller function's name. + * + * This is used by pm_clk_suspend() and pm_clk_resume() to guard + * against concurrent modifications to the clock entry list and the + * clock_op_might_sleep count. If clock_op_might_sleep is != 0 then + * only the mutex can be locked and those functions can only be used in + * non atomic context. If clock_op_might_sleep == 0 then these functions + * may be used in any context and only the spinlock can be locked. + * Returns -EINVAL if called in atomic context when clock ops might sleep. + */ +static int pm_clk_op_lock(struct pm_subsys_data *psd, unsigned long *flags, + const char *fn) +{ + bool atomic_context = in_atomic() || irqs_disabled(); + +try_again: + spin_lock_irqsave(&psd->lock, *flags); + if (!psd->clock_op_might_sleep) + return 0; + + /* bail out if in atomic context */ + if (atomic_context) { + pr_err("%s: atomic context with clock_ops_might_sleep = %d", + fn, psd->clock_op_might_sleep); + spin_unlock_irqrestore(&psd->lock, *flags); + might_sleep(); + return -EPERM; + } + + /* we must switch to the mutex */ + spin_unlock_irqrestore(&psd->lock, *flags); + mutex_lock(&psd->clock_mutex); + + /* + * There was a possibility for psd->clock_op_might_sleep + * to become 0 above. Keep the mutex only if not the case. + */ + if (likely(psd->clock_op_might_sleep)) + return 0; + + mutex_unlock(&psd->clock_mutex); + goto try_again; +} + +/** + * pm_clk_op_unlock - counterpart to pm_clk_op_lock(). + * @psd: the same pm_subsys_data instance previously passed to + * pm_clk_op_lock(). + * @flags: irq flags provided by pm_clk_op_lock(). 
+ */ +static void pm_clk_op_unlock(struct pm_subsys_data *psd, unsigned long *flags) +{ + if (psd->clock_op_might_sleep) + mutex_unlock(&psd->clock_mutex); + else + spin_unlock_irqrestore(&psd->lock, *flags); +} + /** * pm_clk_enable - Enable a clock, reporting any errors * @dev: The device for the given clock @@ -43,14 +138,21 @@ static inline void __pm_clk_enable(struct device *dev, struct pm_clock_entry *ce { int ret; - if (ce->status < PCE_STATUS_ERROR) { + switch (ce->status) { + case PCE_STATUS_ACQUIRED: + ret = clk_prepare_enable(ce->clk); + break; + case PCE_STATUS_PREPARED: ret = clk_enable(ce->clk); - if (!ret) - ce->status = PCE_STATUS_ENABLED; - else - dev_err(dev, "%s: failed to enable clk %p, error %d\n", - __func__, ce->clk, ret); + break; + default: + return; } + if (!ret) + ce->status = PCE_STATUS_ENABLED; + else + dev_err(dev, "%s: failed to enable clk %p, error %d\n", + __func__, ce->clk, ret); } /** @@ -64,17 +166,20 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce) ce->clk = clk_get(dev, ce->con_id); if (IS_ERR(ce->clk)) { ce->status = PCE_STATUS_ERROR; + return; + } else if (clk_is_enabled_when_prepared(ce->clk)) { + /* we defer preparing the clock in that case */ + ce->status = PCE_STATUS_ACQUIRED; + ce->enabled_when_prepared = true; + } else if (clk_prepare(ce->clk)) { + ce->status = PCE_STATUS_ERROR; + dev_err(dev, "clk_prepare() failed\n"); + return; } else { - if (clk_prepare(ce->clk)) { - ce->status = PCE_STATUS_ERROR; - dev_err(dev, "clk_prepare() failed\n"); - } else { - ce->status = PCE_STATUS_ACQUIRED; - dev_dbg(dev, - "Clock %pC con_id %s managed by runtime PM.\n", - ce->clk, ce->con_id); - } + ce->status = PCE_STATUS_PREPARED; } + dev_dbg(dev, "Clock %pC con_id %s managed by runtime PM.\n", + ce->clk, ce->con_id); } static int __pm_clk_add(struct device *dev, const char *con_id, @@ -106,9 +211,11 @@ static int __pm_clk_add(struct device *dev, const char *con_id, pm_clk_acquire(dev, ce); - spin_lock_irq(&psd->lock); + pm_clk_list_lock(psd); list_add_tail(&ce->node, &psd->clock_list); - spin_unlock_irq(&psd->lock); + if (ce->enabled_when_prepared) + psd->clock_op_might_sleep++; + pm_clk_list_unlock(psd); return 0; } @@ -239,14 +346,20 @@ static void __pm_clk_remove(struct pm_clock_entry *ce) if (!ce) return; - if (ce->status < PCE_STATUS_ERROR) { - if (ce->status == PCE_STATUS_ENABLED) - clk_disable(ce->clk); - - if (ce->status >= PCE_STATUS_ACQUIRED) { - clk_unprepare(ce->clk); + switch (ce->status) { + case PCE_STATUS_ENABLED: + clk_disable(ce->clk); + fallthrough; + case PCE_STATUS_PREPARED: + clk_unprepare(ce->clk); + fallthrough; + case PCE_STATUS_ACQUIRED: + case PCE_STATUS_ERROR: + if (!IS_ERR(ce->clk)) clk_put(ce->clk); - } + break; + default: + break; } kfree(ce->con_id); @@ -269,7 +382,7 @@ void pm_clk_remove(struct device *dev, const char *con_id) if (!psd) return; - spin_lock_irq(&psd->lock); + pm_clk_list_lock(psd); list_for_each_entry(ce, &psd->clock_list, node) { if (!con_id && !ce->con_id) @@ -280,12 +393,14 @@ void pm_clk_remove(struct device *dev, const char *con_id) goto remove; } - spin_unlock_irq(&psd->lock); + pm_clk_list_unlock(psd); return; remove: list_del(&ce->node); - spin_unlock_irq(&psd->lock); + if (ce->enabled_when_prepared) + psd->clock_op_might_sleep--; + pm_clk_list_unlock(psd); __pm_clk_remove(ce); } @@ -307,19 +422,21 @@ void pm_clk_remove_clk(struct device *dev, struct clk *clk) if (!psd || !clk) return; - spin_lock_irq(&psd->lock); + pm_clk_list_lock(psd); list_for_each_entry(ce, 
&psd->clock_list, node) { if (clk == ce->clk) goto remove; } - spin_unlock_irq(&psd->lock); + pm_clk_list_unlock(psd); return; remove: list_del(&ce->node); - spin_unlock_irq(&psd->lock); + if (ce->enabled_when_prepared) + psd->clock_op_might_sleep--; + pm_clk_list_unlock(psd); __pm_clk_remove(ce); } @@ -330,13 +447,16 @@ EXPORT_SYMBOL_GPL(pm_clk_remove_clk); * @dev: Device to initialize the list of PM clocks for. * * Initialize the lock and clock_list members of the device's pm_subsys_data - * object. + * object, set the count of clocks that might sleep to 0. */ void pm_clk_init(struct device *dev) { struct pm_subsys_data *psd = dev_to_psd(dev); - if (psd) + if (psd) { INIT_LIST_HEAD(&psd->clock_list); + mutex_init(&psd->clock_mutex); + psd->clock_op_might_sleep = 0; + } } EXPORT_SYMBOL_GPL(pm_clk_init); @@ -372,12 +492,13 @@ void pm_clk_destroy(struct device *dev) INIT_LIST_HEAD(&list); - spin_lock_irq(&psd->lock); + pm_clk_list_lock(psd); list_for_each_entry_safe_reverse(ce, c, &psd->clock_list, node) list_move(&ce->node, &list); + psd->clock_op_might_sleep = 0; - spin_unlock_irq(&psd->lock); + pm_clk_list_unlock(psd); dev_pm_put_subsys_data(dev); @@ -397,23 +518,30 @@ int pm_clk_suspend(struct device *dev) struct pm_subsys_data *psd = dev_to_psd(dev); struct pm_clock_entry *ce; unsigned long flags; + int ret; dev_dbg(dev, "%s()\n", __func__); if (!psd) return 0; - spin_lock_irqsave(&psd->lock, flags); + ret = pm_clk_op_lock(psd, &flags, __func__); + if (ret) + return ret; list_for_each_entry_reverse(ce, &psd->clock_list, node) { - if (ce->status < PCE_STATUS_ERROR) { - if (ce->status == PCE_STATUS_ENABLED) + if (ce->status == PCE_STATUS_ENABLED) { + if (ce->enabled_when_prepared) { + clk_disable_unprepare(ce->clk); + ce->status = PCE_STATUS_ACQUIRED; + } else { clk_disable(ce->clk); - ce->status = PCE_STATUS_ACQUIRED; + ce->status = PCE_STATUS_PREPARED; + } } } - spin_unlock_irqrestore(&psd->lock, flags); + pm_clk_op_unlock(psd, &flags); return 0; } @@ -428,18 +556,21 @@ int pm_clk_resume(struct device *dev) struct pm_subsys_data *psd = dev_to_psd(dev); struct pm_clock_entry *ce; unsigned long flags; + int ret; dev_dbg(dev, "%s()\n", __func__); if (!psd) return 0; - spin_lock_irqsave(&psd->lock, flags); + ret = pm_clk_op_lock(psd, &flags, __func__); + if (ret) + return ret; list_for_each_entry(ce, &psd->clock_list, node) __pm_clk_enable(dev, ce); - spin_unlock_irqrestore(&psd->lock, flags); + pm_clk_op_unlock(psd, &flags); return 0; } diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c index 8c1d04db99..3d751ae5bc 100644 --- a/drivers/clk/clk.c +++ b/drivers/clk/clk.c @@ -1164,6 +1164,27 @@ int clk_enable(struct clk *clk) } EXPORT_SYMBOL_GPL(clk_enable); +/** + * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it. + * @clk: clock source + * + * Returns true if clk_prepare() implicitly enables the clock, effectively + * making clk_enable()/clk_disable() no-ops, false otherwise. + * + * This is of interest mainly to power management code where actually + * disabling the clock also requires unpreparing it to have any material + * effect. + * + * Regardless of the value returned here, the caller must always invoke + * clk_enable() or clk_prepare_enable() and counterparts for usage counts + * to be right. 
+ */ +bool clk_is_enabled_when_prepared(struct clk *clk) +{ + return clk && !(clk->core->ops->enable && clk->core->ops->disable); +} +EXPORT_SYMBOL_GPL(clk_is_enabled_when_prepared); + static int clk_core_prepare_enable(struct clk_core *core) { int ret; diff --git a/include/linux/clk.h b/include/linux/clk.h index 31ff1bf1b7..a4a86aa8b1 100644 --- a/include/linux/clk.h +++ b/include/linux/clk.h @@ -238,6 +238,7 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q) #endif +#ifdef CONFIG_HAVE_CLK_PREPARE /** * clk_prepare - prepare a clock source * @clk: clock source @@ -246,10 +247,26 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q) * * Must not be called from within atomic context. */ -#ifdef CONFIG_HAVE_CLK_PREPARE int clk_prepare(struct clk *clk); int __must_check clk_bulk_prepare(int num_clks, const struct clk_bulk_data *clks); + +/** + * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it. + * @clk: clock source + * + * Returns true if clk_prepare() implicitly enables the clock, effectively + * making clk_enable()/clk_disable() no-ops, false otherwise. + * + * This is of interest mainly to the power management code where actually + * disabling the clock also requires unpreparing it to have any material + * effect. + * + * Regardless of the value returned here, the caller must always invoke + * clk_enable() or clk_prepare_enable() and counterparts for usage counts + * to be right. + */ +bool clk_is_enabled_when_prepared(struct clk *clk); #else static inline int clk_prepare(struct clk *clk) { @@ -263,6 +280,11 @@ clk_bulk_prepare(int num_clks, const struct clk_bulk_data *clks) might_sleep(); return 0; } + +static inline bool clk_is_enabled_when_prepared(struct clk *clk) +{ + return false; +} #endif /** diff --git a/include/linux/pm.h b/include/linux/pm.h index 47aca6bac1..482313a8cc 100644 --- a/include/linux/pm.h +++ b/include/linux/pm.h @@ -537,6 +537,8 @@ struct pm_subsys_data { spinlock_t lock; unsigned int refcount; #ifdef CONFIG_PM_CLK + unsigned int clock_op_might_sleep; + struct mutex clock_mutex; struct list_head clock_list; #endif #ifdef CONFIG_PM_GENERIC_DOMAINS
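To make it concrete which providers the new clk_is_enabled_when_prepared() helper reports as true, here is a hypothetical I2C-attached gate clock whose driver can only reach the hardware from sleepable context and therefore implements only .prepare/.unprepare. The names are invented; the point is the absence of .enable/.disable callbacks, which is what the helper tests for.

#include <linux/clk-provider.h>

/*
 * Hypothetical I2C-controlled clock gate: the hardware sits behind a
 * sleepable bus, so only .prepare/.unprepare are implemented.
 * clk_is_enabled_when_prepared() returns true for such a clock, and the
 * PM clock layer will then use clk_prepare()/clk_unprepare() on runtime
 * resume/suspend instead of clk_enable()/clk_disable().
 */
static int i2c_gate_prepare(struct clk_hw *hw)
{
	/* I2C register write ungating the clock goes here; may sleep. */
	return 0;
}

static void i2c_gate_unprepare(struct clk_hw *hw)
{
	/* I2C register write gating the clock goes here; may sleep. */
}

static const struct clk_ops i2c_gate_ops = {
	.prepare	= i2c_gate_prepare,
	.unprepare	= i2c_gate_unprepare,
	/* no .enable/.disable: for this clock, prepared implies enabled */
};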