Message ID | 1392755543-28335-4-git-send-email-nm@ti.com (mailing list archive)
---|---
State | New, archived
On Tue, Feb 18, 2014 at 02:32:20PM -0600, Nishanth Menon wrote:

> The current regulator model provides the basic building blocks for the
> transitions, however SoC drivers specific to each of these devices, be
> it cpufreq/devfreq, have to replicate the logic for functionality.
>
> To simplify the logic, we can hence introduce a layer that takes care
> of the mundane transition logic, registration mechanisms to provide
> the "user drivers" such as cpufreq/devfreq a generic interface, whose
> details are abstracted by the device tree description for the SoC on
> which the driver operates.

This doesn't really provide a picture of what the generic interface that's being offered is and...

> drivers/power/voltdm/Kconfig                  |    5 +
> drivers/power/voltdm/Makefile                 |    3 +
> drivers/power/voltdm/core.c                   |  347 +++++++++++++++++++++++--
> drivers/power/voltdm/voltage_domain_private.h |   86 ++++++
> 4 files changed, 424 insertions(+), 17 deletions(-)

...the diffstat doesn't make it obvious what the external interface is either. It would be much easier to review this with a clearer picture of what it's aiming to implement.

> +	voltdm_np = of_parse_phandle(np, prop_name, 0);
> +	if (voltdm_np) {
> +		ret = of_parse_phandle_with_args(np, prop_name, "#voltdm-cells",
> +						 0, args);
> +		if (ret)
> +			return ERR_PTR(ret);

There seems to be some DT stuff going on here, is the interface DT only?
On 02/23/2014 07:58 PM, Mark Brown wrote:
> On Tue, Feb 18, 2014 at 02:32:20PM -0600, Nishanth Menon wrote:
>
>> The current regulator model provides the basic building blocks for the
>> transitions, however SoC drivers specific to each of these devices, be
>> it cpufreq/devfreq, have to replicate the logic for functionality.
>
>> To simplify the logic, we can hence introduce a layer that takes care
>> of the mundane transition logic, registration mechanisms to provide
>> the "user drivers" such as cpufreq/devfreq a generic interface, whose
>> details are abstracted by the device tree description for the SoC on
>> which the driver operates.
>
> This doesn't really provide a picture of what the generic interface
> that's being offered is and...

I should have probably picked up part of the cover letter[1] and elaborated further in the commit message here.

Intent here is to allow drivers such as cpufreq-cpu0 to be reused on platforms such as TI's OMAP derivatives, and other SoCs which differ only by the sequence involved in voltage scale operations. So, this patch provides a framework for registering the underlying implementation of the SoC specific voltage change methodology.

Overall, the sequence after this patch is as follows:
a) voltage domain drivers such as those of TI or others register with
   the voltage domain core using devm_voltdm_register.
b) cpufreq-cpu0/devfreq drivers use of_pm_voltdm_notifier_register
   (introduced as part of patch #1) to register notifiers around the
   clk of interest. This request is linked to the specific voltage
   domain using a phandle in the device tree.
c) when cpufreq-cpu0/devfreq does a clk_set_rate, the common clock
   framework triggers notifiers in the voltage domain core which, in
   turn, invokes the corresponding handlers for the voltage domain
   driver, ensuring the right DVFS sequence specific to the SoC is
   triggered.

With this patch, SoC-specific voltage domain intricacies can now be abstracted out to the voltage domain layer. This allows reuse of the logic by multiple drivers such as devfreq/cpufreq and allows these to remain consistent with very minimal (if any) SoC-specific implementation in them.

Among the other obvious alternatives to this approach are:
a) To duplicate the logic multiple times for various drivers such as devfreq/cpufreq and try to ensure the logic sequences are proper. Basically, create cpufreq-abb-omap, cpufreq-avs-omap, devfreq-abb-omap, devfreq-avs-omap, etc.
b) To introduce a specific API for voltage change (something like pm_dev_scale_opp_frequency or the equivalent).

>
>> drivers/power/voltdm/Kconfig                  |    5 +
>> drivers/power/voltdm/Makefile                 |    3 +
>> drivers/power/voltdm/core.c                   |  347 +++++++++++++++++++++++--
>> drivers/power/voltdm/voltage_domain_private.h |   86 ++++++
>> 4 files changed, 424 insertions(+), 17 deletions(-)
>
> ...the diffstat doesn't make it obvious what the external interface is
> either. It would be much easier to review this with a clearer picture
> of what it's aiming to implement.
The drivers such as cpufreq/devfreq have the same interface introduced by patch #1 in this series, of_pm_voltdm_notifier_register/unregister, which is exposed by include/linux/pm_voltage_domain.h.

The underlying implementation of the SoC-specific voltage change methodology can now be isolated in a voltage domain driver using voltage_domain_private.h and the services provided by core.c.

>
>> +	voltdm_np = of_parse_phandle(np, prop_name, 0);
>> +	if (voltdm_np) {
>> +		ret = of_parse_phandle_with_args(np, prop_name, "#voltdm-cells",
>> +						 0, args);
>> +		if (ret)
>> +			return ERR_PTR(ret);
>
> There seems to be some DT stuff going on here, is the interface DT only?
>

Yes. I will make that clear in the documentation.

[1] http://marc.info/?l=linux-kernel&m=139275560531711&w=2
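To make the consumer ("user driver") side of the flow described in this reply concrete, here is a minimal sketch. It assumes the of_pm_voltdm_notifier_register()/of_pm_voltdm_notifier_unregister() prototypes from patch #1 (only partially visible in this thread) and the "<supply>-voltdm"/"#voltdm-cells" device tree linkage used by the code in this patch; the node, clock, and function names are illustrative only and are not part of the series.

```c
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/of.h>
#include <linux/pm_voltage_domain.h>

/*
 * Hypothetical cpufreq-cpu0 style consumer.  Assumed DT linkage
 * (names are illustrative only):
 *
 *	cpu0: cpu@0 {
 *		clocks = <&dpll_mpu_ck>;
 *		cpu0-supply = <&vdd_mpu>;
 *		cpu0-voltdm = <&voltdm_mpu>;	<-- "<supply>-voltdm" phandle
 *	};
 */
static struct notifier_block *volt_nb;
static int voltage_latency;

static int example_dvfs_init(struct device *dev, struct clk *cpu_clk)
{
	/*
	 * Prototype assumed from patch #1: ties voltage scaling (a voltage
	 * domain if one is described in DT, else a bare "cpu0" regulator)
	 * to rate changes of cpu_clk.
	 */
	volt_nb = of_pm_voltdm_notifier_register(dev, dev->of_node, cpu_clk,
						 "cpu0", &voltage_latency);
	if (IS_ERR(volt_nb))
		return PTR_ERR(volt_nb);	/* e.g. -EPROBE_DEFER */

	/*
	 * From here on a plain clk_set_rate() is enough: the clk notifier
	 * fires PRE/POST_RATE_CHANGE and the voltage domain core calls the
	 * SoC driver's voltdm_do_transition() (or regulator_set_voltage_tol()
	 * in the no-voltage-domain case) at the right point in the sequence.
	 */
	return clk_set_rate(cpu_clk, 1000000000);
}

static void example_dvfs_exit(void)
{
	of_pm_voltdm_notifier_unregister(volt_nb);
}
```

The point of the abstraction is visible here: nothing in the consumer needs to know whether the domain is a single regulator, an ABB+VDD pair, or a hardware-managed transition.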
On Mon, Feb 24, 2014 at 08:38:07AM -0600, Nishanth Menon wrote:

> Intent here is to allow drivers such as cpufreq-cpu0 to be reused on
> platforms such as TI's OMAP derivatives, and other SoCs which differ
> only by the sequence involved in voltage scale operations. So, this
> patch provides a framework for registering the underlying
> implementation of the SoC specific voltage change methodology.

That bit is clear, what's very opaque from the code is how this is going to be accomplished.

> Overall, the sequence after this patch is as follows:
> a) voltage domain drivers such as those of TI or others register with
>    the voltage domain core using devm_voltdm_register.
> b) cpufreq-cpu0/devfreq drivers use of_pm_voltdm_notifier_register
>    (introduced as part of patch #1) to register notifiers around the
>    clk of interest. This request is linked to the specific voltage
>    domain using a phandle in the device tree.
> c) when cpufreq-cpu0/devfreq does a clk_set_rate, the common clock
>    framework triggers notifiers in the voltage domain core which, in
>    turn, invokes the corresponding handlers for the voltage domain
>    driver, ensuring the right DVFS sequence specific to the SoC is
>    triggered.

So the first question I have here is what happens if multiple clocks need to be updated in lock step - if we're only triggering off clock notifiers that seems tricky. The other thing here is that the fact that your API is "of_" suggests that it is in fact linked very strongly to DT - it'd be good to split out the layers to make sure things make sense standalone, the DT helpers are obviously good but the API should be able to stand separately.
On 03/02/2014 09:54 PM, Mark Brown wrote:
> On Mon, Feb 24, 2014 at 08:38:07AM -0600, Nishanth Menon wrote:
>
>> Intent here is to allow drivers such as cpufreq-cpu0 to be reused on
>> platforms such as TI's OMAP derivatives, and other SoCs which differ
>> only by the sequence involved in voltage scale operations. So, this
>> patch provides a framework for registering the underlying
>> implementation of the SoC specific voltage change methodology.
>
> That bit is clear, what's very opaque from the code is how this is going
> to be accomplished.

The SoC specific voltage domain drivers register with devm_voltdm_register. The fops provide the abstraction needed for the SoC (example in patch #5 - which introduces an OMAP specific voltage domain which handles ABB and VDD regulators).

What would you suggest that we do to clarify the usage here?

>> Overall, the sequence after this patch is as follows:
>> a) voltage domain drivers such as those of TI or others register with
>>    the voltage domain core using devm_voltdm_register.
>> b) cpufreq-cpu0/devfreq drivers use of_pm_voltdm_notifier_register
>>    (introduced as part of patch #1) to register notifiers around the
>>    clk of interest. This request is linked to the specific voltage
>>    domain using a phandle in the device tree.
>> c) when cpufreq-cpu0/devfreq does a clk_set_rate, the common clock
>>    framework triggers notifiers in the voltage domain core which, in
>>    turn, invokes the corresponding handlers for the voltage domain
>>    driver, ensuring the right DVFS sequence specific to the SoC is
>>    triggered.
>
> So the first question I have here is what happens if multiple clocks
> need to be updated in lock step - if we're only triggering off clock
> notifiers that seems tricky. The other thing here is that the fact that

Yes, that is true, however, there are ways to implement them, for example: We could implement a higher level clock that takes care of the multiple clock node control to handle this kind of scenario. I can elaborate that in the commit message, if that is desirable.

> your API is "of_" suggests that it is in fact linked very strongly to DT
> - it'd be good to split out the layers to make sure things make sense
> standalone, the DT helpers are obviously good but the API should be able
> to stand separately.

You are correct, I had intended the RFC as purely "OF only". I will make it independent of OF in the next revision.
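For the provider side described in this reply, a minimal sketch of a hypothetical SoC voltage domain driver follows, based on devm_voltdm_register() and struct pm_voltdm_desc/pm_voltdm_ops from voltage_domain_private.h in this patch. The two-regulator sequencing body is invented for illustration and is not the OMAP implementation from patch #5.

```c
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/regulator/consumer.h>

#include "voltage_domain_private.h"

struct foo_voltdm {
	struct regulator *vdd_reg;
	struct regulator *abb_reg;
};

/* The only mandatory callback: invoked from the clk notifier path */
static int foo_voltdm_do_transition(struct device *voltdm_dev,
				    void *voltdm_data,
				    unsigned long clk_notifier_flags,
				    int uv, int tol_uv)
{
	struct foo_voltdm *vd = dev_get_drvdata(voltdm_dev);
	int ret;

	/* Invented sequencing: body bias first when ramping up (PRE)... */
	if (clk_notifier_flags == PRE_RATE_CHANGE) {
		ret = regulator_set_voltage_tol(vd->abb_reg, uv, tol_uv);
		if (ret)
			return ret;
	}

	ret = regulator_set_voltage_tol(vd->vdd_reg, uv, tol_uv);
	if (ret)
		return ret;

	/* ...and last when ramping down (POST) */
	if (clk_notifier_flags == POST_RATE_CHANGE)
		ret = regulator_set_voltage_tol(vd->abb_reg, uv, tol_uv);

	return ret;
}

static const struct pm_voltdm_ops foo_voltdm_ops = {
	.voltdm_do_transition	= foo_voltdm_do_transition,
};

static const struct pm_voltdm_desc foo_voltdm_desc = {
	.ops	= &foo_voltdm_ops,
};

static int foo_voltdm_probe(struct platform_device *pdev)
{
	struct pm_voltdm_dev *vdev;

	/* regulator lookup and platform_set_drvdata() elided */

	vdev = devm_voltdm_register(&pdev->dev, &foo_voltdm_desc);
	return PTR_ERR_OR_ZERO(vdev);
}
```

Patch #5 in the series is the real-world version of this for OMAP, handling the ABB LDO and the VDD regulator.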
On Mon, Mar 10, 2014 at 12:11:44PM -0500, Nishanth Menon wrote:
> On 03/02/2014 09:54 PM, Mark Brown wrote:
> > On Mon, Feb 24, 2014 at 08:38:07AM -0600, Nishanth Menon wrote:

> >> Intent here is to allow drivers such as cpufreq-cpu0 to be reused on
> >> platforms such as TI's OMAP derivatives, and other SoCs which differ
> >> only by the sequence involved in voltage scale operations. So, this
> >> patch provides a framework for registering the underlying
> >> implementation of the SoC specific voltage change methodology.

> > That bit is clear, what's very opaque from the code is how this is going
> > to be accomplished.

> The SoC specific voltage domain drivers register with
> devm_voltdm_register. The fops provide the abstraction needed for the
> SoC (example in patch #5 - which introduces an OMAP specific voltage
> domain which handles ABB and VDD regulators).

> What would you suggest that we do to clarify the usage here?

Probably saying something about this in the commit message would be enough - mentioning how the registration occurs and that things are triggered by clock frequency changes.

> > So the first question I have here is what happens if multiple clocks
> > need to be updated in lock step - if we're only triggering off clock
> > notifiers that seems tricky. The other thing here is that the fact that

> Yes, that is true, however, there are ways to implement them, for
> example: We could implement a higher level clock that takes care of
> the multiple clock node control to handle this kind of scenario.

That seems concerning given the fact that people seem to like describing their entire clock trees in DT, we shouldn't be putting implementation stuff there.
On 03/10/2014 12:22 PM, Mark Brown wrote:
> On Mon, Mar 10, 2014 at 12:11:44PM -0500, Nishanth Menon wrote:
>> On 03/02/2014 09:54 PM, Mark Brown wrote:
>>> On Mon, Feb 24, 2014 at 08:38:07AM -0600, Nishanth Menon wrote:
>
>>>> Intent here is to allow drivers such as cpufreq-cpu0 to be reused on
>>>> platforms such as TI's OMAP derivatives, and other SoCs which differ
>>>> only by the sequence involved in voltage scale operations. So, this
>>>> patch provides a framework for registering the underlying
>>>> implementation of the SoC specific voltage change methodology.
>
>>> That bit is clear, what's very opaque from the code is how this is going
>>> to be accomplished.
>
>> The SoC specific voltage domain drivers register with
>> devm_voltdm_register. The fops provide the abstraction needed for the
>> SoC (example in patch #5 - which introduces an OMAP specific voltage
>> domain which handles ABB and VDD regulators).
>
>> What would you suggest that we do to clarify the usage here?
>
> Probably saying something about this in the commit message would be
> enough - mentioning how the registration occurs and that things are
> triggered by clock frequency changes.

OK. Will do, thanks for suggesting the same.

>>> So the first question I have here is what happens if multiple clocks
>>> need to be updated in lock step - if we're only triggering off clock
>>> notifiers that seems tricky. The other thing here is that the fact that
>
>> Yes, that is true, however, there are ways to implement them, for
>> example: We could implement a higher level clock that takes care of
>> the multiple clock node control to handle this kind of scenario.
>
> That seems concerning given the fact that people seem to like describing
> their entire clock trees in DT, we shouldn't be putting implementation
> stuff there.

The only other options are:
a) Abstract it at a higher level at "user drivers", since they are aware of the sequencing needs - but this partially defeats the purpose, unless of course we do a tricky implementation such as: clk a, b, c -> prenotifiers in a, postnotifiers in c (which as you mentioned is a little trickier to get right).
b) introduce a higher level generic dvfs function[1] which does not seem very attractive either.

Any other suggestions other than limiting the usage (and documenting it so) and hoping for a future evolution to take this into consideration?

[1] http://marc.info/?t=139275581900004&r=1&w=2
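Purely to illustrate why option (a) is fragile, here is a sketch of what the "user driver" would have to do if the voltage-domain notifier hangs off a single clock: since the notifier raises the voltage on PRE_RATE_CHANGE when going up and lowers it on POST_RATE_CHANGE when going down (as the handler in this patch does), the notifier-carrying clock must be changed first on the way up and last on the way down. All names are hypothetical.

```c
#include <linux/clk.h>

static int example_lockstep_set_rate(struct clk *clk_with_nb,
				     struct clk *clk_b, struct clk *clk_c,
				     unsigned long new_rate,
				     unsigned long old_rate)
{
	int ret;

	if (new_rate > old_rate) {
		/* scaling up: voltage must rise before any clock changes */
		ret = clk_set_rate(clk_with_nb, new_rate);
		if (!ret)
			ret = clk_set_rate(clk_b, new_rate);
		if (!ret)
			ret = clk_set_rate(clk_c, new_rate);
	} else {
		/* scaling down: voltage may only drop after all clocks drop */
		ret = clk_set_rate(clk_b, new_rate);
		if (!ret)
			ret = clk_set_rate(clk_c, new_rate);
		if (!ret)
			ret = clk_set_rate(clk_with_nb, new_rate);
	}

	return ret;
}
```

This is exactly the per-SoC sequencing knowledge the voltage domain layer is meant to keep out of generic drivers, hence the search for a better answer below.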
On Mon, Mar 10, 2014 at 12:41:05PM -0500, Nishanth Menon wrote:

> The only other options are:
> a) Abstract it at a higher level at "user drivers", since they are
>    aware of the sequencing needs - but this partially defeats the
>    purpose, unless of course we do a tricky implementation such as:
>    clk a, b, c -> prenotifiers in a, postnotifiers in c (which as you
>    mentioned is a little trickier to get right).
> b) introduce a higher level generic dvfs function[1] which does not
>    seem very attractive either.

> Any other suggestions other than limiting the usage (and documenting it
> so) and hoping for a future evolution to take this into consideration?

Something might be doable with telling the clock API about maintaining ratios between clocks? I do think we should have an idea where we'd go with such requirements, even if we don't currently handle it, so that we can hopefully avoid another round of refactoring everything but it doesn't seem 100% essential, just very nice to have.
On 03/10/2014 01:01 PM, Mark Brown wrote:
> On Mon, Mar 10, 2014 at 12:41:05PM -0500, Nishanth Menon wrote:
>
>> The only other options are:
>> a) Abstract it at a higher level at "user drivers", since they are
>>    aware of the sequencing needs - but this partially defeats the
>>    purpose, unless of course we do a tricky implementation such as:
>>    clk a, b, c -> prenotifiers in a, postnotifiers in c (which as you
>>    mentioned is a little trickier to get right).
>> b) introduce a higher level generic dvfs function[1] which does not
>>    seem very attractive either.
>
>> Any other suggestions other than limiting the usage (and documenting it
>> so) and hoping for a future evolution to take this into consideration?
>
> Something might be doable with telling the clock API about maintaining
> ratios between clocks? I do think we should have an idea where we'd go
> with such requirements, even if we don't currently handle it, so that we
> can hopefully avoid another round of refactoring everything but it
> doesn't seem 100% essential, just very nice to have.

Mike,

Any suggestions about the above? Could we use composite clocks in some manner here (even though I think the original intent of the same was not the same)?
Quoting Nishanth Menon (2014-03-10 12:25:43)
> On 03/10/2014 01:01 PM, Mark Brown wrote:
> > On Mon, Mar 10, 2014 at 12:41:05PM -0500, Nishanth Menon wrote:
> >
> >> The only other options are:
> >> a) Abstract it at a higher level at "user drivers", since they are
> >>    aware of the sequencing needs - but this partially defeats the
> >>    purpose, unless of course we do a tricky implementation such as:
> >>    clk a, b, c -> prenotifiers in a, postnotifiers in c (which as you
> >>    mentioned is a little trickier to get right).
> >> b) introduce a higher level generic dvfs function[1] which does not
> >>    seem very attractive either.

I disagree that a higher level DVFS interface is unattractive. I think it has taken a while to get there but people are finally interested in this. A small amount of discussion about this took place at Linaro Connect earlier this month (just hallway talk) but we shouldn't rule out the idea of a general framework for managing DVFS, which frankly is complex enough to merit not being shoved into the clock layer.

> >
> >> Any other suggestions other than limiting the usage (and documenting it
> >> so) and hoping for a future evolution to take this into consideration?
> >
> > Something might be doable with telling the clock API about maintaining
> > ratios between clocks? I do think we should have an idea where we'd go
> > with such requirements, even if we don't currently handle it, so that we
> > can hopefully avoid another round of refactoring everything but it
> > doesn't seem 100% essential, just very nice to have.

This sounds reasonable but I must point out that the clock framework today (based on the existing clk.h api) does not have a single instance of constraint-based logic. For instance there is no clk_set_min_rate or clk_set_max_rate or clk_set_rate_range. These have been discussed before but I feel that pushing something before now would have been premature given that the level of discussion around those features never hit critical mass.

So the ratio thing is sensible, but it would be the first constraint-like mechanism in the clock framework, so what that looks like bears careful consideration.

> Mike,
> Any suggestions about the above? Could we use composite clocks in some
> manner here (even though I think the original intent of the same was
> not the same)?

NACK. Let's just model the hardware in the clock framework, please. If you need to invent fake clocks to represent other constructs then it means the framework is missing something (or we're missing an entire framework, e.g. DVFS).

Regards,
Mike
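For reference, the constraint-style calls mentioned here did not exist in clk.h at this point in time; the prototypes below are hypothetical, shown only to make the "constraint" idea concrete, and are not part of the clk API under discussion.

```c
/* Hypothetical -- none of these exist in the clk API at the time of writing */
int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max);
int clk_set_min_rate(struct clk *clk, unsigned long rate);
int clk_set_max_rate(struct clk *clk, unsigned long rate);

/* and, for the lock-step case, perhaps something along these lines */
int clk_set_rate_ratio(struct clk *clk, struct clk *peer,
		       unsigned int num, unsigned int den);
```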
diff --git a/drivers/power/voltdm/Kconfig b/drivers/power/voltdm/Kconfig index c5353bd..270fdab 100644 --- a/drivers/power/voltdm/Kconfig +++ b/drivers/power/voltdm/Kconfig @@ -5,3 +5,8 @@ config VOLTAGE_DOMAIN ---help--- Core voltage domain framework using common clock framework's notifier mechanism. + +menu "Voltage Domain Framework Drivers" + depends on VOLTAGE_DOMAIN + +endmenu diff --git a/drivers/power/voltdm/Makefile b/drivers/power/voltdm/Makefile index 3fa4408..fd32040 100644 --- a/drivers/power/voltdm/Makefile +++ b/drivers/power/voltdm/Makefile @@ -1,2 +1,5 @@ # Generic voltage domain obj-$(CONFIG_VOLTAGE_DOMAIN) += core.o + +# Hardware specific voltage domain +# please keep this section sorted lexicographically by file/directory path name diff --git a/drivers/power/voltdm/core.c b/drivers/power/voltdm/core.c index d0ed27e..69f6e81 100644 --- a/drivers/power/voltdm/core.c +++ b/drivers/power/voltdm/core.c @@ -10,12 +10,32 @@ * that scale voltage when a clock changes its output frequency. */ #include <linux/device.h> +#include <linux/module.h> #include <linux/of.h> #include <linux/pm_opp.h> #include <linux/pm_voltage_domain.h> #include <linux/regulator/consumer.h> #include <linux/slab.h> +#include "voltage_domain_private.h" + +/** + * struct pm_voltdm_dev - internal representation of voltage domain devices + * @desc: voltage domain description + * @dev: voltage domain device + * @list: list to remaining voltage domain devices + * @lock: mutex to control data structure modifications and serialize ops + * @notifier_list: list of notifiers registered for this device + */ +struct pm_voltdm_dev { + const struct pm_voltdm_desc *desc; + struct device *dev; + struct list_head list; + /* list lock */ + struct mutex lock; + struct list_head notifier_list; +}; + /** * struct volt_scale_data - Internal structure to maintain notifier information * @dev: device on behalf of which we register the notifier @@ -23,6 +43,9 @@ * @reg: regulator if any which is used for scaling voltage * @tol: voltage tolerance in % * @nb: notifier block pointer + * @list: list head for the notifier + * @vdev: pointer to voltage domain device for this notifier + * @voltdm_data: voltdm driver specific data */ struct volt_scale_data { struct device *dev; @@ -30,23 +53,208 @@ struct volt_scale_data { struct regulator *reg; int tol; struct notifier_block nb; + struct list_head list; + + struct pm_voltdm_dev *vdev; + void *voltdm_data; }; #define to_volt_scale_data(_nb) container_of(_nb, \ struct volt_scale_data, nb) +static DEFINE_MUTEX(pm_voltdm_list_lock); +static LIST_HEAD(pm_voltdm_list); + +static inline bool voltmd_skip_check(struct pm_voltdm_dev *vdev) +{ + bool ret = false; + + if (vdev) { + const struct pm_voltdm_desc *desc; + + if (IS_ERR(vdev)) + return false; + + mutex_lock(&vdev->lock); + desc = vdev->desc; + + if (desc->flags & VOLTAGE_DOMAIN_FLAG_NOTIFY_ALL) + ret = true; + mutex_unlock(&vdev->lock); + } + + return ret; +} + +static inline int voltmd_scale_voltage(struct volt_scale_data *vsd, + unsigned long flags, int volt, int tol) +{ + int ret; + struct pm_voltdm_dev *vdev = vsd->vdev; + + if (vdev) { + const struct pm_voltdm_ops *ops; + + if (IS_ERR(vdev)) + return PTR_ERR(vdev); + + mutex_lock(&vdev->lock); + ops = vdev->desc->ops; + + ret = ops->voltdm_do_transition(vdev->dev, + vsd->voltdm_data, + flags, volt, tol); + mutex_unlock(&vdev->lock); + } else { + ret = regulator_set_voltage_tol(vsd->reg, volt, tol); + } + + return ret; +} + +static struct pm_voltdm_dev *voltdm_parse_of(struct device_node 
*np, + const char *supply, + struct of_phandle_args *args) +{ + char prop_name[32]; /* 32 is max size of property name */ + bool found = false; + struct device_node *voltdm_np; + struct pm_voltdm_dev *vdev = NULL; + int ret; + + snprintf(prop_name, sizeof(prop_name), "%s-voltdm", supply); + voltdm_np = of_parse_phandle(np, prop_name, 0); + if (voltdm_np) { + ret = of_parse_phandle_with_args(np, prop_name, "#voltdm-cells", + 0, args); + if (ret) + return ERR_PTR(ret); + + mutex_lock(&pm_voltdm_list_lock); + list_for_each_entry(vdev, &pm_voltdm_list, list) + if (vdev->dev->parent && voltdm_np == vdev->dev->of_node) { + found = true; + break; + } + mutex_unlock(&pm_voltdm_list_lock); + + /* if node is present and not ready, then defer */ + if (!found) + return ERR_PTR(-EPROBE_DEFER); + } else { + return NULL; + } + + return vdev; +} + +static int voltdm_get(struct volt_scale_data *vsd, struct device_node *np, + const char *supply, struct of_phandle_args *args, + bool *skip_reg) +{ + struct pm_voltdm_dev *vdev = vsd->vdev; + struct device *dev = vsd->dev; + int ret = 0; + + if (vdev) { + const struct pm_voltdm_ops *ops; + if (IS_ERR(vdev)) + return PTR_ERR(vdev); + + mutex_lock(&vdev->lock); + if (!try_module_get(vdev->dev->driver->owner)) { + ret = -ENODEV; + } else { + ops = vdev->desc->ops; + if (ops->voltdm_get) + ret = ops->voltdm_get(vdev->dev, dev, np, + args, supply, + &vsd->voltdm_data); + if (ret) + module_put(vdev->dev->driver->owner); + } + if (!ret) + list_add(&vsd->list, &vdev->notifier_list); + + mutex_unlock(&vdev->lock); + } else { + vsd->reg = regulator_get_optional(dev, supply); + if (IS_ERR(vsd->reg)) + ret = PTR_ERR(vsd->reg); + /* Regulator is not mandatory */ + if (ret != -EPROBE_DEFER) { + ret = 0; + *skip_reg = false; + dev_dbg(dev, "%s: Failed to get %s regulator:%d\n", + __func__, supply, ret); + } + } + + return ret; +} + +static void voltdm_put(struct volt_scale_data *vsd) +{ + struct pm_voltdm_dev *vdev = vsd->vdev; + + if (vdev) { + const struct pm_voltdm_ops *ops; + + if (IS_ERR(vdev)) + return; + + mutex_lock(&vdev->lock); + ops = vdev->desc->ops; + if (ops->voltdm_put) + ops->voltdm_put(vdev->dev, vsd->dev, vsd->voltdm_data); + list_del(&vsd->list); + module_put(vdev->dev->driver->owner); + mutex_unlock(&vdev->lock); + } else { + if (!IS_ERR(vsd->reg)) + regulator_put(vsd->reg); + } +} + +static int voltdm_get_latency(struct volt_scale_data *vsd, int min, int max) +{ + struct pm_voltdm_dev *vdev = vsd->vdev; + const struct pm_voltdm_ops *ops; + int ret; + + if (!vdev) + return regulator_set_voltage_time(vsd->reg, min, max); + + if (IS_ERR(vdev)) + return PTR_ERR(vdev); + + mutex_lock(&vdev->lock); + ops = vdev->desc->ops; + + if (!ops->voltdm_latency) + ret = -ENXIO; + else + ret = ops->voltdm_latency(vdev->dev, vsd->voltdm_data, + min, max); + mutex_unlock(&vdev->lock); + + return ret; +} + static int clk_volt_notifier_handler(struct notifier_block *nb, unsigned long flags, void *data) { struct clk_notifier_data *cnd = data; struct volt_scale_data *vsd = to_volt_scale_data(nb); + struct pm_voltdm_dev *vdev = vsd->vdev; int ret, volt, tol; struct dev_pm_opp *opp; unsigned long old_rate = cnd->old_rate; unsigned long new_rate = cnd->new_rate; - if ((new_rate < old_rate && flags == PRE_RATE_CHANGE) || - (new_rate > old_rate && flags == POST_RATE_CHANGE)) + if (!voltmd_skip_check(vdev) && + ((new_rate < old_rate && flags == PRE_RATE_CHANGE) || + (new_rate > old_rate && flags == POST_RATE_CHANGE))) return NOTIFY_OK; rcu_read_lock(); @@ -69,7 +277,7 @@ static int 
clk_volt_notifier_handler(struct notifier_block *nb, dev_dbg(vsd->dev, "%s: %lu -> %lu, V=%d, tol=%d, clk_flag=%lu\n", __func__, old_rate, new_rate, volt, tol, flags); - ret = regulator_set_voltage_tol(vsd->reg, volt, tol); + ret = voltmd_scale_voltage(vsd, flags, volt, tol); if (ret) { dev_err(vsd->dev, "%s: Failed to scale voltage(%u): %d\n", __func__, @@ -101,6 +309,14 @@ struct notifier_block *of_pm_voltdm_notifier_register(struct device *dev, struct dev_pm_opp *opp; unsigned long min, max, freq; int ret; + struct of_phandle_args voltdm_args = {NULL}; + struct pm_voltdm_dev *vdev = NULL; + bool skip_reg = false; + + /* First look for voltdm of node */ + vdev = voltdm_parse_of(np, supply, &voltdm_args); + if (IS_ERR(vdev)) + return (struct notifier_block *)vdev; vsd = kzalloc(sizeof(*vsd), GFP_KERNEL); if (!vsd) @@ -109,19 +325,16 @@ struct notifier_block *of_pm_voltdm_notifier_register(struct device *dev, vsd->dev = dev; vsd->clk = clk; vsd->nb.notifier_call = clk_volt_notifier_handler; - vsd->reg = regulator_get_optional(dev, supply); - ret = 0; - if (IS_ERR(vsd->reg)) - ret = PTR_ERR(vsd->reg); - /* regulator is not mandatory */ - if (ret != -EPROBE_DEFER) { - dev_warn(dev, "%s: Failed to get %s regulator:%d\n", + vsd->vdev = vdev; + ret = voltdm_get(vsd, np, supply, &voltdm_args, &skip_reg); + + if (ret) { + dev_warn(dev, "%s: Failed to get %s regulator/voltdm: %d\n", __func__, supply, ret); - ret = 0; goto err_free_vsd; } - /* For devices that are not ready.... */ - if (ret) + /* if not mandatory... */ + if (skip_reg) goto err_free_vsd; rcu_read_lock(); @@ -138,7 +351,7 @@ struct notifier_block *of_pm_voltdm_notifier_register(struct device *dev, max = dev_pm_opp_get_voltage(opp); rcu_read_unlock(); - *voltage_latency = regulator_set_voltage_time(vsd->reg, min, max); + *voltage_latency = voltdm_get_latency(vsd, min, max); if (*voltage_latency < 0) { dev_warn(dev, "%s: Fail calculating voltage latency[%ld<->%ld]:%d\n", @@ -163,7 +376,7 @@ err_bad_opp: dev_err(dev, "%s: failed to get OPP, %d\n", __func__, ret); err_free_reg: - regulator_put(vsd->reg); + voltdm_put(vsd); err_free_vsd: kfree(vsd); @@ -187,9 +400,109 @@ void of_pm_voltdm_notifier_unregister(struct notifier_block *nb) vsd = to_volt_scale_data(nb); clk = vsd->clk; clk_notifier_unregister(clk, nb); - if (!IS_ERR(vsd->reg)) - regulator_put(vsd->reg); + voltdm_put(vsd); kfree(vsd); } EXPORT_SYMBOL_GPL(of_pm_voltdm_notifier_unregister); + +static void devm_voltdm_release(struct device *dev, void *res) +{ + struct pm_voltdm_dev *vdev = *(struct pm_voltdm_dev **)res; + struct volt_scale_data *vsd; + + mutex_lock(&pm_voltdm_list_lock); + mutex_lock(&vdev->lock); + list_for_each_entry(vsd, &vdev->notifier_list, list) { + dev_warn(dev, "%s: pending notifier from device %s!\n", + __func__, dev_name(vsd->dev)); + vsd->vdev = ERR_PTR(-EINVAL); + } + mutex_unlock(&vdev->lock); + + list_del(&vdev->list); + mutex_unlock(&pm_voltdm_list_lock); + + kfree(vdev); +} + +/** + * devm_voltdm_register - Resource managed voltage domain registration + * @dev: pointer to the device representing the voltage domain + * @desc: voltage domain descriptor + * + * Called by voltage domain drivers to register a voltagedomain. Returns a + * valid pointer to struct pm_voltdm_dev on success or an ERR_PTR() on + * error. The voltagedomain will automatically be released when the device + * is unbound. 
+ */ +struct pm_voltdm_dev *devm_voltdm_register(struct device *dev, + const struct pm_voltdm_desc *desc) +{ + struct pm_voltdm_dev **ptr, *vdev; + + if (!dev || !desc) + return ERR_PTR(-EINVAL); + + if (!desc->ops) + return ERR_PTR(-EINVAL); + + /* Mandatory to have notify transition */ + if (!desc->ops->voltdm_do_transition) { + dev_err(dev, "%s: Bad desc -do_transition missing\n", __func__); + return ERR_PTR(-EINVAL); + } + + vdev = kzalloc(sizeof(*vdev), GFP_KERNEL); + if (!vdev) + return ERR_PTR(-ENOMEM); + + ptr = devres_alloc(devm_voltdm_release, sizeof(*ptr), GFP_KERNEL); + if (!ptr) { + kfree(vdev); + return ERR_PTR(-ENOMEM); + } + + vdev->desc = desc; + vdev->dev = dev; + mutex_init(&vdev->lock); + INIT_LIST_HEAD(&vdev->notifier_list); + mutex_lock(&pm_voltdm_list_lock); + list_add(&vdev->list, &pm_voltdm_list); + mutex_unlock(&pm_voltdm_list_lock); + + *ptr = vdev; + devres_add(dev, ptr); + + return vdev; +} +EXPORT_SYMBOL_GPL(devm_voltdm_register); + +static int devm_vdev_match(struct device *dev, void *res, void *data) +{ + struct pm_voltdm_dev **r = res; + if (!r || !*r) { + WARN_ON(!r || !*r); + return 0; + } + return *r == data; +} + +/** + * devm_voltdm_unregister - Resource managed voltagedomain unregister + * @vdev: voltage domain device returned by devm_voltdm_register() + * + * Unregister a voltdm registered with devm_voltdm_register(). + * Normally this function will not need to be called and the resource + * management code will ensure that the resource is freed. + */ +void devm_voltdm_unregister(struct pm_voltdm_dev *vdev) +{ + int rc; + struct device *dev = vdev->dev; + + rc = devres_release(dev, devm_voltdm_release, devm_vdev_match, vdev); + if (rc != 0) + WARN_ON(rc); +} +EXPORT_SYMBOL_GPL(devm_voltdm_unregister); diff --git a/drivers/power/voltdm/voltage_domain_private.h b/drivers/power/voltdm/voltage_domain_private.h new file mode 100644 index 0000000..e63c2b1 --- /dev/null +++ b/drivers/power/voltdm/voltage_domain_private.h @@ -0,0 +1,86 @@ +/* + * Voltage Domain private header for voltage domain drivers + * + * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com/ + * Nishanth Menon + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed "as is" WITHOUT ANY WARRANTY of any + * kind, whether express or implied; without even the implied warranty + * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + */ + +#ifndef __PM_VOLTAGE_DOMAIN_PRIVATE__ +#define __PM_VOLTAGE_DOMAIN_PRIVATE__ + +#include <linux/pm_voltage_domain.h> + +struct pm_voltdm_dev; + +/** + * struct pm_voltdm_ops - Operations functions for voltage domain + * @voltdm_get: (optional) invoked when notifier is to be registered. + * @voltdm_put: (optional) invoked when notifier is unregistered. + * @voltdm_latency: (optional) compute and provide voltage domain + * transition latency. + * @voltdm_do_transition: (mandatory) callback for notification + * + * np_args - is arguments to the node phandle - useful when specific voltage + * domain is refered to using indices. + * voltdm_data - is the voltage domain driver specific data corresponding to + * driver information per registration (may point to domain driver specific + * data including reference to index. This is always provided back to the + * driver at various follow on operations. 
+ * clk_notifier_flags - follows the standard clk notification flags + */ +struct pm_voltdm_ops { + int (*voltdm_get)(struct device *voltdm_dev, + struct device *request_dev, + struct device_node *np, + struct of_phandle_args *np_args, + const char *supply, + void **voltdm_data); + int (*voltdm_latency)(struct device *voltdm_dev, void *voltdm_data, + unsigned long min_uv, unsigned long max_uv); + int (*voltdm_do_transition)(struct device *voltdm_dev, + void *voltdm_data, + unsigned long clk_notifier_flags, + int uv, int tol_uv); + void (*voltdm_put)(struct device *voltdm_dev, + struct device *request_dev, + void *voltdm_data); +}; + +#define VOLTAGE_DOMAIN_FLAG_NOTIFY_ALL (0x1 << 0) + +/** + * struct pm_voltdm_desc - Descriptor for the voltage domain + * @ops: operations for the voltage domain + * @flags: flags controlling the various operations + */ +struct pm_voltdm_desc { + const struct pm_voltdm_ops *ops; + u16 flags; +}; + +#if defined(CONFIG_VOLTAGE_DOMAIN) +struct pm_voltdm_dev *devm_voltdm_register(struct device *dev, + const struct pm_voltdm_desc *desc); +void devm_voltdm_unregister(struct pm_voltdm_dev *vdev); +#else +static inline struct pm_voltdm_dev *devm_voltdm_register(struct device *dev, + const struct pm_voltdm_desc *desc) +{ + return ERR_PTR(-ENODEV); +} + +static inline void devm_voltdm_unregister(struct pm_voltdm_dev *vdev) +{ +} +#endif /* VOLTAGE_DOMAIN */ + +#endif /* __PM_VOLTAGE_DOMAIN_PRIVATE__ */
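To show how the optional callbacks and the per-registration voltdm_data cookie documented above fit together, here is a small sketch for a hypothetical SoC whose domains are selected by one phandle cell (#voltdm-cells = <1>); all names and the latency model are illustrative only.

```c
#include <linux/of.h>
#include <linux/slab.h>

#include "voltage_domain_private.h"

struct foo_domain_ref {
	int index;			/* which domain this consumer picked */
};

static int foo_voltdm_get(struct device *voltdm_dev,
			  struct device *request_dev,
			  struct device_node *np,
			  struct of_phandle_args *np_args,
			  const char *supply,
			  void **voltdm_data)
{
	struct foo_domain_ref *ref;

	ref = kzalloc(sizeof(*ref), GFP_KERNEL);
	if (!ref)
		return -ENOMEM;

	/* np_args carries the cells of the "<supply>-voltdm" phandle */
	ref->index = np_args->args_count ? np_args->args[0] : 0;

	*voltdm_data = ref;	/* handed back in every later callback */
	return 0;
}

static int foo_voltdm_latency(struct device *voltdm_dev, void *voltdm_data,
			      unsigned long min_uv, unsigned long max_uv)
{
	/*
	 * voltdm_data is the foo_domain_ref set up in foo_voltdm_get(); a
	 * real driver would derive a per-domain slew rate from it.  A flat
	 * 1 uV/us is assumed here purely for illustration.
	 */
	return max_uv - min_uv;
}

static void foo_voltdm_put(struct device *voltdm_dev,
			   struct device *request_dev, void *voltdm_data)
{
	kfree(voltdm_data);
}
```

These would be wired into a pm_voltdm_ops alongside the mandatory voltdm_do_transition() and registered with devm_voltdm_register(), as in the provider sketch earlier in the thread.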
Many SoCs have basic concepts of voltage rails supplying a specific SoC device. These voltage rails may be as simple as a single regulator or as complex as three or more regulators that are transitioned in tandem with respect to clock changes. In some cases, they may use custom frameworks to do the actual voltage transition, or use hardware modules that take hints about the required configuration and do optimized voltage transitions.

The current regulator model provides the basic building blocks for the transitions, however SoC drivers specific to each of these devices, be it cpufreq/devfreq, have to replicate the logic for functionality.

To simplify the logic, we can hence introduce a layer that takes care of the mundane transition logic, registration mechanisms to provide the "user drivers" such as cpufreq/devfreq a generic interface, whose details are abstracted by the device tree description for the SoC on which the driver operates.

This gives us the opportunity to make generic cpufreq/devfreq drivers which may deal with a single clock, while the actual voltage change mechanism can be customized for the SoC without impact to the generic driver.

Signed-off-by: Nishanth Menon <nm@ti.com>
---
 drivers/power/voltdm/Kconfig                  |    5 +
 drivers/power/voltdm/Makefile                 |    3 +
 drivers/power/voltdm/core.c                   |  347 +++++++++++++++++++++++--
 drivers/power/voltdm/voltage_domain_private.h |   86 ++++++
 4 files changed, 424 insertions(+), 17 deletions(-)
 create mode 100644 drivers/power/voltdm/voltage_domain_private.h