Message ID | 1452533760-13787-5-git-send-email-juri.lelli@arm.com (mailing list archive) |
---|---|
State | RFC |
On Mon, Jan 11, 2016 at 05:35:45PM +0000, Juri Lelli wrote:
> +/**
> + * The "cpufreq driver" - the arch- or hardware-dependent low
> + * level driver of CPUFreq support, and its spinlock (cpufreq_driver_lock).
> + * This lock also protects cpufreq_cpu_data array and cpufreq_policy_list.
> + */
> +static struct cpufreq_driver *cpufreq_driver;
> +static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data);
>  static LIST_HEAD(cpufreq_policy_list);
> +static DEFINE_RWLOCK(cpufreq_driver_lock);

Part of my suggestion was to fold the per-cpu data of cpufreq_cpu_data
into struct cpufreq_driver.

That way each cpufreq_driver will have its own copy and there'd be only
the one global pointer to swizzle. Something very well suited to RCU.

--
To unsubscribe from this list: send the line "unsubscribe linux-pm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Mon, Jan 11, 2016 at 05:35:45PM +0000, Juri Lelli wrote:
> +/**
> + * Iterate over governors
> + *
> + * cpufreq_governor_list is protected by cpufreq_governor_mutex.
> + */
> +static LIST_HEAD(cpufreq_governor_list);
> +static DEFINE_MUTEX(cpufreq_governor_mutex);
> +#define for_each_governor(__governor) \
> +	list_for_each_entry(__governor, &cpufreq_governor_list, governor_list)

So you could stuff the lockdep_assert_held() you later add individually
into the for_each_governor macro, impossible to forget that way.
On Monday, January 11, 2016 11:05:28 PM Peter Zijlstra wrote:
> On Mon, Jan 11, 2016 at 05:35:45PM +0000, Juri Lelli wrote:
> > +/**
> > + * The "cpufreq driver" - the arch- or hardware-dependent low
> > + * level driver of CPUFreq support, and its spinlock (cpufreq_driver_lock).
> > + * This lock also protects cpufreq_cpu_data array and cpufreq_policy_list.
> > + */
> > +static struct cpufreq_driver *cpufreq_driver;
> > +static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data);
> >  static LIST_HEAD(cpufreq_policy_list);
> > +static DEFINE_RWLOCK(cpufreq_driver_lock);
>
> Part of my suggestion was to fold the per-cpu data of cpufreq_cpu_data
> into struct cpufreq_driver.
>
> That way each cpufreq_driver will have its own copy and there'd be only
> the one global pointer to swizzle. Something very well suited to RCU.

Well, I'm not really sure reworking all that is necessary.

What we need is to be able to call something analogous to dbs_timer_handler()
from the scheduler and a driver callback from there (if present). For that,
it should be sufficient to have a pointer to that callback (that may be set
upon driver registration) protected by RCU (or should that be sched RCU
rather?) if I'm not missing anything.

Thanks,
Rafael
On Tue, Jan 12, 2016 at 12:03:39AM +0100, Rafael J. Wysocki wrote:
> On Monday, January 11, 2016 11:05:28 PM Peter Zijlstra wrote:
> > On Mon, Jan 11, 2016 at 05:35:45PM +0000, Juri Lelli wrote:
> > > +/**
> > > + * The "cpufreq driver" - the arch- or hardware-dependent low
> > > + * level driver of CPUFreq support, and its spinlock (cpufreq_driver_lock).
> > > + * This lock also protects cpufreq_cpu_data array and cpufreq_policy_list.
> > > + */
> > > +static struct cpufreq_driver *cpufreq_driver;
> > > +static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data);
> > >  static LIST_HEAD(cpufreq_policy_list);
> > > +static DEFINE_RWLOCK(cpufreq_driver_lock);
> >
> > Part of my suggestion was to fold the per-cpu data of cpufreq_cpu_data
> > into struct cpufreq_driver.
> >
> > That way each cpufreq_driver will have its own copy and there'd be only
> > the one global pointer to swizzle. Something very well suited to RCU.
>
> Well, I'm not really sure reworking all that is necessary.
>
> What we need is to be able to call something analogous to dbs_timer_handler()
> from the scheduler and a driver callback from there (if present). For that,
> it should be sufficient to have a pointer to that callback (that may be set
> upon driver registration) protected by RCU (or should that be sched RCU
> rather?) if I'm not missing anything.

But such a callback will invariably want to use the per-cpu state. And
now you have two pointers, one for the driver and one for the per-cpu
state. Keeping that in sync is a pain.

Moving the per-cpu data into the driver solves that trivially.
On 11-01-16, 17:35, Juri Lelli wrote:
> Currently it is not easy to figure out which lock/mutex protects which data
> structure. Clean things up by moving data structures and their locks/mutexes
> closer; also, change comments to document relations further.
>
> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
> Cc: Viresh Kumar <viresh.kumar@linaro.org>
> Signed-off-by: Juri Lelli <juri.lelli@arm.com>
> ---
>  drivers/cpufreq/cpufreq.c | 33 ++++++++++++++++++---------------
>  1 file changed, 18 insertions(+), 15 deletions(-)

Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
On 11-01-16, 23:07, Peter Zijlstra wrote:
> On Mon, Jan 11, 2016 at 05:35:45PM +0000, Juri Lelli wrote:
> > +/**
> > + * Iterate over governors
> > + *
> > + * cpufreq_governor_list is protected by cpufreq_governor_mutex.
> > + */
> > +static LIST_HEAD(cpufreq_governor_list);
> > +static DEFINE_MUTEX(cpufreq_governor_mutex);
> > +#define for_each_governor(__governor) \
> > +	list_for_each_entry(__governor, &cpufreq_governor_list, governor_list)
>
> So you could stuff the lockdep_assert_held() you later add individually
> into the for_each_governor macro, impossible to forget that way.

How exactly? I couldn't see how it can be done in a neat and clean way.
Hi,

On 12/01/16 09:27, Peter Zijlstra wrote:
> On Tue, Jan 12, 2016 at 12:03:39AM +0100, Rafael J. Wysocki wrote:
> > On Monday, January 11, 2016 11:05:28 PM Peter Zijlstra wrote:
> > > On Mon, Jan 11, 2016 at 05:35:45PM +0000, Juri Lelli wrote:
> > > > +/**
> > > > + * The "cpufreq driver" - the arch- or hardware-dependent low
> > > > + * level driver of CPUFreq support, and its spinlock (cpufreq_driver_lock).
> > > > + * This lock also protects cpufreq_cpu_data array and cpufreq_policy_list.
> > > > + */
> > > > +static struct cpufreq_driver *cpufreq_driver;
> > > > +static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data);
> > > >  static LIST_HEAD(cpufreq_policy_list);
> > > > +static DEFINE_RWLOCK(cpufreq_driver_lock);
> > >
> > > Part of my suggestion was to fold the per-cpu data of cpufreq_cpu_data
> > > into struct cpufreq_driver.
> > >
> > > That way each cpufreq_driver will have its own copy and there'd be only
> > > the one global pointer to swizzle. Something very well suited to RCU.
> >
> > Well, I'm not really sure reworking all that is necessary.
> >
> > What we need is to be able to call something analogous to dbs_timer_handler()
> > from the scheduler and a driver callback from there (if present). For that,
> > it should be sufficient to have a pointer to that callback (that may be set
> > upon driver registration) protected by RCU (or should that be sched RCU
> > rather?) if I'm not missing anything.
>
> But such a callback will invariably want to use the per-cpu state. And
> now you have two pointers, one for the driver and one for the per-cpu
> state. Keeping that in sync is a pain.
>
> Moving the per-cpu data into the driver solves that trivially.

Oh, I think I now finally get your suggestion completely (I hope :-));
and it makes sense to me. On top of this series I have patches that
implement RCU logic.

What I tried to do is to protect all cpufreq.c stuff with a single mutex
(plus RCU logic) and single policies with another mutex (plus RCU
logic). What you are saying should make things easier to get right. I
have to go back and try that.

My idea was to try to build some confidence that this first set is right
and then post the second part implementing RCU logic.

Thanks,

- Juri
On Tuesday, January 12, 2016 09:27:18 AM Peter Zijlstra wrote:
> On Tue, Jan 12, 2016 at 12:03:39AM +0100, Rafael J. Wysocki wrote:
> > On Monday, January 11, 2016 11:05:28 PM Peter Zijlstra wrote:
> > > On Mon, Jan 11, 2016 at 05:35:45PM +0000, Juri Lelli wrote:
> > > > +/**
> > > > + * The "cpufreq driver" - the arch- or hardware-dependent low
> > > > + * level driver of CPUFreq support, and its spinlock (cpufreq_driver_lock).
> > > > + * This lock also protects cpufreq_cpu_data array and cpufreq_policy_list.
> > > > + */
> > > > +static struct cpufreq_driver *cpufreq_driver;
> > > > +static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data);
> > > >  static LIST_HEAD(cpufreq_policy_list);
> > > > +static DEFINE_RWLOCK(cpufreq_driver_lock);
> > >
> > > Part of my suggestion was to fold the per-cpu data of cpufreq_cpu_data
> > > into struct cpufreq_driver.
> > >
> > > That way each cpufreq_driver will have its own copy and there'd be only
> > > the one global pointer to swizzle. Something very well suited to RCU.
> >
> > Well, I'm not really sure reworking all that is necessary.
> >
> > What we need is to be able to call something analogous to dbs_timer_handler()
> > from the scheduler and a driver callback from there (if present). For that,
> > it should be sufficient to have a pointer to that callback (that may be set
> > upon driver registration) protected by RCU (or should that be sched RCU
> > rather?) if I'm not missing anything.
>
> But such a callback will invariably want to use the per-cpu state.

Which likely is the driver's own per-cpu state, not the policy object
itself.

> And now you have two pointers, one for the driver and one for the per-cpu
> state. Keeping that in sync is a pain.

Well, I basically need to guarantee that all of the pointers involved are
set and the data structures are valid when the driver pointer is set.

> Moving the per-cpu data into the driver solves that trivially.

It doesn't really address the case when the driver has its own per-cpu
state as I said above.

Thanks,
Rafael
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 2e41356..00a00cd 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -31,7 +31,25 @@
 #include <linux/tick.h>
 #include <trace/events/power.h>
 
+/**
+ * Iterate over governors
+ *
+ * cpufreq_governor_list is protected by cpufreq_governor_mutex.
+ */
+static LIST_HEAD(cpufreq_governor_list);
+static DEFINE_MUTEX(cpufreq_governor_mutex);
+#define for_each_governor(__governor) \
+	list_for_each_entry(__governor, &cpufreq_governor_list, governor_list)
+
+/**
+ * The "cpufreq driver" - the arch- or hardware-dependent low
+ * level driver of CPUFreq support, and its spinlock (cpufreq_driver_lock).
+ * This lock also protects cpufreq_cpu_data array and cpufreq_policy_list.
+ */
+static struct cpufreq_driver *cpufreq_driver;
+static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data);
 static LIST_HEAD(cpufreq_policy_list);
+static DEFINE_RWLOCK(cpufreq_driver_lock);
 
 static inline bool policy_is_inactive(struct cpufreq_policy *policy)
 {
@@ -86,21 +104,6 @@ static struct cpufreq_policy *first_policy(bool active)
 #define for_each_inactive_policy(__policy) \
 	for_each_suitable_policy(__policy, false)
 
-/* Iterate over governors */
-static LIST_HEAD(cpufreq_governor_list);
-#define for_each_governor(__governor) \
-	list_for_each_entry(__governor, &cpufreq_governor_list, governor_list)
-
-/**
- * The "cpufreq driver" - the arch- or hardware-dependent low
- * level driver of CPUFreq support, and its spinlock. This lock
- * also protects the cpufreq_cpu_data array.
- */
-static struct cpufreq_driver *cpufreq_driver;
-static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data);
-static DEFINE_RWLOCK(cpufreq_driver_lock);
-static DEFINE_MUTEX(cpufreq_governor_mutex);
-
 /* Flag to suspend/resume CPUFreq governors */
 static bool cpufreq_suspended;
Currently it is not easy to figure out which lock/mutex protects which
data structure. Clean things up by moving data structures and their
locks/mutexes closer; also, change comments to document relations
further.

Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
---
 drivers/cpufreq/cpufreq.c | 33 ++++++++++++++++++---------------
 1 file changed, 18 insertions(+), 15 deletions(-)