Message ID | 53298A7D.3080400@linux.vnet.ibm.com (mailing list archive)
---|---
State | RFC, archived
On 19 March 2014 17:45, Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> wrote:
> diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
> +	bool			transition_ongoing; /* Tracks transition status */
> +	struct mutex		transition_lock;
> +	wait_queue_head_t	transition_wait;

Similar to what I have done in my last version, why do you need
transition_ongoing and transition_wait? Simply work with
transition_lock? i.e. Acquire it for the complete transition sequence.
On 03/19/2014 07:05 PM, Viresh Kumar wrote:
> On 19 March 2014 17:45, Srivatsa S. Bhat
> <srivatsa.bhat@linux.vnet.ibm.com> wrote:
>> diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
>> +	bool			transition_ongoing; /* Tracks transition status */
>> +	struct mutex		transition_lock;
>> +	wait_queue_head_t	transition_wait;
>
> Similar to what I have done in my last version, why do you need
> transition_ongoing and transition_wait? Simply work with
> transition_lock? i.e. Acquire it for the complete transition sequence.
>

We *can't* acquire it for the complete transition sequence in case of
drivers that do asynchronous notification, because PRECHANGE is done in
one thread and POSTCHANGE is done in a totally different thread! You
can't acquire a lock in one task and release it in a different task.
That would be a fundamental violation of locking.

That's why I introduced the wait queue to help us create a "flow" which
encompasses 2 different, but co-ordinating tasks. You simply can't do
that elegantly by using plain locks alone.

Regards,
Srivatsa S. Bhat
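[To make the cross-task hand-off concrete, here is a minimal kernel-style
sketch. It is illustrative only and not taken from the patch; all
"example_" names are invented, and serializing concurrent begin() callers
is deliberately left out, since in the patch that is the job of
transition_lock.]

#include <linux/wait.h>

static bool example_ongoing;
static DECLARE_WAIT_QUEUE_HEAD(example_wait);

/* Runs in task A, e.g. the task that sends the PRECHANGE notification */
static void example_flow_begin(void)
{
	/* Sleep until any previous flow has been ended by example_flow_end() */
	wait_event(example_wait, !example_ongoing);

	/* NOTE: a real implementation must also serialize concurrent callers here */
	example_ongoing = true;

	/* ... send PRECHANGE, kick off the frequency change in hardware ... */
}

/* May run in task B, e.g. a bottom half that sends the POSTCHANGE notification */
static void example_flow_end(void)
{
	/* ... send POSTCHANGE ... */
	example_ongoing = false;
	wake_up(&example_wait);
}

A mutex cannot express this hand-off because it must be unlocked by the
task that locked it; the flag plus the wait queue carry the "flow" from
one task to the other instead.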
On 03/19/2014 08:18 PM, Srivatsa S. Bhat wrote:
> On 03/19/2014 07:05 PM, Viresh Kumar wrote:
>> On 19 March 2014 17:45, Srivatsa S. Bhat
>> <srivatsa.bhat@linux.vnet.ibm.com> wrote:
>>> diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
>>> +	bool			transition_ongoing; /* Tracks transition status */
>>> +	struct mutex		transition_lock;
>>> +	wait_queue_head_t	transition_wait;
>>
>> Similar to what I have done in my last version, why do you need
>> transition_ongoing and transition_wait? Simply work with
>> transition_lock? i.e. Acquire it for the complete transition sequence.
>>
>
> We *can't* acquire it for the complete transition sequence in case of
> drivers that do asynchronous notification, because PRECHANGE is done in
> one thread and POSTCHANGE is done in a totally different thread! You
> can't acquire a lock in one task and release it in a different task.
> That would be a fundamental violation of locking.
>
> That's why I introduced the wait queue to help us create a "flow" which
> encompasses 2 different, but co-ordinating tasks. You simply can't do
> that elegantly by using plain locks alone.
>

By the way, note the updated changelog in my patch. It includes a brief
overview of the synchronization design, which is copy-pasted below for
reference. I forgot to mention this earlier!

-----

This patch introduces a set of synchronization primitives to serialize
frequency transitions, which are to be used as shown below:

cpufreq_freq_transition_begin();

//Perform the frequency change

cpufreq_freq_transition_end();

The _begin() call sends the PRECHANGE notification whereas the _end() call
sends the POSTCHANGE notification. Also, all the necessary synchronization
is handled within these calls. In particular, even drivers which set the
ASYNC_NOTIFICATION flag can also use these APIs for performing frequency
transitions (i.e., you can call _begin() from one task, and call the
corresponding _end() from a different task).

The actual synchronization underneath is not that complicated:

The key challenge is to allow drivers to begin the transition from one
thread and end it in a completely different thread (this is to enable
drivers that do asynchronous POSTCHANGE notification from bottom-halves,
to also use the same interface).

To achieve this, a 'transition_ongoing' flag, a 'transition_lock' mutex
and a wait-queue are added per-policy. The flag and the wait-queue are
used in conjunction to create an "uninterrupted flow" from _begin() to
_end(). The mutex-lock is used to ensure that only one such "flow" is in
flight at any given time. Put together, this provides us all the
necessary synchronization.

Regards,
Srivatsa S. Bhat
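[For illustration, a hypothetical driver with asynchronous POSTCHANGE
notification might pair the two calls across contexts as sketched below,
following the template in the changelog above. Every "foo_" name is
invented; the signatures match the RFC patch quoted further down in the
thread, whose implementation does not actually use the third 'state'
argument, so the notification constants are merely passed through here.]

#include <linux/cpufreq.h>

/* Hypothetical: filled in at probe time */
static struct cpufreq_frequency_table *foo_freq_table;

/* Hypothetical: programs the hardware, completion arrives asynchronously */
static int foo_start_freq_change(struct cpufreq_policy *policy,
				 struct cpufreq_freqs *freqs);

static int foo_target_index(struct cpufreq_policy *policy, unsigned int index)
{
	struct cpufreq_freqs freqs = {
		.old = policy->cur,
		.new = foo_freq_table[index].frequency,
	};

	/* Sends PRECHANGE and marks the transition as ongoing */
	cpufreq_freq_transition_begin(policy, &freqs, CPUFREQ_PRECHANGE);

	/* Kick off the change in hardware; we return before it completes */
	return foo_start_freq_change(policy, &freqs);
}

/* Completion handler: may run in a different task, e.g. a workqueue */
static void foo_freq_change_done(struct cpufreq_policy *policy,
				 struct cpufreq_freqs *freqs)
{
	/* Sends POSTCHANGE and wakes up any waiting _begin() caller */
	cpufreq_freq_transition_end(policy, freqs, CPUFREQ_POSTCHANGE);
}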
On 19 March 2014 17:45, Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> wrote:
> diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
> index 199b52b..e90388f 100644
> --- a/drivers/cpufreq/cpufreq.c
> +++ b/drivers/cpufreq/cpufreq.c
> @@ -349,6 +349,38 @@ void cpufreq_notify_post_transition(struct cpufreq_policy *policy,
>  EXPORT_SYMBOL_GPL(cpufreq_notify_post_transition);
>
>
> +void cpufreq_freq_transition_begin(struct cpufreq_policy *policy,
> +		struct cpufreq_freqs *freqs, unsigned int state)
> +{
> +wait:
> +	wait_event(&policy->transition_wait, !policy->transition_ongoing);

I think it's broken here. At this point another thread can come in, take
the lock, update transition_ongoing, send the notification and finally
unlock. And after that we can take the lock and send another
notification. Correct?

> +	if (!mutex_trylock(&policy->transition_lock))
> +		goto wait;
> +
> +	policy->transition_ongoing++;

s/++/ = true

> +	cpufreq_notify_transition(policy, freqs, CPUFREQ_PRECHANGE);
> +
> +	mutex_unlock(&policy->transition_lock);

We can release the lock before sending the notification; it's there just
to protect transition_ongoing.
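[One way to close the window Viresh describes, sketched against the
structures in this RFC rather than any final code, is to re-check the
flag after taking the lock and go back to waiting if another transition
sneaked in between the wait_event() and the mutex_lock(); the PRECHANGE
notification can then be sent outside the lock, as suggested. The unused
'state' parameter is kept as in the RFC.]

void cpufreq_freq_transition_begin(struct cpufreq_policy *policy,
		struct cpufreq_freqs *freqs, unsigned int state)
{
wait:
	wait_event(policy->transition_wait, !policy->transition_ongoing);

	mutex_lock(&policy->transition_lock);

	/*
	 * Re-check under the lock: another thread may have begun a
	 * transition between the wait_event() and the mutex_lock() above.
	 */
	if (policy->transition_ongoing) {
		mutex_unlock(&policy->transition_lock);
		goto wait;
	}

	policy->transition_ongoing = true;
	mutex_unlock(&policy->transition_lock);

	/* The lock only protects transition_ongoing, so notify outside it */
	cpufreq_notify_transition(policy, freqs, CPUFREQ_PRECHANGE);
}

There is no lost-wakeup problem in the retry path, because wait_event()
checks the condition before sleeping.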
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 199b52b..e90388f 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -349,6 +349,38 @@ void cpufreq_notify_post_transition(struct cpufreq_policy *policy,
 EXPORT_SYMBOL_GPL(cpufreq_notify_post_transition);
 
 
+void cpufreq_freq_transition_begin(struct cpufreq_policy *policy,
+		struct cpufreq_freqs *freqs, unsigned int state)
+{
+wait:
+	wait_event(&policy->transition_wait, !policy->transition_ongoing);
+
+	if (!mutex_trylock(&policy->transition_lock))
+		goto wait;
+
+	policy->transition_ongoing++;
+
+	cpufreq_notify_transition(policy, freqs, CPUFREQ_PRECHANGE);
+
+	mutex_unlock(&policy->transition_lock);
+}
+
+void cpufreq_freq_transition_end(struct cpufreq_policy *policy,
+		struct cpufreq_freqs *freqs, unsigned int state)
+{
+	cpufreq_notify_transition(policy, freqs, CPUFREQ_POSTCHANGE);
+
+	/*
+	 * We don't need to take any locks for this update, since only
+	 * one POSTCHANGE notification can be pending at any time, and
+	 * at the moment, that's us :-)
+	 */
+	policy->transition_ongoing = false;
+
+	wake_up(&policy->transition_wait);
+}
+
+
 /*********************************************************************
  *                          SYSFS INTERFACE                          *
  *********************************************************************/
@@ -968,6 +1000,8 @@ static struct cpufreq_policy *cpufreq_policy_alloc(void)
 	INIT_LIST_HEAD(&policy->policy_list);
 	init_rwsem(&policy->rwsem);
 
+	mutex_init(&policy->transition_lock);
+	init_waitqueue_head(&policy->transition_wait);
 
 	return policy;
 
diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
index 4d89e0e..8bded24 100644
--- a/include/linux/cpufreq.h
+++ b/include/linux/cpufreq.h
@@ -101,6 +101,11 @@ struct cpufreq_policy {
 	 *     __cpufreq_governor(data, CPUFREQ_GOV_POLICY_EXIT);
 	 */
 	struct rw_semaphore	rwsem;
+
+	/* Synchronization for frequency transitions */
+	bool			transition_ongoing; /* Tracks transition status */
+	struct mutex		transition_lock;
+	wait_queue_head_t	transition_wait;
 };
 
 /* Only for ACPI */