
tick, broadcast: Prevent false alarm when force mask contains offline cpus

Message ID 20140326035648.21736.85740.stgit@preeti.in.ibm.com (mailing list archive)
State Not Applicable, archived

Commit Message

preeti March 26, 2014, 3:56 a.m. UTC
It's possible that tick_broadcast_force_mask contains cpus which are not
in cpu_online_mask when a broadcast tick occurs. This could happen under the
following circumstance, assuming CPU1 is among the CPUs waiting for broadcast.

CPU0					CPU1

Run CPU_DOWN_PREPARE notifiers

Start stop_machine			Gets woken up by IPI to run
					stop_machine, sets itself in
					tick_broadcast_force_mask if the
					time of broadcast interrupt is around
					the same time as this IPI.

					Start stop_machine
					  set_cpu_online(cpu1, false)
End stop_machine			End stop_machine

Broadcast interrupt
  Finds that cpu1 in
  tick_broadcast_force_mask is offline
  and triggers the WARN_ON in
  tick_handle_oneshot_broadcast()

Clears all broadcast masks
in CPU_DEAD stage.

This WARN_ON was added to capture scenarios where a broadcast mask, be it the
oneshot/pending/force_mask, contains offline cpus whose tick devices have been
removed. But here is a case where we trigger the WARN_ON in a valid scenario.

One could argue that the scenario is invalid and ought to be warned against,
because ideally the broadcast masks need to be cleared of the cpus about to
go offline before they are cleared in the online mask, so that we don't hit
these scenarios.

This would mean clearing the masks in the CPU_DOWN_PREPARE stage. But
it is quite possible that this stage itself will fail and cpu hotplug will
not go through. We would then end up in a situation where the cpu has not gone
offline and continues to wait for the broadcast interrupt as before. However,
it has already been cleared from the broadcast masks, so this interrupt will
never be delivered. Hence clearing of the masks is best kept off until we are
sure that the cpu is dead, i.e. in the CPU_DEAD stage.

Hence simply ensure that tick_broadcast_force_mask is a subset of the
online cpus, to take care of rare occurrences such as the above. Moreover this
is not a harmful scenario where the cpu is in the mask but its tick device has
been shut down. The WARN_ON will then continue to capture cases where we could
possibly cause a kernel crash.

Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
---

 kernel/time/tick-broadcast.c |    7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)


--
To unsubscribe from this list: send the line "unsubscribe linux-pm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Srivatsa S. Bhat March 26, 2014, 11:21 a.m. UTC | #1
On 03/26/2014 09:26 AM, Preeti U Murthy wrote:
> It's possible that tick_broadcast_force_mask contains cpus which are not
> in cpu_online_mask when a broadcast tick occurs. This could happen under the
> following circumstance assuming CPU1 is among the CPUs waiting for broadcast.
> 
> CPU0					CPU1
> 
> Run CPU_DOWN_PREPARE notifiers
> 
> Start stop_machine			Gets woken up by IPI to run
> 					stop_machine, sets itself in
> 					tick_broadcast_force_mask if the
> 					time of broadcast interrupt is around
> 					the same time as this IPI.
> 
> 					Start stop_machine
> 					  set_cpu_online(cpu1, false)
> End stop_machine			End stop_machine
> 
> Broadcast interrupt
>   Finds that cpu1 in
>   tick_broadcast_force_mask is offline
>   and triggers the WARN_ON in
>   tick_handle_oneshot_broadcast()
> 
> Clears all broadcast masks
> in CPU_DEAD stage.
> 
> This WARN_ON was added to capture scenarios where the broadcast mask, be it
> oneshot/pending/force_mask contain offline cpus whose tick devices have been
> removed. But here is a case where we trigger the WARN_ON in a valid scenario.
> 
> One could argue that the scenario is invalid and ought to be warned against
> because ideally the broadcast masks need to be cleared of the cpus about to
> go offline before clearing them in the online_mask so that we don't hit these
> scenarios.
> 
> This would mean clearing the masks in CPU_DOWN_PREPARE stage.

Not necessarily. We could clear the mask in the CPU_DYING stage. That way,
offline CPUs will automatically get cleared from the force_mask and hence
the tick-broadcast code will not need to have a special case to deal with
this scenario. What do you think?

Regards,
Srivatsa S. Bhat

> ---
> 
>  kernel/time/tick-broadcast.c |    7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
> index 63c7b2d..30b8731 100644
> --- a/kernel/time/tick-broadcast.c
> +++ b/kernel/time/tick-broadcast.c
> @@ -606,7 +606,12 @@ again:
>  	 */
>  	cpumask_clear_cpu(smp_processor_id(), tick_broadcast_pending_mask);
> 
> -	/* Take care of enforced broadcast requests */
> +	/* Take care of enforced broadcast requests. We could have offline
> + * cpus in the tick_broadcast_force_mask. That's ok, we got the interrupt
> +	 * before we could clear the mask.
> +	 */
> +	cpumask_and(tick_broadcast_force_mask,
> +			tick_broadcast_force_mask, cpu_online_mask);
>  	cpumask_or(tmpmask, tmpmask, tick_broadcast_force_mask);
>  	cpumask_clear(tick_broadcast_force_mask);
> 
> 

preeti March 26, 2014, 1:22 p.m. UTC | #2
On 03/26/2014 04:51 PM, Srivatsa S. Bhat wrote:
> On 03/26/2014 09:26 AM, Preeti U Murthy wrote:
>> It's possible that tick_broadcast_force_mask contains cpus which are not
>> in cpu_online_mask when a broadcast tick occurs. This could happen under the
>> following circumstance assuming CPU1 is among the CPUs waiting for broadcast.
>>
>> CPU0					CPU1
>>
>> Run CPU_DOWN_PREPARE notifiers
>>
>> Start stop_machine			Gets woken up by IPI to run
>> 					stop_machine, sets itself in
>> 					tick_broadcast_force_mask if the
>> 					time of broadcast interrupt is around
>> 					the same time as this IPI.
>>
>> 					Start stop_machine
>> 					  set_cpu_online(cpu1, false)
>> End stop_machine			End stop_machine
>>
>> Broadcast interrupt
>>   Finds that cpu1 in
>>   tick_broadcast_force_mask is offline
>>   and triggers the WARN_ON in
>>   tick_handle_oneshot_broadcast()
>>
>> Clears all broadcast masks
>> in CPU_DEAD stage.
>>
>> This WARN_ON was added to capture scenarios where the broadcast mask, be it
>> oneshot/pending/force_mask contain offline cpus whose tick devices have been
>> removed. But here is a case where we trigger the WARN_ON in a valid scenario.
>>
>> One could argue that the scenario is invalid and ought to be warned against
>> because ideally the broadcast masks need to be cleared of the cpus about to
>> go offline before clearing them in the online_mask so that we don't hit these
>> scenarios.
>>
>> This would mean clearing the masks in CPU_DOWN_PREPARE stage.
> 
> Not necessarily. We could clear the mask in the CPU_DYING stage. That way,
> offline CPUs will automatically get cleared from the force_mask and hence
> the tick-broadcast code will not need to have a special case to deal with
> this scenario. What do you think?

Hmm yeah. Let me confirm this by verifying if we could miss something by
clearing masks in CPU_DYING stage.

Thanks!

Regards
Preeti U Murthy
> 
> Regards,
> Srivatsa S. Bhat
> 
>> ---
>>
>>  kernel/time/tick-broadcast.c |    7 ++++++-
>>  1 file changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
>> index 63c7b2d..30b8731 100644
>> --- a/kernel/time/tick-broadcast.c
>> +++ b/kernel/time/tick-broadcast.c
>> @@ -606,7 +606,12 @@ again:
>>  	 */
>>  	cpumask_clear_cpu(smp_processor_id(), tick_broadcast_pending_mask);
>>
>> -	/* Take care of enforced broadcast requests */
>> +	/* Take care of enforced broadcast requests. We could have offline
>> + * cpus in the tick_broadcast_force_mask. That's ok, we got the interrupt
>> +	 * before we could clear the mask.
>> +	 */
>> +	cpumask_and(tick_broadcast_force_mask,
>> +			tick_broadcast_force_mask, cpu_online_mask);
>>  	cpumask_or(tmpmask, tmpmask, tick_broadcast_force_mask);
>>  	cpumask_clear(tick_broadcast_force_mask);
>>
>>
> 

preeti March 27, 2014, 3:02 a.m. UTC | #3
On 03/26/2014 04:51 PM, Srivatsa S. Bhat wrote:
> On 03/26/2014 09:26 AM, Preeti U Murthy wrote:
>> It's possible that tick_broadcast_force_mask contains cpus which are not
>> in cpu_online_mask when a broadcast tick occurs. This could happen under the
>> following circumstance assuming CPU1 is among the CPUs waiting for broadcast.
>>
>> CPU0					CPU1
>>
>> Run CPU_DOWN_PREPARE notifiers
>>
>> Start stop_machine			Gets woken up by IPI to run
>> 					stop_machine, sets itself in
>> 					tick_broadcast_force_mask if the
>> 					time of broadcast interrupt is around
>> 					the same time as this IPI.
>>
>> 					Start stop_machine
>> 					  set_cpu_online(cpu1, false)
>> End stop_machine			End stop_machine
>>
>> Broadcast interrupt
>>   Finds that cpu1 in
>>   tick_broadcast_force_mask is offline
>>   and triggers the WARN_ON in
>>   tick_handle_oneshot_broadcast()
>>
>> Clears all broadcast masks
>> in CPU_DEAD stage.
>>
>> This WARN_ON was added to capture scenarios where the broadcast mask, be it
>> oneshot/pending/force_mask contain offline cpus whose tick devices have been
>> removed. But here is a case where we trigger the WARN_ON in a valid scenario.
>>
>> One could argue that the scenario is invalid and ought to be warned against
>> because ideally the broadcast masks need to be cleared of the cpus about to
>> go offline before clearing them in the online_mask so that we don't hit these
>> scenarios.
>>
>> This would mean clearing the masks in CPU_DOWN_PREPARE stage.
> 
> Not necessarily. We could clear the mask in the CPU_DYING stage. That way,
> offline CPUs will automatically get cleared from the force_mask and hence
> the tick-broadcast code will not need to have a special case to deal with
> this scenario. What do you think?

Ok, I gave some thought to this. This will not work with the hrtimer mode of
the broadcast framework going in. That feature was added for architectures
which do not have an external clock device to wake CPUs up in deep idle states
when the local timers stop. They assign one of the CPUs as an agent to wake
the others up. When this designated CPU gets hotplugged out, we need to assign
this duty to some other CPU.

The way this is being done now is in
tick_shutdown_broadcast_oneshot_control() which is also responsible for
clearing the broadcast masks. When the hrtimer mode of broadcast is
active, then in addition to clearing masks in this function we make the
CPU executing this function take on the task of waking up CPUs in deep
idle state if the hotplugged CPU was doing this earlier.

Currently tick_shutdown_broadcast_oneshot_control() is executed in the
CPU_DEAD notification and is guaranteed to run on a CPU *other than* the
dying CPU. Hence we can safely do this.

However if we move this function underneath the CPU_DYING notifier, this
will turn out to be a disaster since, IIUC, the dying CPU runs this
notifier and will end up re-assigning the duty of waking up CPUs to itself.

Does this make sense?

Regards
Preeti U Murthy
> 
> Regards,
> Srivatsa S. Bhat
> 
>> ---
>>
>>  kernel/time/tick-broadcast.c |    7 ++++++-
>>  1 file changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
>> index 63c7b2d..30b8731 100644
>> --- a/kernel/time/tick-broadcast.c
>> +++ b/kernel/time/tick-broadcast.c
>> @@ -606,7 +606,12 @@ again:
>>  	 */
>>  	cpumask_clear_cpu(smp_processor_id(), tick_broadcast_pending_mask);
>>
>> -	/* Take care of enforced broadcast requests */
>> +	/* Take care of enforced broadcast requests. We could have offline
>> + * cpus in the tick_broadcast_force_mask. That's ok, we got the interrupt
>> +	 * before we could clear the mask.
>> +	 */
>> +	cpumask_and(tick_broadcast_force_mask,
>> +			tick_broadcast_force_mask, cpu_online_mask);
>>  	cpumask_or(tmpmask, tmpmask, tick_broadcast_force_mask);
>>  	cpumask_clear(tick_broadcast_force_mask);
>>
>>
> 

Srivatsa S. Bhat March 27, 2014, 6:28 a.m. UTC | #4
On 03/27/2014 08:32 AM, Preeti U Murthy wrote:
> On 03/26/2014 04:51 PM, Srivatsa S. Bhat wrote:
>> On 03/26/2014 09:26 AM, Preeti U Murthy wrote:
>>> It's possible that tick_broadcast_force_mask contains cpus which are not
>>> in cpu_online_mask when a broadcast tick occurs. This could happen under the
>>> following circumstance assuming CPU1 is among the CPUs waiting for broadcast.
>>>
>>> CPU0					CPU1
>>>
>>> Run CPU_DOWN_PREPARE notifiers
>>>
>>> Start stop_machine			Gets woken up by IPI to run
>>> 					stop_machine, sets itself in
>>> 					tick_broadcast_force_mask if the
>>> 					time of broadcast interrupt is around
>>> 					the same time as this IPI.
>>>
>>> 					Start stop_machine
>>> 					  set_cpu_online(cpu1, false)
>>> End stop_machine			End stop_machine
>>>
>>> Broadcast interrupt
>>>   Finds that cpu1 in
>>>   tick_broadcast_force_mask is offline
>>>   and triggers the WARN_ON in
>>>   tick_handle_oneshot_broadcast()
>>>
>>> Clears all broadcast masks
>>> in CPU_DEAD stage.
>>>
>>> This WARN_ON was added to capture scenarios where the broadcast mask, be it
>>> oneshot/pending/force_mask contain offline cpus whose tick devices have been
>>> removed. But here is a case where we trigger the WARN_ON in a valid scenario.
>>>
>>> One could argue that the scenario is invalid and ought to be warned against
>>> because ideally the broadcast masks need to be cleared of the cpus about to
>>> go offline before clearing them in the online_mask so that we don't hit these
>>> scenarios.
>>>
>>> This would mean clearing the masks in CPU_DOWN_PREPARE stage.
>>
>> Not necessarily. We could clear the mask in the CPU_DYING stage. That way,
>> offline CPUs will automatically get cleared from the force_mask and hence
>> the tick-broadcast code will not need to have a special case to deal with
>> this scenario. What do you think?
> 
> Ok I gave some thought to this. This will not work with the hrtimer mode
> of broadcast framework going in. This is the feature that was added for
> implementations of such archs which do not have an external clock device
> to wake them up in deep idle states when the local timers stop. They
> assign one of the CPUs as an agent to wake them up. When this designated
> CPU gets hotplugged out, we need to assign this duty to some other CPU.
> 
> The way this is being done now is in
> tick_shutdown_broadcast_oneshot_control() which is also responsible for
> clearing the broadcast masks. When the hrtimer mode of broadcast is
> active, then in addition to clearing masks in this function we make the
> CPU executing this function take on the task of waking up CPUs in deep
> idle state if the hotplugged CPU was doing this earlier.
> 
> Currently tick_shutdown_broadcast_oneshot_control() is being executed in
> the CPU_DEAD notification and this is guaranteed to run on a CPU *other
> than* the dying CPU. Hence we can safely do this.
> 
> However if we move this function underneath CPU_DYING notifier, this
> will turn out to be a disaster since IIUC the dying CPU is running this
> notifier and will end up re-assigning the duty of waking up CPUs to itself.
> 

Actually, my suggestion was to remove the dying CPU from the force_mask alone,
in the CPU_DYING notifier. The rest of the cleanup (removing it from the other
masks, moving the broadcast duty to someone else, etc.) can still be done at
the CPU_DEAD stage. Also, note that the CPU which is set in force_mask is
definitely not the one doing the broadcast.

Basically, my reasoning was this:

If we look at how the 3 broadcast masks (oneshot, pending and force) are
set and cleared during idle entry/exit, we see this pattern:

oneshot_mask: This is set at BROADCAST_ENTER and cleared at EXIT.
pending_mask: This is set at tick_handle_oneshot_broadcast and cleared at
              EXIT.
force_mask:   This is set at EXIT and cleared at the next call to
              tick_handle_oneshot_broadcast. (Also, if the CPU is set in this
              mask, the CPU doesn't enter deep idle states in subsequent
              idle durations, and keeps polling instead, until it gets the
              broadcast interrupt).

What we can derive from this is that force_mask is the only mask that can
remain set across an idle ENTER/EXIT sequence. Both of the other 2 masks
can never remain set across a full idle ENTER/EXIT sequence. And a CPU going
offline certainly goes through EXIT if it had gone through ENTER, before
entering stop_machine().

That means, force_mask is the only odd one out here, which can remain set
when entering stop_machine() for CPU offline. So that's the only mask that
needs to be cleared separately. The other 2 masks take care of themselves
automatically. So, we can have a CPU_DYING callback which just clears the
dying CPU from the force_mask (and does nothing more). That should work, no?

Regards,
Srivatsa S. Bhat


Patch

diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
index 63c7b2d..30b8731 100644
--- a/kernel/time/tick-broadcast.c
+++ b/kernel/time/tick-broadcast.c
@@ -606,7 +606,12 @@  again:
 	 */
 	cpumask_clear_cpu(smp_processor_id(), tick_broadcast_pending_mask);
 
-	/* Take care of enforced broadcast requests */
+	/* Take care of enforced broadcast requests. We could have offline
+ * cpus in the tick_broadcast_force_mask. That's ok, we got the interrupt
+	 * before we could clear the mask.
+	 */
+	cpumask_and(tick_broadcast_force_mask,
+			tick_broadcast_force_mask, cpu_online_mask);
 	cpumask_or(tmpmask, tmpmask, tick_broadcast_force_mask);
 	cpumask_clear(tick_broadcast_force_mask);