
[2/2] mm: lru_cache_disable: replace work queue synchronization with synchronize_rcu

Message ID 20220222144907.056089321@redhat.com (mailing list archive)
State New
Series: replace work queue synchronization with synchronize_rcu

Commit Message

Marcelo Tosatti Feb. 22, 2022, 2:47 p.m. UTC
On systems that run FIFO:1 applications busy looping
on isolated CPUs, executing other tasks on those CPUs
at a lower priority is undesired, since it will either
hang the system or cause a longer interruption of the
FIFO task, due to the lower priority task running with
very small sched slices.


Commit d479960e44f27e0e52ba31b21740b703c538027c ("mm: disable LRU
pagevec during the migration temporarily") relies on
queueing work items on all online CPUs to ensure visibility
of lru_disable_count.

However, it is possible to use synchronize_rcu(), which provides the same
guarantees:

    * synchronize_rcu() waits for preemption-disabled sections
    * and RCU read-side critical sections.
    * For the users of lru_disable_count:
    *
    * preempt_disable, local_irq_disable() [bh_lru_lock()]
    * rcu_read_lock                        [lru_pvecs CONFIG_PREEMPT_RT]
    * preempt_disable                      [lru_pvecs !CONFIG_PREEMPT_RT]
    *
    * so any calls of lru_cache_disabled() wrapped by
    * local_lock+rcu_read_lock or preemption disabled would be
    * ordered by that.
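
To illustrate the argument, a minimal sketch of the pattern (hypothetical
names, not the mm/swap.c code: disable_cache_sketch(), add_to_cache_sketch()
and disable_count stand in for lru_cache_disable(), the per-CPU pagevec
users and lru_disable_count). The writer increments the counter and then
waits one grace period, so every reader either observes the increment or
has already completed by the time synchronize_rcu() returns:

static atomic_t disable_count = ATOMIC_INIT(0);

/* Writer side: disable caching, then wait out concurrent readers. */
static void disable_cache_sketch(void)
{
        atomic_inc(&disable_count);
        /*
         * synchronize_rcu() returns only after every CPU has left the
         * preemption/irq-disabled region or RCU read-side critical
         * section it was in when the grace period started, so any
         * reader that may have sampled the old count has finished.
         */
        synchronize_rcu();
}

/* Reader side, mirroring the lru_disable_count users listed above. */
static void add_to_cache_sketch(void)
{
        rcu_read_lock();        /* preempt_disable() on !CONFIG_PREEMPT_RT */
        if (!atomic_read(&disable_count)) {
                /* ... batch onto the per-CPU pagevec ... */
        } else {
                /* ... caching disabled, take the slow path ... */
        }
        rcu_read_unlock();
}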


Fixes the following hang, in which __lru_add_drain_all() blocks in
flush_work() because the per-CPU drain work cannot run on a CPU that is
busy looping in a FIFO task:

[ 1873.243925] INFO: task kworker/u160:0:9 blocked for more than 622 seconds.
[ 1873.243927]       Tainted: G          I      --------- ---  5.14.0-31.rt21.31.el9.x86_64 #1
[ 1873.243929] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1873.243929] task:kworker/u160:0  state:D stack:    0 pid:    9 ppid:     2 flags:0x00004000
[ 1873.243932] Workqueue: cpuset_migrate_mm cpuset_migrate_mm_workfn
[ 1873.243936] Call Trace:
[ 1873.243938]  __schedule+0x21b/0x5b0
[ 1873.243941]  schedule+0x43/0xe0
[ 1873.243943]  schedule_timeout+0x14d/0x190
[ 1873.243946]  ? resched_curr+0x20/0xe0
[ 1873.243953]  ? __prepare_to_swait+0x4b/0x70
[ 1873.243958]  wait_for_completion+0x84/0xe0
[ 1873.243962]  __flush_work.isra.0+0x146/0x200
[ 1873.243966]  ? flush_workqueue_prep_pwqs+0x130/0x130
[ 1873.243971]  __lru_add_drain_all+0x158/0x1f0
[ 1873.243978]  do_migrate_pages+0x3d/0x2d0
[ 1873.243985]  ? pick_next_task_fair+0x39/0x3b0
[ 1873.243989]  ? put_prev_task_fair+0x1e/0x30
[ 1873.243992]  ? pick_next_task+0xb30/0xbd0
[ 1873.243995]  ? __tick_nohz_task_switch+0x1e/0x70
[ 1873.244000]  ? raw_spin_rq_unlock+0x18/0x60
[ 1873.244002]  ? finish_task_switch.isra.0+0xc1/0x2d0
[ 1873.244005]  ? __switch_to+0x12f/0x510
[ 1873.244013]  cpuset_migrate_mm_workfn+0x22/0x40
[ 1873.244016]  process_one_work+0x1e0/0x410
[ 1873.244019]  worker_thread+0x50/0x3b0
[ 1873.244022]  ? process_one_work+0x410/0x410
[ 1873.244024]  kthread+0x173/0x190
[ 1873.244027]  ? set_kthread_struct+0x40/0x40
[ 1873.244031]  ret_from_fork+0x1f/0x30

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

Comments

Nicolas Saenz Julienne Feb. 22, 2022, 3:53 p.m. UTC | #1
On Tue, 2022-02-22 at 11:47 -0300, Marcelo Tosatti wrote:
> @@ -918,14 +917,23 @@ atomic_t lru_disable_count = ATOMIC_INIT
>  void lru_cache_disable(void)
>  {
>  	atomic_inc(&lru_disable_count);
> +	synchronize_rcu();
>  #ifdef CONFIG_SMP
>  	/*
> -	 * lru_add_drain_all in the force mode will schedule draining on
> -	 * all online CPUs so any calls of lru_cache_disabled wrapped by
> -	 * local_lock or preemption disabled would be ordered by that.
> -	 * The atomic operation doesn't need to have stronger ordering
> -	 * requirements because that is enforced by the scheduling
> -	 * guarantees.
> +	 * synchronize_rcu() waits for preemption disabled
> +	 * and RCU read side critical sections
> +	 * For the users of lru_disable_count:
> +	 *
> +	 * preempt_disable, local_irq_disable() [bh_lru_lock()]
> +	 * rcu_read_lock			[lru_pvecs CONFIG_PREEMPT_RT]
> +	 * preempt_disable			[lru_pvecs !CONFIG_PREEMPT_RT]
> +	 *
> +	 *
> +	 * so any calls of lru_cache_disabled wrapped by
> +	 * local_lock+rcu_read_lock or preemption disabled would be
> +	 * ordered by that. The atomic operation doesn't need to have
> +	 * stronger ordering requirements because that is enforced
> +	 * by the scheduling guarantees.

"The atomic operation doesn't need to have stronger ordering requirements
because that is enforced by the scheduling guarantees."

This sentence is no longer needed in the new comment.

Regards,

Patch

Index: linux-rt-devel/mm/swap.c
===================================================================
--- linux-rt-devel.orig/mm/swap.c
+++ linux-rt-devel/mm/swap.c
@@ -873,8 +873,7 @@  inline void __lru_add_drain_all(bool for
 	for_each_online_cpu(cpu) {
 		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
 
-		if (force_all_cpus ||
-		    pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
+		if (pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
 		    data_race(pagevec_count(&per_cpu(lru_rotate.pvec, cpu))) ||
 		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_file, cpu)) ||
 		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate, cpu)) ||
@@ -918,14 +917,23 @@  atomic_t lru_disable_count = ATOMIC_INIT
 void lru_cache_disable(void)
 {
 	atomic_inc(&lru_disable_count);
+	synchronize_rcu();
 #ifdef CONFIG_SMP
 	/*
-	 * lru_add_drain_all in the force mode will schedule draining on
-	 * all online CPUs so any calls of lru_cache_disabled wrapped by
-	 * local_lock or preemption disabled would be ordered by that.
-	 * The atomic operation doesn't need to have stronger ordering
-	 * requirements because that is enforced by the scheduling
-	 * guarantees.
+	 * synchronize_rcu() waits for preemption disabled
+	 * and RCU read side critical sections
+	 * For the users of lru_disable_count:
+	 *
+	 * preempt_disable, local_irq_disable() [bh_lru_lock()]
+	 * rcu_read_lock			[lru_pvecs CONFIG_PREEMPT_RT]
+	 * preempt_disable			[lru_pvecs !CONFIG_PREEMPT_RT]
+	 *
+	 *
+	 * so any calls of lru_cache_disabled wrapped by
+	 * local_lock+rcu_read_lock or preemption disabled would be
+	 * ordered by that. The atomic operation doesn't need to have
+	 * stronger ordering requirements because that is enforced
+	 * by the scheduling guarantees.
 	 */
 	__lru_add_drain_all(true);
 #else