[RFC,3/4] workqueue: reap dead pool workqueue on queuing work

Message ID 20191211112229.22652-1-hdanton@sina.com (mailing list archive)
State New, archived
Series workqueue: fix selecting cpu for queuing work and cleanup

Commit Message

Hillf Danton Dec. 11, 2019, 11:22 a.m. UTC
Release rcu lock to reap dead pool workqueue.

Signed-off-by: Hillf Danton <hdanton@sina.com>
---

Comments

Daniel Jordan Dec. 11, 2019, 11:25 p.m. UTC | #1
On Wed, Dec 11, 2019 at 07:22:29PM +0800, Hillf Danton wrote:
> Release rcu lock to reap dead pool workqueue.

What's to be gained by reaping the pwq (and possibly worker pool and wq) before
__queue_work() retries?  It'll just happen after the queueing finishes.
Hillf Danton Dec. 12, 2019, 2:28 a.m. UTC | #2
On Wed, 11 Dec 2019 18:25:04 -0500 Daniel Jordan wrote:
> On Wed, Dec 11, 2019 at 07:22:29PM +0800, Hillf Danton wrote:
> > Release rcu lock to reap dead pool workqueue.
> 
> What's to be gained by reaping the pwq (and possibly worker pool and wq) before
> __queue_work() retries?  It'll just happen after the queueing finishes.

Releasing the RCU read lock just indicates that the dead pwq no longer
makes sense on the local cpu and may go now; AFAICS the local queuing
work is not affected, because irqs are disabled. It is hard to say how
the pwq will be reclaimed on other cpus, say before this queuing ends,
but that does not matter as far as the local queuing is concerned.

Hillf

Patch

--- d/kernel/workqueue.c
+++ e/kernel/workqueue.c
@@ -1409,9 +1409,9 @@  static void __queue_work(int cpu, struct
 	if (unlikely(wq->flags & __WQ_DRAINING) &&
 	    WARN_ON_ONCE(!is_chained_work(wq)))
 		return;
-
-	rcu_read_lock();
 retry:
+	rcu_read_lock();
+
 	/* pwq which will be used unless @work is executing elsewhere */
 	if (wq->flags & WQ_UNBOUND) {
 		if (req_cpu == WORK_CPU_UNBOUND)
@@ -1458,6 +1458,7 @@  retry:
 	if (unlikely(!pwq->refcnt)) {
 		if (wq->flags & WQ_UNBOUND) {
 			spin_unlock(&pwq->pool->lock);
+			rcu_read_unlock();
 			cpu_relax();
 			goto retry;
 		}