[RFC,1/4] workqueue: fix selecting cpu for queuing work

Message ID 20191211105919.10652-1-hdanton@sina.com (mailing list archive)
State New, archived
Series: workqueue: fix selecting cpu for queuing work and cleanup

Commit Message

Hillf Danton Dec. 11, 2019, 10:59 a.m. UTC
Round-robin CPU selection is needed only for unbound workqueues, and
wq_unbound_cpumask has nothing to do with standard (per-CPU) workqueues,
so when req_cpu is WORK_CPU_UNBOUND the CPU has to be selected with the
workqueue type taken into account.

Fixes: ef557180447f ("workqueue: schedule WORK_CPU_UNBOUND work on wq_unbound_cpumask CPUs")
Signed-off-by: Hillf Danton <hdanton@sina.com>
---
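
For context on why this matters: wq_select_unbound_cpu() keeps the caller's
CPU only when wq_unbound_cpumask allows it, and otherwise round-robins over
that mask. A simplified sketch of its logic (not the verbatim kernel code;
the debug force-round-robin knob is omitted):

static int wq_select_unbound_cpu(int cpu)
{
	int new_cpu;

	/* Keep the local CPU if the unbound cpumask allows it. */
	if (cpumask_test_cpu(cpu, wq_unbound_cpumask))
		return cpu;

	/* An empty mask means no restriction; keep the local CPU. */
	if (cpumask_empty(wq_unbound_cpumask))
		return cpu;

	/* Otherwise round-robin over the CPUs allowed for unbound work. */
	new_cpu = __this_cpu_read(wq_rr_cpu_last);
	new_cpu = cpumask_next_and(new_cpu, wq_unbound_cpumask, cpu_online_mask);
	if (new_cpu >= nr_cpu_ids) {
		new_cpu = cpumask_first_and(wq_unbound_cpumask, cpu_online_mask);
		if (new_cpu >= nr_cpu_ids)
			return cpu;
	}
	__this_cpu_write(wq_rr_cpu_last, new_cpu);

	return new_cpu;
}

For a standard per-CPU workqueue this can return a CPU other than the local
one whenever the local CPU is excluded from wq_unbound_cpumask, which is the
case the patch below fixes.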

Comments

Daniel Jordan Dec. 11, 2019, 11:07 p.m. UTC | #1
[please cc maintainers]

On Wed, Dec 11, 2019 at 06:59:19PM +0800, Hillf Danton wrote:
> Round-robin CPU selection is needed only for unbound workqueues, and
> wq_unbound_cpumask has nothing to do with standard (per-CPU) workqueues,
> so when req_cpu is WORK_CPU_UNBOUND the CPU has to be selected with the
> workqueue type taken into account.

Good catch.  I'd include something like this in the changelog.

  Otherwise, work queued on a bound workqueue with WORK_CPU_UNBOUND might
  not prefer the local CPU if wq_unbound_cpumask is non-empty and doesn't
  include that CPU.

With that you can add

Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Daniel Jordan Jan. 23, 2020, 10:37 p.m. UTC | #2
On Wed, Dec 11, 2019 at 06:07:35PM -0500, Daniel Jordan wrote:
> [please cc maintainers]
> 
> On Wed, Dec 11, 2019 at 06:59:19PM +0800, Hillf Danton wrote:
> > Round-robin CPU selection is needed only for unbound workqueues, and
> > wq_unbound_cpumask has nothing to do with standard (per-CPU) workqueues,
> > so when req_cpu is WORK_CPU_UNBOUND the CPU has to be selected with the
> > workqueue type taken into account.
> 
> Good catch.  I'd include something like this in the changelog.
> 
>   Otherwise, work queued on a bound workqueue with WORK_CPU_UNBOUND might
>   not prefer the local CPU if wq_unbound_cpumask is non-empty and doesn't
>   include that CPU.
> 
> With that you can add
> 
> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>

Any plans to repost this patch, Hillf?  If not, I can do it while retaining
your authorship.

Adding back the context, which I forgot to keep when adding the maintainers.

> > Fixes: ef557180447f ("workqueue: schedule WORK_CPU_UNBOUND work on wq_unbound_cpumask CPUs")
> > Signed-off-by: Hillf Danton <hdanton@sina.com>
> > ---
> > 
> > --- a/kernel/workqueue.c
> > +++ c/kernel/workqueue.c
> > @@ -1409,16 +1409,19 @@ static void __queue_work(int cpu, struct
> >  	if (unlikely(wq->flags & __WQ_DRAINING) &&
> >  	    WARN_ON_ONCE(!is_chained_work(wq)))
> >  		return;
> > +
> >  	rcu_read_lock();
> >  retry:
> > -	if (req_cpu == WORK_CPU_UNBOUND)
> > -		cpu = wq_select_unbound_cpu(raw_smp_processor_id());
> > -
> >  	/* pwq which will be used unless @work is executing elsewhere */
> > -	if (!(wq->flags & WQ_UNBOUND))
> > -		pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
> > -	else
> > +	if (wq->flags & WQ_UNBOUND) {
> > +		if (req_cpu == WORK_CPU_UNBOUND)
> > +			cpu = wq_select_unbound_cpu(raw_smp_processor_id());
> >  		pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));
> > +	} else {
> > +		if (req_cpu == WORK_CPU_UNBOUND)
> > +			cpu = raw_smp_processor_id();
> > +		pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
> > +	}
> >  
> >  	/*
> >  	 * If @work was previously on a different pool, it might still be
> > 
> >
Hillf Danton Jan. 24, 2020, 1:01 a.m. UTC | #3
On Thu, 23 Jan 2020 17:37:43 -0500 Daniel Jordan wrote:
> 
> Any plans to repost this patch, Hillf?  If not, I can do it while retaining
> your authorship.


Please feel free to do it; a Cc is enough.

Thanks
Hillf

Patch

--- a/kernel/workqueue.c
+++ c/kernel/workqueue.c
@@ -1409,16 +1409,19 @@  static void __queue_work(int cpu, struct
 	if (unlikely(wq->flags & __WQ_DRAINING) &&
 	    WARN_ON_ONCE(!is_chained_work(wq)))
 		return;
+
 	rcu_read_lock();
 retry:
-	if (req_cpu == WORK_CPU_UNBOUND)
-		cpu = wq_select_unbound_cpu(raw_smp_processor_id());
-
 	/* pwq which will be used unless @work is executing elsewhere */
-	if (!(wq->flags & WQ_UNBOUND))
-		pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
-	else
+	if (wq->flags & WQ_UNBOUND) {
+		if (req_cpu == WORK_CPU_UNBOUND)
+			cpu = wq_select_unbound_cpu(raw_smp_processor_id());
 		pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));
+	} else {
+		if (req_cpu == WORK_CPU_UNBOUND)
+			cpu = raw_smp_processor_id();
+		pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
+	}
 
 	/*
 	 * If @work was previously on a different pool, it might still be
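
To make the behavioural change concrete, a hypothetical walk-through of the
rewritten branch (the cpumask contents and the calling CPU below are made up
for illustration):

/*
 * Illustrative scenario: wq_unbound_cpumask = { CPU 2, CPU 3 } and
 * queue_work() is called from CPU 0 on a standard per-CPU workqueue,
 * so req_cpu == WORK_CPU_UNBOUND and WQ_UNBOUND is not set.
 *
 * Before the patch:
 *   cpu = wq_select_unbound_cpu(0);        // CPU 0 is not in the mask, so
 *                                          // round robin picks CPU 2 or 3
 *   pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);  // a remote CPU's pwq
 *
 * After the patch (WQ_UNBOUND not set):
 *   cpu = raw_smp_processor_id();          // CPU 0, the local CPU
 *   pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);  // the local CPU's pwq
 */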