Message ID | Ynp46bc5Rt54skl8@linutronix.de (mailing list archive)
State      | New, archived
Series     | [v2] SUNRPC: Don't disable preemption while calling svc_pool_for_cpu().
> On May 10, 2022, at 10:38 AM, Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
>
> svc_xprt_enqueue() disables preemption via get_cpu() and then asks for
> the pool of a specific CPU (the current one) via svc_pool_for_cpu().
> While preemption is disabled, svc_xprt_enqueue() acquires
> svc_pool::sp_lock with bottom halves disabled, which can sleep on
> PREEMPT_RT.
>
> Disabling preemption is not required here. The pool is protected by a
> lock, so the list access that follows is safe even cross-CPU. The
> subsequent iteration through svc_pool::sp_all_threads runs under the
> RCU read lock, and the remaining operations within the loop are atomic
> and do not rely on disabled preemption.
>
> Use raw_smp_processor_id() as the argument for the requested CPU in
> svc_pool_for_cpu().
>
> Reported-by: Mike Galbraith <umgwanakikbuti@gmail.com>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Thanks. I'll apply this and add a clean-up patch to move the
raw_smp_processor_id() call into svc_pool_for_cpu(). Sorry for the noise!

> ---
> v1…v2:
>   - Reword the first part of the commit message as per Chuck Lever III.
>
>  net/sunrpc/svc_xprt.c | 5 +----
>  1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
> index 5b59e2103526e..79965deec5b12 100644
> --- a/net/sunrpc/svc_xprt.c
> +++ b/net/sunrpc/svc_xprt.c
> @@ -448,7 +448,6 @@ void svc_xprt_enqueue(struct svc_xprt *xprt)
>  {
>  	struct svc_pool *pool;
>  	struct svc_rqst *rqstp = NULL;
> -	int cpu;
>
>  	if (!svc_xprt_ready(xprt))
>  		return;
> @@ -461,8 +460,7 @@ void svc_xprt_enqueue(struct svc_xprt *xprt)
>  	if (test_and_set_bit(XPT_BUSY, &xprt->xpt_flags))
>  		return;
>
> -	cpu = get_cpu();
> -	pool = svc_pool_for_cpu(xprt->xpt_server, cpu);
> +	pool = svc_pool_for_cpu(xprt->xpt_server, raw_smp_processor_id());
>
>  	atomic_long_inc(&pool->sp_stats.packets);
>
> @@ -485,7 +483,6 @@ void svc_xprt_enqueue(struct svc_xprt *xprt)
>  		rqstp = NULL;
>  out_unlock:
>  	rcu_read_unlock();
> -	put_cpu();
>  	trace_svc_xprt_enqueue(xprt, rqstp);
>  }
>  EXPORT_SYMBOL_GPL(svc_xprt_enqueue);
> --
> 2.36.0

--
Chuck Lever
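For context, the follow-up clean-up Chuck mentions would roughly take the
shape below: svc_pool_for_cpu() drops its cpu parameter and derives the CPU
itself, so callers never handle processor IDs. This is a hedged sketch
only; the pool-selection body is deliberately simplified and stands in for
the real per-cpu/per-node mode handling in net/sunrpc/svc.c.

/*
 * Sketch of the clean-up: fold raw_smp_processor_id() into
 * svc_pool_for_cpu() so callers no longer pass a CPU number.
 * Simplified body -- the real function also honours the server's
 * pool mode (per-cpu vs. per-node) via its pool map.
 */
struct svc_pool *svc_pool_for_cpu(struct svc_serv *serv)
{
	unsigned int pidx = 0;

	/*
	 * raw_smp_processor_id() suffices: the CPU number is only a
	 * load-balancing hint. If the task migrates right after this
	 * lookup, the worst case is enqueueing on a remote pool, which
	 * the per-pool lock already makes safe.
	 */
	if (serv->sv_nrpools > 1)
		pidx = raw_smp_processor_id() % serv->sv_nrpools;

	return &serv->sv_pools[pidx];
}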
diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 5b59e2103526e..79965deec5b12 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -448,7 +448,6 @@ void svc_xprt_enqueue(struct svc_xprt *xprt)
 {
 	struct svc_pool *pool;
 	struct svc_rqst *rqstp = NULL;
-	int cpu;

 	if (!svc_xprt_ready(xprt))
 		return;
@@ -461,8 +460,7 @@ void svc_xprt_enqueue(struct svc_xprt *xprt)
 	if (test_and_set_bit(XPT_BUSY, &xprt->xpt_flags))
 		return;

-	cpu = get_cpu();
-	pool = svc_pool_for_cpu(xprt->xpt_server, cpu);
+	pool = svc_pool_for_cpu(xprt->xpt_server, raw_smp_processor_id());

 	atomic_long_inc(&pool->sp_stats.packets);

@@ -485,7 +483,6 @@ void svc_xprt_enqueue(struct svc_xprt *xprt)
 		rqstp = NULL;
 out_unlock:
 	rcu_read_unlock();
-	put_cpu();
 	trace_svc_xprt_enqueue(xprt, rqstp);
 }
 EXPORT_SYMBOL_GPL(svc_xprt_enqueue);
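To make the PREEMPT_RT hazard concrete: on an RT kernel, spin_lock_bh()
maps to a sleeping rtmutex-based lock, so taking it inside a
get_cpu()/put_cpu() pair (a preempt-disabled region) trips the "sleeping
function called from invalid context" check. The before/after shapes are
illustrated below; enqueue_old() and enqueue_new() are hypothetical names,
not functions from the patch.

/* Illustration only: the pattern the patch removes vs. what replaces it. */
static void enqueue_old(struct svc_serv *serv)
{
	struct svc_pool *pool;
	int cpu;

	cpu = get_cpu();		/* preemption disabled from here...  */
	pool = svc_pool_for_cpu(serv, cpu);
	spin_lock_bh(&pool->sp_lock);	/* sleeping lock on RT: invalid in a
					 * preempt-disabled section          */
	spin_unlock_bh(&pool->sp_lock);
	put_cpu();			/* ...to here                        */
}

static void enqueue_new(struct svc_serv *serv)
{
	struct svc_pool *pool;

	/*
	 * The CPU number is only a pool-selection hint; a stale value
	 * after migration is harmless because every pool is protected
	 * by its own lock and may be accessed cross-CPU.
	 */
	pool = svc_pool_for_cpu(serv, raw_smp_processor_id());
	spin_lock_bh(&pool->sp_lock);	/* fine: preemption stays enabled    */
	spin_unlock_bh(&pool->sp_lock);
}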
svc_xprt_enqueue() disables preemption via get_cpu() and then asks for
the pool of a specific CPU (the current one) via svc_pool_for_cpu().
While preemption is disabled, svc_xprt_enqueue() acquires
svc_pool::sp_lock with bottom halves disabled, which can sleep on
PREEMPT_RT.

Disabling preemption is not required here. The pool is protected by a
lock, so the list access that follows is safe even cross-CPU. The
subsequent iteration through svc_pool::sp_all_threads runs under the RCU
read lock, and the remaining operations within the loop are atomic and
do not rely on disabled preemption.

Use raw_smp_processor_id() as the argument for the requested CPU in
svc_pool_for_cpu().

Reported-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
v1…v2:
  - Reword the first part of the commit message as per Chuck Lever III.

 net/sunrpc/svc_xprt.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)
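The commit message's argument rests on the shape of the code that follows
the pool lookup. Paraphrased (not copied verbatim) from svc_xprt_enqueue()
in net/sunrpc/svc_xprt.c at the time of this patch, it looks roughly like
this -- everything runs either under the pool's own lock or under the RCU
read lock, and the per-thread operations are atomic bitops, so none of it
needs the caller pinned to one CPU:

	/* Pool list access: serialized by sp_lock, safe cross-CPU. */
	spin_lock_bh(&pool->sp_lock);
	list_add_tail(&xprt->xpt_ready, &pool->sp_sockets);
	spin_unlock_bh(&pool->sp_lock);

	/* Find an idle thread for this transport and wake it. */
	rcu_read_lock();
	list_for_each_entry_rcu(rqstp, &pool->sp_all_threads, rq_all) {
		/* atomic test-and-set; correct on any CPU */
		if (test_and_set_bit(RQ_BUSY, &rqstp->rq_flags))
			continue;
		wake_up_process(rqstp->rq_task);
		goto out_unlock;
	}
	rqstp = NULL;
out_unlock:
	rcu_read_unlock();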