| Message ID | 9fc554880eeb0bc9d1749d9577e3aa058eb9f61c.1669312450.git.lucien.xin@gmail.com (mailing list archive) |
|---|---|
| State | Awaiting Upstream |
| Delegated to | Netdev Maintainers |
| Series | [nf] netfilter: fix using __this_cpu_add in preemptible in nf_flow_table_offload |
On Thu, Nov 24, 2022 at 12:54:10PM -0500, Xin Long wrote:
> flow_offload_queue_work() can be called from a workqueue without bh
> disabled; as the call trace from my act_ct testing shows, calling
> NF_FLOW_TABLE_STAT_INC() there triggers:
>
>   BUG: using __this_cpu_add() in preemptible [00000000] code: kworker/u4:0/138560
>   caller is flow_offload_queue_work+0xec/0x1b0 [nf_flow_table]
>   Workqueue: act_ct_workqueue tcf_ct_flow_table_cleanup_work [act_ct]
>   Call Trace:
>    <TASK>
>    dump_stack_lvl+0x33/0x46
>    check_preemption_disabled+0xc3/0xf0
>    flow_offload_queue_work+0xec/0x1b0 [nf_flow_table]
>    nf_flow_table_iterate+0x138/0x170 [nf_flow_table]
>    nf_flow_table_free+0x140/0x1a0 [nf_flow_table]
>    tcf_ct_flow_table_cleanup_work+0x2f/0x2b0 [act_ct]
>    process_one_work+0x6a3/0x1030
>    worker_thread+0x8a/0xdf0
>
> This patch fixes it by using NF_FLOW_TABLE_STAT_INC_ATOMIC()
> instead in flow_offload_queue_work().
>
> Note that the FLOW_CLS_REPLACE branch in flow_offload_queue_work()
> may not be reached from a preemptible path, but it is cleaner to use
> NF_FLOW_TABLE_STAT_INC_ATOMIC() for all cases in
> flow_offload_queue_work().

Applied, thanks
```diff
diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c
index 00b522890d77..0fdcdb2c9ae4 100644
--- a/net/netfilter/nf_flow_table_offload.c
+++ b/net/netfilter/nf_flow_table_offload.c
@@ -997,13 +997,13 @@ static void flow_offload_queue_work(struct flow_offload_work *offload)
 	struct net *net = read_pnet(&offload->flowtable->net);
 
 	if (offload->cmd == FLOW_CLS_REPLACE) {
-		NF_FLOW_TABLE_STAT_INC(net, count_wq_add);
+		NF_FLOW_TABLE_STAT_INC_ATOMIC(net, count_wq_add);
 		queue_work(nf_flow_offload_add_wq, &offload->work);
 	} else if (offload->cmd == FLOW_CLS_DESTROY) {
-		NF_FLOW_TABLE_STAT_INC(net, count_wq_del);
+		NF_FLOW_TABLE_STAT_INC_ATOMIC(net, count_wq_del);
 		queue_work(nf_flow_offload_del_wq, &offload->work);
 	} else {
-		NF_FLOW_TABLE_STAT_INC(net, count_wq_stats);
+		NF_FLOW_TABLE_STAT_INC_ATOMIC(net, count_wq_stats);
 		queue_work(nf_flow_offload_stats_wq, &offload->work);
 	}
 }
```
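For context, the two stat helpers differ only in which per-CPU primitive they use. The definitions below are a from-memory sketch of include/net/netfilter/nf_flow_table.h (introduced by b038177636f8), not part of this patch, so treat the exact field path as an approximation:

```c
/* Sketch of the flowtable stat helpers (from memory, not part of this
 * patch).  The plain variant uses __this_cpu_inc(), which is only valid
 * when the caller already runs with preemption disabled; the _ATOMIC
 * variant uses this_cpu_inc(), which is safe from preemptible context.
 */
#define NF_FLOW_TABLE_STAT_INC(net, count)		\
	__this_cpu_inc((net)->ft.stat->count)
#define NF_FLOW_TABLE_STAT_INC_ATOMIC(net, count)	\
	this_cpu_inc((net)->ft.stat->count)
```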
flow_offload_queue_work() can be called from a workqueue without bh
disabled; as the call trace from my act_ct testing shows, calling
NF_FLOW_TABLE_STAT_INC() there triggers:

  BUG: using __this_cpu_add() in preemptible [00000000] code: kworker/u4:0/138560
  caller is flow_offload_queue_work+0xec/0x1b0 [nf_flow_table]
  Workqueue: act_ct_workqueue tcf_ct_flow_table_cleanup_work [act_ct]
  Call Trace:
   <TASK>
   dump_stack_lvl+0x33/0x46
   check_preemption_disabled+0xc3/0xf0
   flow_offload_queue_work+0xec/0x1b0 [nf_flow_table]
   nf_flow_table_iterate+0x138/0x170 [nf_flow_table]
   nf_flow_table_free+0x140/0x1a0 [nf_flow_table]
   tcf_ct_flow_table_cleanup_work+0x2f/0x2b0 [act_ct]
   process_one_work+0x6a3/0x1030
   worker_thread+0x8a/0xdf0

This patch fixes it by using NF_FLOW_TABLE_STAT_INC_ATOMIC()
instead in flow_offload_queue_work().

Note that the FLOW_CLS_REPLACE branch in flow_offload_queue_work()
may not be reached from a preemptible path, but it is cleaner to use
NF_FLOW_TABLE_STAT_INC_ATOMIC() for all cases in
flow_offload_queue_work().

Fixes: b038177636f8 ("netfilter: nf_flow_table: count pending offload workqueue tasks")
Signed-off-by: Xin Long <lucien.xin@gmail.com>
---
 net/netfilter/nf_flow_table_offload.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
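To make the failure mode concrete, here is a minimal sketch, not from the patch, using a hypothetical demo_count per-CPU counter. It shows what this_cpu_inc() guarantees that a bare __this_cpu_inc() in a workqueue worker does not:

```c
#include <linux/percpu.h>
#include <linux/preempt.h>

/* Hypothetical stand-in for one of the flowtable counters. */
static DEFINE_PER_CPU(unsigned long, demo_count);

static void demo_inc_from_worker(void)
{
	/*
	 * Open-coded equivalent of this_cpu_inc(demo_count): pinning the
	 * task to the current CPU keeps the read-modify-write of the
	 * per-CPU slot from being split across a migration.
	 */
	preempt_disable();
	__this_cpu_inc(demo_count);
	preempt_enable();

	/*
	 * By contrast, a bare __this_cpu_inc(demo_count) here, in a
	 * preemptible workqueue worker with CONFIG_DEBUG_PREEMPT=y,
	 * produces the "BUG: using __this_cpu_add() in preemptible"
	 * splat quoted above.
	 */
}
```

On x86, this_cpu_inc() compiles down to a single segment-relative increment, so switching the three call sites to the _ATOMIC variants should not add measurable overhead.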