| Message ID | 20211114060222.3370-1-penguin-kernel@I-love.SAKURA.ne.jp |
|---|---|
| State | Superseded |
| Delegated to: | Netdev Maintainers |
| Series | sock: fix /proc/net/sockstat underflow in sk_clone_lock() |
On 11/13/21 10:02 PM, Tetsuo Handa wrote:
> sk_clone_lock() needs to call get_net() and sock_inuse_inc() together, or
> socket_seq_show() will underflow when __sk_free() from sk_free() from
> sk_free_unlock_clone() is called.
>

IMO, a "sock_inuse_get() underflow" is a very different problem,
I suspect this should be fixed with the following patch.

diff --git a/net/core/sock.c b/net/core/sock.c
index c57d9883f62c75f522b7f6bc68451aaf8429dc83..bac8e2b62521301ce897728fff9622c4c05419a3 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -3573,7 +3573,7 @@ int sock_inuse_get(struct net *net)
 	for_each_possible_cpu(cpu)
 		res += *per_cpu_ptr(net->core.sock_inuse, cpu);
 
-	return res;
+	return max(res, 0);
 }
 
 EXPORT_SYMBOL_GPL(sock_inuse_get);

Bug added in commit 648845ab7e200993dccd3948c719c858368c91e7
Author: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Date:   Thu Dec 14 05:51:58 2017 -0800

    sock: Move the socket inuse to namespace.

> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> ---
>  net/core/sock.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/net/core/sock.c b/net/core/sock.c
> index 8f2b2f2c0e7b..41e91d0f7061 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -2124,8 +2124,10 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
>  	newsk->sk_prot_creator = prot;
>
>  	/* SANITY */
> -	if (likely(newsk->sk_net_refcnt))
> +	if (likely(newsk->sk_net_refcnt)) {
>  		get_net(sock_net(newsk));
> +		sock_inuse_add(sock_net(newsk), 1);
> +	}
>  	sk_node_init(&newsk->sk_node);
>  	sock_lock_init(newsk);
>  	bh_lock_sock(newsk);
> @@ -2197,8 +2199,6 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
>  	newsk->sk_err_soft = 0;
>  	newsk->sk_priority = 0;
>  	newsk->sk_incoming_cpu = raw_smp_processor_id();
> -	if (likely(newsk->sk_net_refcnt))
> -		sock_inuse_add(sock_net(newsk), 1);
>
>  	/* Before updating sk_refcnt, we must commit prior changes to memory
>  	 * (Documentation/RCU/rculist_nulls.rst for details)
On 2021/11/15 5:01, Eric Dumazet wrote:
> On 11/13/21 10:02 PM, Tetsuo Handa wrote:
>> sk_clone_lock() needs to call get_net() and sock_inuse_inc() together, or

s/sock_inuse_inc/sock_inuse_add/

>> socket_seq_show() will underflow when __sk_free() from sk_free() from
>> sk_free_unlock_clone() is called.
>>
>
> IMO, a "sock_inuse_get() underflow" is a very different problem,

Yes, a different problem. I found this problem while trying to examine
https://syzkaller.appspot.com/bug?extid=694120e1002c117747ed where
somebody might be failing to call get_net() or we might be failing to
make sure that all timers are synchronously stopped before put_net().

> I suspect this should be fixed with the following patch.

My patch addresses a permanent underflow problem which remains as long as
that namespace exists. Your patch addresses a transient underflow problem
which happens due to calculating the sum without locks. Therefore, we can
apply both patches if we want.

>
> diff --git a/net/core/sock.c b/net/core/sock.c
> index c57d9883f62c75f522b7f6bc68451aaf8429dc83..bac8e2b62521301ce897728fff9622c4c05419a3 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -3573,7 +3573,7 @@ int sock_inuse_get(struct net *net)
>  	for_each_possible_cpu(cpu)
>  		res += *per_cpu_ptr(net->core.sock_inuse, cpu);
> 
> -	return res;
> +	return max(res, 0);
>  }
> 
>  EXPORT_SYMBOL_GPL(sock_inuse_get);
>
>
> Bug added in commit 648845ab7e200993dccd3948c719c858368c91e7
> Author: Tonghao Zhang <xiangxia.m.yue@gmail.com>
> Date:   Thu Dec 14 05:51:58 2017 -0800
>
>     sock: Move the socket inuse to namespace.
>
On 11/14/21 2:43 PM, Tetsuo Handa wrote:
> On 2021/11/15 5:01, Eric Dumazet wrote:
>> On 11/13/21 10:02 PM, Tetsuo Handa wrote:
>>> sk_clone_lock() needs to call get_net() and sock_inuse_inc() together, or
>
> s/sock_inuse_inc/sock_inuse_add/
>
>>> socket_seq_show() will underflow when __sk_free() from sk_free() from
>>> sk_free_unlock_clone() is called.
>>>
>>
>> IMO, a "sock_inuse_get() underflow" is a very different problem,
>
> Yes, a different problem. I found this problem while trying to examine
> https://syzkaller.appspot.com/bug?extid=694120e1002c117747ed where
> somebody might be failing to call get_net() or we might be failing to
> make sure that all timers are synchronously stopped before put_net().
>
>> I suspect this should be fixed with the following patch.
>
> My patch addresses a permanent underflow problem which remains as long as
> that namespace exists. Your patch addresses a transient underflow problem
> which happens due to calculating the sum without locks. Therefore, we can
> apply both patches if we want.
>

I think your changelog is a bit confusing.

It would really help if you add the Fixes: tag, because if you do:

1) It makes the patch much easier to understand, by reading the old
   patch again.

2) We can tell if we can merge both fixes in the same submission
   (if they share the same bug origin).

Thanks!
diff --git a/net/core/sock.c b/net/core/sock.c
index 8f2b2f2c0e7b..41e91d0f7061 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2124,8 +2124,10 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
 	newsk->sk_prot_creator = prot;
 
 	/* SANITY */
-	if (likely(newsk->sk_net_refcnt))
+	if (likely(newsk->sk_net_refcnt)) {
 		get_net(sock_net(newsk));
+		sock_inuse_add(sock_net(newsk), 1);
+	}
 	sk_node_init(&newsk->sk_node);
 	sock_lock_init(newsk);
 	bh_lock_sock(newsk);
@@ -2197,8 +2199,6 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
 	newsk->sk_err_soft = 0;
 	newsk->sk_priority = 0;
 	newsk->sk_incoming_cpu = raw_smp_processor_id();
-	if (likely(newsk->sk_net_refcnt))
-		sock_inuse_add(sock_net(newsk), 1);
 
 	/* Before updating sk_refcnt, we must commit prior changes to memory
 	 * (Documentation/RCU/rculist_nulls.rst for details)
sk_clone_lock() needs to call get_net() and sock_inuse_inc() together, or
socket_seq_show() will underflow when __sk_free() from sk_free() from
sk_free_unlock_clone() is called.

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
---
 net/core/sock.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)