Message ID: 20211206120216.mgo6qibl5fmzdcrp@linutronix.de (mailing list archive)
State: Changes Requested
Delegated to: Netdev Maintainers
Series: [net,v2] tcp: Don't acquire inet_listen_hashbucket::lock with disabled BH.
On Mon, Dec 06, 2021 at 01:02:16PM +0100, Sebastian Andrzej Siewior wrote:
> Commit
>    9652dc2eb9e40 ("tcp: relax listening_hash operations")
>
> removed the need to disable bottom half while acquiring
> listening_hash.lock. There are still two callers left which disable
> bottom half before the lock is acquired.
>
> On PREEMPT_RT the softirqs are preemptible and local_bh_disable() acts
> as a lock to ensure that resources, that are protected by disabling
> bottom halves, remain protected.
> This leads to a circular locking dependency if the lock acquired with
> disabled bottom halves is also acquired with enabled bottom halves
> followed by disabling bottom halves. This is the reverse locking order.
> It has been observed with inet_listen_hashbucket::lock:
>
> local_bh_disable() + spin_lock(&ilb->lock):
>   inet_listen()
>     inet_csk_listen_start()
>       sk->sk_prot->hash() := inet_hash()
>         local_bh_disable()
>         __inet_hash()
>           spin_lock(&ilb->lock);
>             acquire(&ilb->lock);
>
> Reverse order: spin_lock(&ilb->lock) + local_bh_disable():
>   tcp_seq_next()
>     listening_get_next()
>       spin_lock(&ilb->lock);
The net tree has already been using ilb2 instead of ilb.
It does not change the problem though but updating
the commit log will be useful to avoid future confusion.

iiuc, established_get_next() does not hit this because
it calls spin_lock_bh() which then keeps the
local_bh_disable() => spin_lock() ordering?

The patch lgtm.

Acked-by: Martin KaFai Lau <kafai@fb.com>

>         acquire(&ilb->lock);
>
>   tcp4_seq_show()
>     get_tcp4_sock()
>       sock_i_ino()
>         read_lock_bh(&sk->sk_callback_lock);
>           acquire(softirq_ctrl) // <---- whoops
>           acquire(&sk->sk_callback_lock)
>
> Drop local_bh_disable() around __inet_hash() which acquires
> listening_hash->lock. Split inet_unhash() and acquire the
> listen_hashbucket lock without disabling bottom halves; the inet_ehash
> lock with disabled bottom halves.
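To make the inversion concrete, here is a minimal C sketch (not from the thread) of the two orderings quoted above. On PREEMPT_RT, local_bh_disable() conceptually acquires a per-CPU "softirq_ctrl" lock, so the two paths take the same pair of locks in opposite order; ilb and sk stand in for the bucket and socket from the call chains:

	/* Path 1: the inet_hash() chain -- BH "lock" first, bucket lock second. */
	local_bh_disable();			/* acquire softirq_ctrl */
	spin_lock(&ilb->lock);			/* then ilb->lock */
	/* ... __inet_hash() links sk into the listen bucket ... */
	spin_unlock(&ilb->lock);
	local_bh_enable();

	/* Path 2: the tcp_seq_next() chain -- reverse order. */
	spin_lock(&ilb->lock);			/* ilb->lock first */
	read_lock_bh(&sk->sk_callback_lock);	/* disables BH, i.e. takes
						 * softirq_ctrl *after*
						 * ilb->lock: the inversion */
	/* ... sock_i_ino() reads the socket's inode number ... */
	read_unlock_bh(&sk->sk_callback_lock);
	spin_unlock(&ilb->lock);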
On 2021-12-09 12:06:32 [-0800], Martin KaFai Lau wrote:
> > local_bh_disable() + spin_lock(&ilb->lock):
> >   inet_listen()
> >     inet_csk_listen_start()
> >       sk->sk_prot->hash() := inet_hash()
> >         local_bh_disable()
> >         __inet_hash()
> >           spin_lock(&ilb->lock);
> >             acquire(&ilb->lock);
> >
> > Reverse order: spin_lock(&ilb->lock) + local_bh_disable():
> >   tcp_seq_next()
> >     listening_get_next()
> >       spin_lock(&ilb->lock);
> The net tree has already been using ilb2 instead of ilb.
> It does not change the problem though but updating
> the commit log will be useful to avoid future confusion.

You think so? But having ilb2 and ilb might suggest that these two are
different locks while they are the same. I could repost it early next
week if you think this actually confuses more…

> iiuc, established_get_next() does not hit this because
> it calls spin_lock_bh() which then keeps the
> local_bh_disable() => spin_lock() ordering?

Yes. BH was disabled before the spin_lock was acquired so it is the same
ordering. The important part is not to disable BH after the spin_lock
was acquired and in the reverse order.

> The patch lgtm.
> Acked-by: Martin KaFai Lau <kafai@fb.com>

Sebastian
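For reference, the spin_lock_bh() callers Sebastian mentions are safe because that helper disables BH before taking the lock, i.e. the same ordering as Path 1 in the sketch above. Conceptually:

	/* spin_lock_bh(lock) is, conceptually: */
	local_bh_disable();	/* softirq_ctrl first */
	spin_lock(lock);	/* the spinlock second -- same order as the
				 * hash()/unhash() paths, so no inversion */

	/* and spin_unlock_bh(lock) undoes it in reverse: */
	spin_unlock(lock);
	local_bh_enable();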
On Fri, Dec 10, 2021 at 05:21:13PM +0100, Sebastian Andrzej Siewior wrote:
> On 2021-12-09 12:06:32 [-0800], Martin KaFai Lau wrote:
> > > local_bh_disable() + spin_lock(&ilb->lock):
> > >   inet_listen()
> > >     inet_csk_listen_start()
> > >       sk->sk_prot->hash() := inet_hash()
> > >         local_bh_disable()
> > >         __inet_hash()
> > >           spin_lock(&ilb->lock);
> > >             acquire(&ilb->lock);
> > >
> > > Reverse order: spin_lock(&ilb->lock) + local_bh_disable():
> > >   tcp_seq_next()
> > >     listening_get_next()
> > >       spin_lock(&ilb->lock);
> > The net tree has already been using ilb2 instead of ilb.
> > It does not change the problem though but updating
> > the commit log will be useful to avoid future confusion.
>
> You think so? But having ilb2 and ilb might suggest that these two are
> different locks while they are the same. I could repost it early next
> week if you think this actually confuses more…
Yes, they are different locks.  ilb2->lock is also taken in the
inet_listen() path.  ilb->lock is not even taken in the
listening_get_next() side.
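As background for the ilb/ilb2 distinction: the two buckets live in different fields of struct inet_hashinfo, and each bucket embeds its own spinlock. A simplified sketch of the relevant fields (abridged from include/net/inet_hashtables.h around v5.16; exact layout may differ):

	struct inet_hashinfo {
		/* ... ehash, ehash_locks, bhash ... */

		/* hashed by port and address: ilb2 buckets */
		unsigned int			lhash2_mask;
		struct inet_listen_hashbucket	*lhash2;

		/* hashed by port only: ilb buckets */
		struct inet_listen_hashbucket	listening_hash[INET_LHTABLE_SIZE];
	};

Since each inet_listen_hashbucket carries its own lock, ilb->lock and ilb2->lock are distinct locks, which is Martin's point.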
diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index 75737267746f8..7bd1e10086f0a 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -637,7 +637,9 @@ int __inet_hash(struct sock *sk, struct sock *osk)
 	int err = 0;
 
 	if (sk->sk_state != TCP_LISTEN) {
+		local_bh_disable();
 		inet_ehash_nolisten(sk, osk, NULL);
+		local_bh_enable();
 		return 0;
 	}
 	WARN_ON(!sk_unhashed(sk));
@@ -669,45 +671,54 @@ int inet_hash(struct sock *sk)
 {
 	int err = 0;
 
-	if (sk->sk_state != TCP_CLOSE) {
-		local_bh_disable();
+	if (sk->sk_state != TCP_CLOSE)
 		err = __inet_hash(sk, NULL);
-		local_bh_enable();
-	}
 
 	return err;
 }
 EXPORT_SYMBOL_GPL(inet_hash);
 
-void inet_unhash(struct sock *sk)
+static void __inet_unhash(struct sock *sk, struct inet_listen_hashbucket *ilb)
 {
-	struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
-	struct inet_listen_hashbucket *ilb = NULL;
-	spinlock_t *lock;
-
 	if (sk_unhashed(sk))
 		return;
 
-	if (sk->sk_state == TCP_LISTEN) {
-		ilb = &hashinfo->listening_hash[inet_sk_listen_hashfn(sk)];
-		lock = &ilb->lock;
-	} else {
-		lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
-	}
-	spin_lock_bh(lock);
-	if (sk_unhashed(sk))
-		goto unlock;
-
 	if (rcu_access_pointer(sk->sk_reuseport_cb))
 		reuseport_stop_listen_sock(sk);
 	if (ilb) {
+		struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
+
 		inet_unhash2(hashinfo, sk);
 		ilb->count--;
 	}
 	__sk_nulls_del_node_init_rcu(sk);
 	sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
-unlock:
-	spin_unlock_bh(lock);
+}
+
+void inet_unhash(struct sock *sk)
+{
+	struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
+
+	if (sk_unhashed(sk))
+		return;
+
+	if (sk->sk_state == TCP_LISTEN) {
+		struct inet_listen_hashbucket *ilb;
+
+		ilb = &hashinfo->listening_hash[inet_sk_listen_hashfn(sk)];
+		/* Don't disable bottom halves while acquiring the lock to
+		 * avoid circular locking dependency on PREEMPT_RT.
+		 */
+		spin_lock(&ilb->lock);
+		__inet_unhash(sk, ilb);
+		spin_unlock(&ilb->lock);
+	} else {
+		spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
+
+		spin_lock_bh(lock);
+		__inet_unhash(sk, NULL);
+		spin_unlock_bh(lock);
+	}
 }
 EXPORT_SYMBOL_GPL(inet_unhash);
diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
index 67c9114835c84..0a2e7f2283911 100644
--- a/net/ipv6/inet6_hashtables.c
+++ b/net/ipv6/inet6_hashtables.c
@@ -333,11 +333,8 @@ int inet6_hash(struct sock *sk)
 {
 	int err = 0;
 
-	if (sk->sk_state != TCP_CLOSE) {
-		local_bh_disable();
+	if (sk->sk_state != TCP_CLOSE)
 		err = __inet_hash(sk, NULL);
-		local_bh_enable();
-	}
 
 	return err;
 }