| Message ID | 20221023023044.149357-1-xiyou.wangcong@gmail.com (mailing list archive) |
|---|---|
| State | Changes Requested |
| Delegated to | Netdev Maintainers |
| Series | [net] kcm: fix a race condition in kcm_recvmsg() |
On Sat, 22 Oct 2022 19:30:44 -0700 Cong Wang wrote:
> +			spin_lock_bh(&mux->rx_lock);
>  			KCM_STATS_INCR(kcm->stats.rx_msgs);
>  			skb_unlink(skb, &sk->sk_receive_queue);
> +			spin_unlock_bh(&mux->rx_lock);

Why not switch to __skb_unlink() at the same time?
Abundance of caution?

Adding Eric who was fixing KCM bugs recently.
On Tue, Oct 25, 2022 at 4:02 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Sat, 22 Oct 2022 19:30:44 -0700 Cong Wang wrote:
> > +			spin_lock_bh(&mux->rx_lock);
> >  			KCM_STATS_INCR(kcm->stats.rx_msgs);
> >  			skb_unlink(skb, &sk->sk_receive_queue);
> > +			spin_unlock_bh(&mux->rx_lock);
>
> Why not switch to __skb_unlink() at the same time?
> Abundance of caution?
>
> Adding Eric who was fixing KCM bugs recently.

I think kcm_queue_rcv_skb() might have a similar problem if/when
called from requeue_rx_msgs()

(The mux->rx_lock spinlock is not acquired, and skb_queue_tail() is used)

I agree we should stick to one lock, and if this is not the standard
skb head lock, we should not use it at all (ie use __skb_queue_tail()
and friends)
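For reference, the locked skb queue helpers take the queue's own spinlock internally, while the double-underscore variants leave all serialization to the caller. A minimal sketch of the two disciplines (illustrative only; the function names are invented and this is not the KCM code):

#include <linux/skbuff.h>
#include <linux/spinlock.h>

/* Discipline 1: the queue's own lock is the protection.
 * skb_queue_tail() acquires q->lock internally, so no external lock
 * is needed for the list manipulation itself.
 */
static void enqueue_with_queue_lock(struct sk_buff_head *q,
				    struct sk_buff *skb)
{
	skb_queue_tail(q, skb);
}

/* Discipline 2: an external lock (e.g. a mux-wide rx_lock) is the
 * protection, so the lockless helper is used and q->lock is never
 * taken. Mixing the two disciplines on the same queue is what can
 * open the kind of race discussed in this thread.
 */
static void enqueue_with_external_lock(spinlock_t *ext_lock,
				       struct sk_buff_head *q,
				       struct sk_buff *skb)
{
	spin_lock_bh(ext_lock);
	__skb_queue_tail(q, skb);
	spin_unlock_bh(ext_lock);
}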
On Tue, Oct 25, 2022 at 04:02:22PM -0700, Jakub Kicinski wrote:
> On Sat, 22 Oct 2022 19:30:44 -0700 Cong Wang wrote:
> > +			spin_lock_bh(&mux->rx_lock);
> >  			KCM_STATS_INCR(kcm->stats.rx_msgs);
> >  			skb_unlink(skb, &sk->sk_receive_queue);
> > +			spin_unlock_bh(&mux->rx_lock);
>
> Why not switch to __skb_unlink() at the same time?
> Abundance of caution?

What gain do we have? Since we have rx_lock, skb queue lock should never
be contended?

Thanks.
On Tue, Oct 25, 2022 at 04:49:48PM -0700, Eric Dumazet wrote:
> On Tue, Oct 25, 2022 at 4:02 PM Jakub Kicinski <kuba@kernel.org> wrote:
> >
> > On Sat, 22 Oct 2022 19:30:44 -0700 Cong Wang wrote:
> > > +			spin_lock_bh(&mux->rx_lock);
> > >  			KCM_STATS_INCR(kcm->stats.rx_msgs);
> > >  			skb_unlink(skb, &sk->sk_receive_queue);
> > > +			spin_unlock_bh(&mux->rx_lock);
> >
> > Why not switch to __skb_unlink() at the same time?
> > Abundance of caution?
> >
> > Adding Eric who was fixing KCM bugs recently.
>
> I think kcm_queue_rcv_skb() might have a similar problem if/when
> called from requeue_rx_msgs()
>
> (The mux->rx_lock spinlock is not acquired, and skb_queue_tail() is used)

rx_lock is acquired by at least 2 callers of it, requeue_rx_msgs() and
kcm_rcv_ready(). kcm_rcv_strparser() seems to be missing it; I can fix
this in a separate patch as no one has actually reported a bug.

Thanks.
On Fri, 28 Oct 2022 12:21:11 -0700 Cong Wang wrote:
> On Tue, Oct 25, 2022 at 04:02:22PM -0700, Jakub Kicinski wrote:
> > On Sat, 22 Oct 2022 19:30:44 -0700 Cong Wang wrote:
> > > +			spin_lock_bh(&mux->rx_lock);
> > >  			KCM_STATS_INCR(kcm->stats.rx_msgs);
> > >  			skb_unlink(skb, &sk->sk_receive_queue);
> > > +			spin_unlock_bh(&mux->rx_lock);
> >
> > Why not switch to __skb_unlink() at the same time?
> > Abundance of caution?
>
> What gain do we have? Since we have rx_lock, skb queue lock should never
> be contended?

I was thinking mostly about readability; the performance is secondary.
Other parts of the code use unlocked skb queue helpers, so it may be
confusing to a reader why this one isn't, and therefore what lock
protects the queue. But no strong feelings.
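For illustration, Jakub's suggestion applied to the quoted hunk might look roughly like the helper below, with mux->rx_lock as the sole protection and the lockless unlink making that explicit (a hypothetical sketch with an invented function name, not a submitted patch):

#include <linux/skbuff.h>
#include <linux/spinlock.h>
#include <net/sock.h>

/* Hypothetical shape of the msg_finished path using __skb_unlink():
 * the skb queue lock is never taken, so a reader can immediately see
 * that the rx_lock passed in is what protects sk_receive_queue here.
 * Stats update omitted for brevity.
 */
static void msg_finished_unlink(spinlock_t *rx_lock, struct sock *sk,
				struct sk_buff *skb)
{
	spin_lock_bh(rx_lock);
	__skb_unlink(skb, &sk->sk_receive_queue);
	spin_unlock_bh(rx_lock);
	kfree_skb(skb);
}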
On Sat, Oct 22, 2022 at 07:30:44PM -0700, Cong Wang wrote:
> From: Cong Wang <cong.wang@bytedance.com>
>
> sk->sk_receive_queue is protected by skb queue lock, but for KCM
> sockets its RX path takes mux->rx_lock to protect more than just
> skb queue, so grabbing skb queue lock is not necessary when
> mux->rx_lock is already held. But kcm_recvmsg() still only grabs
> the skb queue lock, so race conditions still exist.
>
> Close this race condition by taking mux->rx_lock in kcm_recvmsg()
> too. This way is much simpler than enforcing skb queue lock
> everywhere.
>

After a second thought, this actually could introduce a performance
regression, as a struct kcm_mux can be shared by multiple KCM sockets.
So I am afraid we have to use the skb queue lock; fortunately, I found
an easier way (compared to Paolo's) to solve the skb peek race.

Zhengchao, could you please test the following patch? Thanks!

---------------->

diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
index a5004228111d..890a2423f559 100644
--- a/net/kcm/kcmsock.c
+++ b/net/kcm/kcmsock.c
@@ -222,7 +222,7 @@ static void requeue_rx_msgs(struct kcm_mux *mux, struct sk_buff_head *head)
 	struct sk_buff *skb;
 	struct kcm_sock *kcm;
 
-	while ((skb = __skb_dequeue(head))) {
+	while ((skb = skb_dequeue(head))) {
 		/* Reset destructor to avoid calling kcm_rcv_ready */
 		skb->destructor = sock_rfree;
 		skb_orphan(skb);
@@ -1085,53 +1085,17 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
 	return err;
 }
 
-static struct sk_buff *kcm_wait_data(struct sock *sk, int flags,
-				     long timeo, int *err)
-{
-	struct sk_buff *skb;
-
-	while (!(skb = skb_peek(&sk->sk_receive_queue))) {
-		if (sk->sk_err) {
-			*err = sock_error(sk);
-			return NULL;
-		}
-
-		if (sock_flag(sk, SOCK_DONE))
-			return NULL;
-
-		if ((flags & MSG_DONTWAIT) || !timeo) {
-			*err = -EAGAIN;
-			return NULL;
-		}
-
-		sk_wait_data(sk, &timeo, NULL);
-
-		/* Handle signals */
-		if (signal_pending(current)) {
-			*err = sock_intr_errno(timeo);
-			return NULL;
-		}
-	}
-
-	return skb;
-}
-
 static int kcm_recvmsg(struct socket *sock, struct msghdr *msg,
 		       size_t len, int flags)
 {
 	struct sock *sk = sock->sk;
 	struct kcm_sock *kcm = kcm_sk(sk);
 	int err = 0;
-	long timeo;
 	struct strp_msg *stm;
 	int copied = 0;
 	struct sk_buff *skb;
 
-	timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
-
-	lock_sock(sk);
-
-	skb = kcm_wait_data(sk, flags, timeo, &err);
+	skb = skb_recv_datagram(sk, flags, &err);
 	if (!skb)
 		goto out;
 
@@ -1162,14 +1126,11 @@ static int kcm_recvmsg(struct socket *sock, struct msghdr *msg,
 			/* Finished with message */
 			msg->msg_flags |= MSG_EOR;
 			KCM_STATS_INCR(kcm->stats.rx_msgs);
-			skb_unlink(skb, &sk->sk_receive_queue);
-			kfree_skb(skb);
 		}
 	}
 
 out:
-	release_sock(sk);
-
+	skb_free_datagram(sk, skb);
 	return copied ? : err;
 }
 
@@ -1179,7 +1140,6 @@ static ssize_t kcm_splice_read(struct socket *sock, loff_t *ppos,
 {
 	struct sock *sk = sock->sk;
 	struct kcm_sock *kcm = kcm_sk(sk);
-	long timeo;
 	struct strp_msg *stm;
 	int err = 0;
 	ssize_t copied;
@@ -1187,11 +1147,7 @@ static ssize_t kcm_splice_read(struct socket *sock, loff_t *ppos,
 
 	/* Only support splice for SOCKSEQPACKET */
 
-	timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
-
-	lock_sock(sk);
-
-	skb = kcm_wait_data(sk, flags, timeo, &err);
+	skb = skb_recv_datagram(sk, flags, &err);
 	if (!skb)
 		goto err_out;
 
@@ -1219,13 +1175,11 @@ static ssize_t kcm_splice_read(struct socket *sock, loff_t *ppos,
 	 * finish reading the message.
 	 */
-	release_sock(sk);
-
+	skb_free_datagram(sk, skb);
 	return copied;
 
 err_out:
-	release_sock(sk);
-
+	skb_free_datagram(sk, skb);
 	return err;
 }
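For context, skb_recv_datagram() is the generic datagram receive helper: it peeks, waits for data and dequeues the skb under sk_receive_queue's own lock, which is why switching to it removes the open-coded wait loop along with its peek race. A rough sketch of the usual calling pattern, with an invented function name and simplified error handling (not the actual KCM code above):

#include <linux/minmax.h>
#include <linux/skbuff.h>
#include <linux/socket.h>
#include <net/sock.h>

/* Generic shape of a recvmsg path built on skb_recv_datagram(). */
static int sketch_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
			  int flags)
{
	struct sk_buff *skb;
	int copied;
	int err = 0;

	/* Blocks according to sk->sk_rcvtimeo unless MSG_DONTWAIT is set;
	 * returns NULL with the reason in err on failure.
	 */
	skb = skb_recv_datagram(sk, flags, &err);
	if (!skb)
		return err;

	copied = min_t(int, len, skb->len);
	err = skb_copy_datagram_msg(skb, 0, msg, copied);

	/* Drop the reference handed out by skb_recv_datagram(). */
	skb_free_datagram(sk, skb);

	return err ? err : copied;
}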
diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
index 27725464ec08..8b4e5d0ab2b6 100644
--- a/net/kcm/kcmsock.c
+++ b/net/kcm/kcmsock.c
@@ -1116,6 +1116,7 @@ static int kcm_recvmsg(struct socket *sock, struct msghdr *msg,
 {
 	struct sock *sk = sock->sk;
 	struct kcm_sock *kcm = kcm_sk(sk);
+	struct kcm_mux *mux = kcm->mux;
 	int err = 0;
 	long timeo;
 	struct strp_msg *stm;
@@ -1156,8 +1157,10 @@ static int kcm_recvmsg(struct socket *sock, struct msghdr *msg,
 msg_finished:
 			/* Finished with message */
 			msg->msg_flags |= MSG_EOR;
+			spin_lock_bh(&mux->rx_lock);
 			KCM_STATS_INCR(kcm->stats.rx_msgs);
 			skb_unlink(skb, &sk->sk_receive_queue);
+			spin_unlock_bh(&mux->rx_lock);
 			kfree_skb(skb);
 		}
 	}
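Schematically, the race this patch targets comes from two paths protecting the same receive queue with different locks, so neither critical section excludes the other. A deliberately simplified illustration (invented function names, not the actual KCM functions):

#include <linux/skbuff.h>
#include <linux/spinlock.h>

/* Path A (the receive side before this patch): protected only by the
 * queue's own lock, taken inside skb_unlink().
 */
static void path_a_unlink(struct sk_buff_head *q, struct sk_buff *skb)
{
	skb_unlink(skb, q);
}

/* Path B (the mux RX side, as described in the commit message): runs
 * under an external rx_lock and uses a lockless helper, so it never
 * touches q->lock. Paths A and B can therefore modify the list
 * concurrently; the patch closes this by making the receive side take
 * rx_lock as well.
 */
static void path_b_dequeue(spinlock_t *rx_lock, struct sk_buff_head *q)
{
	struct sk_buff *skb;

	spin_lock_bh(rx_lock);
	skb = __skb_dequeue(q);
	spin_unlock_bh(rx_lock);

	kfree_skb(skb);	/* kfree_skb() tolerates NULL */
}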