Message ID | 20240621013929.1386815-1-luoxuanqiang@kylinos.cn |
---|---|
State | Accepted |
Commit | ff46e3b4421923937b7f6e44ffcd3549a074f321 |
Delegated to | Netdev Maintainers |
Series | [net,v4] Fix race for duplicate reqsk on identical SYN |
From: luoxuanqiang <luoxuanqiang@kylinos.cn>
Date: Fri, 21 Jun 2024 09:39:29 +0800

> When bonding is configured in BOND_MODE_BROADCAST mode, if two identical
> SYN packets are received at the same time and processed on different CPUs,
> it can potentially create the same sk (sock) but two different reqsk
> (request_sock) in tcp_conn_request().
>
> These two different reqsk will respond with two SYNACK packets, and since
> the generation of the seq (ISN) incorporates a timestamp, the final two
> SYNACK packets will have different seq values.
>
> The consequence is that when the Client receives and replies with an ACK
> to the earlier SYNACK packet, we will reset(RST) it.
>
> ========================================================================
>
> This behavior is consistently reproducible in my local setup,
> which comprises:
>
>                   | NETA1 ------ NETB1 |
> PC_A --- bond --- |                    | --- bond --- PC_B
>                   | NETA2 ------ NETB2 |
>
> - PC_A is the Server and has two network cards, NETA1 and NETA2. I have
>   bonded these two cards using BOND_MODE_BROADCAST mode and configured
>   them to be handled by different CPUs.
>
> - PC_B is the Client, also equipped with two network cards, NETB1 and
>   NETB2, which are also bonded and configured in BOND_MODE_BROADCAST mode.
>
> If the client attempts a TCP connection to the server, it might encounter
> a failure. Capturing packets from the server side reveals:
>
> 10.10.10.10.45182 > localhost: Flags [S], seq 320236027,
> 10.10.10.10.45182 > localhost: Flags [S], seq 320236027,
> localhost > 10.10.10.10.45182: Flags [S.], seq 2967855116,
> localhost > 10.10.10.10.45182: Flags [S.], seq 2967855123, <==
> 10.10.10.10.45182 > localhost: Flags [.], ack 4294967290,
> 10.10.10.10.45182 > localhost: Flags [.], ack 4294967290,
> localhost > 10.10.10.10.45182: Flags [R], seq 2967855117, <==
> localhost > 10.10.10.10.45182: Flags [R], seq 2967855117,
>
> Two SYNACKs with different seq numbers are sent by localhost,
> resulting in an anomaly.
>
> ========================================================================
>
> The attempted solution is as follows:
> Add a return value to inet_csk_reqsk_queue_hash_add() to confirm whether
> the ehash insertion is successful (so far, the only reason for an
> unsuccessful insertion is that a reqsk for the same connection has
> already been inserted). If the insertion fails, release the reqsk.
>
> Due to the refcnt, Kuniyuki suggests also adding a return value check
> for the DCCP module; if the ehash insertion fails, indicating that a
> reqsk for the same connection has already been inserted, simply release
> the reqsk as well.
>
> Additionally, in reqsk_queue_hash_req(), the start of req->rsk_timer is
> moved to after a successful insertion.
>
> Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
> Signed-off-by: luoxuanqiang <luoxuanqiang@kylinos.cn>

Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>

Thanks!
On Fri, Jun 21, 2024 at 3:39 AM luoxuanqiang <luoxuanqiang@kylinos.cn> wrote:
>
> When bonding is configured in BOND_MODE_BROADCAST mode, if two identical
> SYN packets are received at the same time and processed on different CPUs,
> it can potentially create the same sk (sock) but two different reqsk
> (request_sock) in tcp_conn_request().
>
> These two different reqsk will respond with two SYNACK packets, and since
> the generation of the seq (ISN) incorporates a timestamp, the final two
> SYNACK packets will have different seq values.
>
> The consequence is that when the Client receives and replies with an ACK
> to the earlier SYNACK packet, we will reset(RST) it.
>
> ========================================================================
>

Reviewed-by: Eric Dumazet <edumazet@google.com>
On Fri, 2024-06-21 at 09:39 +0800, luoxuanqiang wrote:
> When bonding is configured in BOND_MODE_BROADCAST mode, if two identical
> SYN packets are received at the same time and processed on different CPUs,
> it can potentially create the same sk (sock) but two different reqsk
> (request_sock) in tcp_conn_request().
>
> These two different reqsk will respond with two SYNACK packets, and since
> the generation of the seq (ISN) incorporates a timestamp, the final two
> SYNACK packets will have different seq values.
>
> The consequence is that when the Client receives and replies with an ACK
> to the earlier SYNACK packet, we will reset(RST) it.
>
> ========================================================================
>
> This behavior is consistently reproducible in my local setup,
> which comprises:
>
>                   | NETA1 ------ NETB1 |
> PC_A --- bond --- |                    | --- bond --- PC_B
>                   | NETA2 ------ NETB2 |
>
> - PC_A is the Server and has two network cards, NETA1 and NETA2. I have
>   bonded these two cards using BOND_MODE_BROADCAST mode and configured
>   them to be handled by different CPUs.
>
> - PC_B is the Client, also equipped with two network cards, NETB1 and
>   NETB2, which are also bonded and configured in BOND_MODE_BROADCAST mode.
>
> If the client attempts a TCP connection to the server, it might encounter
> a failure. Capturing packets from the server side reveals:
>
> 10.10.10.10.45182 > localhost: Flags [S], seq 320236027,
> 10.10.10.10.45182 > localhost: Flags [S], seq 320236027,
> localhost > 10.10.10.10.45182: Flags [S.], seq 2967855116,
> localhost > 10.10.10.10.45182: Flags [S.], seq 2967855123, <==
> 10.10.10.10.45182 > localhost: Flags [.], ack 4294967290,
> 10.10.10.10.45182 > localhost: Flags [.], ack 4294967290,
> localhost > 10.10.10.10.45182: Flags [R], seq 2967855117, <==
> localhost > 10.10.10.10.45182: Flags [R], seq 2967855117,
>
> Two SYNACKs with different seq numbers are sent by localhost,
> resulting in an anomaly.
>
> ========================================================================
>
> The attempted solution is as follows:
> Add a return value to inet_csk_reqsk_queue_hash_add() to confirm whether
> the ehash insertion is successful (so far, the only reason for an
> unsuccessful insertion is that a reqsk for the same connection has
> already been inserted). If the insertion fails, release the reqsk.
>
> Due to the refcnt, Kuniyuki suggests also adding a return value check
> for the DCCP module; if the ehash insertion fails, indicating that a
> reqsk for the same connection has already been inserted, simply release
> the reqsk as well.
>
> Additionally, in reqsk_queue_hash_req(), the start of req->rsk_timer is
> moved to after a successful insertion.
>
> Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")

Just after applying the patch I wondered if the issue addressed here
should be observable only after commit e994b2f0fb92 ("tcp: do not lock
listener to process SYN packets")?

In practice it should not matter, as the latter commit is older than the
currently oldest LTS, but I'm wondering if I read things correctly?

Thanks!

Paolo
Hello:

This patch was applied to netdev/net.git (main)
by Paolo Abeni <pabeni@redhat.com>:

On Fri, 21 Jun 2024 09:39:29 +0800 you wrote:
> When bonding is configured in BOND_MODE_BROADCAST mode, if two identical
> SYN packets are received at the same time and processed on different CPUs,
> it can potentially create the same sk (sock) but two different reqsk
> (request_sock) in tcp_conn_request().
>
> These two different reqsk will respond with two SYNACK packets, and since
> the generation of the seq (ISN) incorporates a timestamp, the final two
> SYNACK packets will have different seq values.
>
> [...]

Here is the summary with links:
  - [net,v4] Fix race for duplicate reqsk on identical SYN
    https://git.kernel.org/netdev/net/c/ff46e3b44219

You are awesome, thank you!
On 2024/6/25 17:49, Paolo Abeni wrote:
> On Fri, 2024-06-21 at 09:39 +0800, luoxuanqiang wrote:
>> When bonding is configured in BOND_MODE_BROADCAST mode, if two identical
>> SYN packets are received at the same time and processed on different CPUs,
>> it can potentially create the same sk (sock) but two different reqsk
>> (request_sock) in tcp_conn_request().
>>
>> These two different reqsk will respond with two SYNACK packets, and since
>> the generation of the seq (ISN) incorporates a timestamp, the final two
>> SYNACK packets will have different seq values.
>>
>> The consequence is that when the Client receives and replies with an ACK
>> to the earlier SYNACK packet, we will reset(RST) it.
>>
>> ========================================================================
>>
>> This behavior is consistently reproducible in my local setup,
>> which comprises:
>>
>>                   | NETA1 ------ NETB1 |
>> PC_A --- bond --- |                    | --- bond --- PC_B
>>                   | NETA2 ------ NETB2 |
>>
>> - PC_A is the Server and has two network cards, NETA1 and NETA2. I have
>>   bonded these two cards using BOND_MODE_BROADCAST mode and configured
>>   them to be handled by different CPUs.
>>
>> - PC_B is the Client, also equipped with two network cards, NETB1 and
>>   NETB2, which are also bonded and configured in BOND_MODE_BROADCAST mode.
>>
>> If the client attempts a TCP connection to the server, it might encounter
>> a failure. Capturing packets from the server side reveals:
>>
>> 10.10.10.10.45182 > localhost: Flags [S], seq 320236027,
>> 10.10.10.10.45182 > localhost: Flags [S], seq 320236027,
>> localhost > 10.10.10.10.45182: Flags [S.], seq 2967855116,
>> localhost > 10.10.10.10.45182: Flags [S.], seq 2967855123, <==
>> 10.10.10.10.45182 > localhost: Flags [.], ack 4294967290,
>> 10.10.10.10.45182 > localhost: Flags [.], ack 4294967290,
>> localhost > 10.10.10.10.45182: Flags [R], seq 2967855117, <==
>> localhost > 10.10.10.10.45182: Flags [R], seq 2967855117,
>>
>> Two SYNACKs with different seq numbers are sent by localhost,
>> resulting in an anomaly.
>>
>> ========================================================================
>>
>> The attempted solution is as follows:
>> Add a return value to inet_csk_reqsk_queue_hash_add() to confirm whether
>> the ehash insertion is successful (so far, the only reason for an
>> unsuccessful insertion is that a reqsk for the same connection has
>> already been inserted). If the insertion fails, release the reqsk.
>>
>> Due to the refcnt, Kuniyuki suggests also adding a return value check
>> for the DCCP module; if the ehash insertion fails, indicating that a
>> reqsk for the same connection has already been inserted, simply release
>> the reqsk as well.
>>
>> Additionally, in reqsk_queue_hash_req(), the start of req->rsk_timer is
>> moved to after a successful insertion.
>>
>> Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
> Just after applying the patch I wondered if the issue addressed here
> should be observable only after commit e994b2f0fb92 ("tcp: do not lock
> listener to process SYN packets")?
>
> In practice it should not matter, as the latter commit is older than the
> currently oldest LTS, but I'm wondering if I read things correctly?
>
> Thanks!
>
> Paolo
>
Hi Paolo,

I conducted some experiments on your concern by reverting e994b2f0fb92 on
version 4.19 to observe how TCP handles this race condition.
Here are the observations (where SYN-A is processed on CPUA and SYN-B is
processed on CPUB):

CPUA & CPUB:
  In tcp_v4_rcv(), both SYN-A and SYN-B obtained the same sk from
  __inet_lookup_listener(), with the sk state being TCP_LISTEN.

CPUA:
  SYN-A acquired sk_lock and was processed in tcp_v4_do_rcv(), where it
  created reqsk-A while in TCP_LISTEN state and sent a SYNACK packet.

CPUB:
  After SYN-A was processed and sk_lock was released, SYN-B was processed.
  Since it was the same sk, still in TCP_LISTEN state, it created reqsk-B
  and sent a SYNACK packet with a different seq number.

The issue remains reproducible.

BRs!
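[Editorial note] To make the race easier to reason about outside the kernel, here is a minimal user-space model of it and of the fix's insert-time duplicate check. It is only a sketch: the names (fake_reqsk, ehash_slot, fake_reqsk_queue_hash_add) are invented for this illustration, the one-slot "ehash" stands in for the kernel's ehash table, and the two threads stand in for the two CPUs handling the same SYN; none of this is the kernel's real API.

/*
 * User-space model of the duplicate-SYN race (hypothetical names, not
 * kernel code).  Two threads each "handle" the same SYN; insertion into
 * the shared one-slot "ehash" reports whether this request won, and the
 * loser frees its request instead of answering with a second SYNACK.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_reqsk {
	unsigned int isn;	/* stands in for the timestamp-based ISN */
};

static pthread_mutex_t ehash_lock = PTHREAD_MUTEX_INITIALIZER;
static struct fake_reqsk *ehash_slot;	/* one-slot "ehash": one 4-tuple only */

/* Mirrors the fixed behaviour: the insert itself reports success/failure. */
static bool fake_reqsk_queue_hash_add(struct fake_reqsk *req)
{
	bool inserted = false;

	pthread_mutex_lock(&ehash_lock);
	if (!ehash_slot) {		/* no reqsk for this connection yet */
		ehash_slot = req;
		inserted = true;
	}
	pthread_mutex_unlock(&ehash_lock);
	return inserted;
}

static void *handle_syn(void *arg)
{
	unsigned int cpu = (unsigned int)(unsigned long)arg;
	struct fake_reqsk *req = calloc(1, sizeof(*req));

	if (!req)
		return NULL;
	req->isn = 2967855116u + cpu * 7;	/* two different "ISNs" */

	if (!fake_reqsk_queue_hash_add(req)) {
		/* Lost the race: free the request, send no second SYNACK. */
		printf("cpu%u: duplicate reqsk, freeing (isn=%u)\n", cpu, req->isn);
		free(req);
		return NULL;
	}
	printf("cpu%u: reqsk inserted, sending SYNACK (isn=%u)\n", cpu, req->isn);
	return NULL;
}

int main(void)
{
	pthread_t t[2];
	unsigned long i;

	for (i = 0; i < 2; i++)
		pthread_create(&t[i], NULL, handle_syn, (void *)i);
	for (i = 0; i < 2; i++)
		pthread_join(t[i], NULL);
	free(ehash_slot);
	return 0;
}

Built with cc -pthread, exactly one thread reports sending a SYNACK regardless of scheduling order, which is the behaviour the patch restores for the two racing CPUs.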
diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
index 7d6b1254c92d..c0deaafebfdc 100644
--- a/include/net/inet_connection_sock.h
+++ b/include/net/inet_connection_sock.h
@@ -263,7 +263,7 @@ struct dst_entry *inet_csk_route_child_sock(const struct sock *sk,
 struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
 				      struct request_sock *req,
 				      struct sock *child);
-void inet_csk_reqsk_queue_hash_add(struct sock *sk, struct request_sock *req,
+bool inet_csk_reqsk_queue_hash_add(struct sock *sk, struct request_sock *req,
 				   unsigned long timeout);
 struct sock *inet_csk_complete_hashdance(struct sock *sk, struct sock *child,
 					 struct request_sock *req,
diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
index ff41bd6f99c3..5926159a6f20 100644
--- a/net/dccp/ipv4.c
+++ b/net/dccp/ipv4.c
@@ -657,8 +657,11 @@ int dccp_v4_conn_request(struct sock *sk, struct sk_buff *skb)
 	if (dccp_v4_send_response(sk, req))
 		goto drop_and_free;
 
-	inet_csk_reqsk_queue_hash_add(sk, req, DCCP_TIMEOUT_INIT);
-	reqsk_put(req);
+	if (unlikely(!inet_csk_reqsk_queue_hash_add(sk, req, DCCP_TIMEOUT_INIT)))
+		reqsk_free(req);
+	else
+		reqsk_put(req);
+
 	return 0;
 
 drop_and_free:
diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
index 85f4b8fdbe5e..da5dba120bc9 100644
--- a/net/dccp/ipv6.c
+++ b/net/dccp/ipv6.c
@@ -400,8 +400,11 @@ static int dccp_v6_conn_request(struct sock *sk, struct sk_buff *skb)
 	if (dccp_v6_send_response(sk, req))
 		goto drop_and_free;
 
-	inet_csk_reqsk_queue_hash_add(sk, req, DCCP_TIMEOUT_INIT);
-	reqsk_put(req);
+	if (unlikely(!inet_csk_reqsk_queue_hash_add(sk, req, DCCP_TIMEOUT_INIT)))
+		reqsk_free(req);
+	else
+		reqsk_put(req);
+
 	return 0;
 
 drop_and_free:
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index d81f74ce0f02..d4f0eff8b20f 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -1122,25 +1122,34 @@ static void reqsk_timer_handler(struct timer_list *t)
 	inet_csk_reqsk_queue_drop_and_put(oreq->rsk_listener, oreq);
 }
 
-static void reqsk_queue_hash_req(struct request_sock *req,
+static bool reqsk_queue_hash_req(struct request_sock *req,
 				 unsigned long timeout)
 {
+	bool found_dup_sk = false;
+
+	if (!inet_ehash_insert(req_to_sk(req), NULL, &found_dup_sk))
+		return false;
+
+	/* The timer needs to be setup after a successful insertion. */
 	timer_setup(&req->rsk_timer, reqsk_timer_handler, TIMER_PINNED);
 	mod_timer(&req->rsk_timer, jiffies + timeout);
 
-	inet_ehash_insert(req_to_sk(req), NULL, NULL);
 	/* before letting lookups find us, make sure all req fields
 	 * are committed to memory and refcnt initialized.
 	 */
 	smp_wmb();
 	refcount_set(&req->rsk_refcnt, 2 + 1);
+	return true;
 }
 
-void inet_csk_reqsk_queue_hash_add(struct sock *sk, struct request_sock *req,
+bool inet_csk_reqsk_queue_hash_add(struct sock *sk, struct request_sock *req,
 				   unsigned long timeout)
 {
-	reqsk_queue_hash_req(req, timeout);
+	if (!reqsk_queue_hash_req(req, timeout))
+		return false;
+
 	inet_csk_reqsk_queue_added(sk);
+	return true;
 }
 EXPORT_SYMBOL_GPL(inet_csk_reqsk_queue_hash_add);
 
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 9c04a9c8be9d..c6b08a43ce00 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -7256,7 +7256,12 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
 		tcp_rsk(req)->tfo_listener = false;
 		if (!want_cookie) {
 			req->timeout = tcp_timeout_init((struct sock *)req);
-			inet_csk_reqsk_queue_hash_add(sk, req, req->timeout);
+			if (unlikely(!inet_csk_reqsk_queue_hash_add(sk, req,
+								    req->timeout))) {
+				reqsk_free(req);
+				return 0;
+			}
+
 		}
 		af_ops->send_synack(sk, dst, &fl, req, &foc,
 				    !want_cookie ? TCP_SYNACK_NORMAL :
When bonding is configured in BOND_MODE_BROADCAST mode, if two identical
SYN packets are received at the same time and processed on different CPUs,
it can potentially create the same sk (sock) but two different reqsk
(request_sock) in tcp_conn_request().

These two different reqsk will respond with two SYNACK packets, and since
the generation of the seq (ISN) incorporates a timestamp, the final two
SYNACK packets will have different seq values.

The consequence is that when the Client receives and replies with an ACK
to the earlier SYNACK packet, we will reset(RST) it.

========================================================================

This behavior is consistently reproducible in my local setup,
which comprises:

                  | NETA1 ------ NETB1 |
PC_A --- bond --- |                    | --- bond --- PC_B
                  | NETA2 ------ NETB2 |

- PC_A is the Server and has two network cards, NETA1 and NETA2. I have
  bonded these two cards using BOND_MODE_BROADCAST mode and configured
  them to be handled by different CPUs.

- PC_B is the Client, also equipped with two network cards, NETB1 and
  NETB2, which are also bonded and configured in BOND_MODE_BROADCAST mode.

If the client attempts a TCP connection to the server, it might encounter
a failure. Capturing packets from the server side reveals:

10.10.10.10.45182 > localhost: Flags [S], seq 320236027,
10.10.10.10.45182 > localhost: Flags [S], seq 320236027,
localhost > 10.10.10.10.45182: Flags [S.], seq 2967855116,
localhost > 10.10.10.10.45182: Flags [S.], seq 2967855123, <==
10.10.10.10.45182 > localhost: Flags [.], ack 4294967290,
10.10.10.10.45182 > localhost: Flags [.], ack 4294967290,
localhost > 10.10.10.10.45182: Flags [R], seq 2967855117, <==
localhost > 10.10.10.10.45182: Flags [R], seq 2967855117,

Two SYNACKs with different seq numbers are sent by localhost,
resulting in an anomaly.

========================================================================

The attempted solution is as follows:
Add a return value to inet_csk_reqsk_queue_hash_add() to confirm whether
the ehash insertion is successful (so far, the only reason for an
unsuccessful insertion is that a reqsk for the same connection has already
been inserted). If the insertion fails, release the reqsk.

Due to the refcnt, Kuniyuki suggests also adding a return value check
for the DCCP module; if the ehash insertion fails, indicating that a
reqsk for the same connection has already been inserted, simply release
the reqsk as well.

Additionally, in reqsk_queue_hash_req(), the start of req->rsk_timer is
moved to after a successful insertion.

Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: luoxuanqiang <luoxuanqiang@kylinos.cn>
---
 include/net/inet_connection_sock.h |  2 +-
 net/dccp/ipv4.c                    |  7 +++++--
 net/dccp/ipv6.c                    |  7 +++++--
 net/ipv4/inet_connection_sock.c    | 17 +++++++++++++----
 net/ipv4/tcp_input.c               |  7 ++++++-
 5 files changed, 30 insertions(+), 10 deletions(-)
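[Editorial note] The refcount reasoning behind "Due to the refcnt" above can be made explicit by annotating the DCCP caller pattern from the diff (the TCP hunk has the same shape). The calls below are taken verbatim from the patch; only the comment is editorial, and it summarizes what the patched reqsk_queue_hash_req() does on each path.

	if (dccp_v4_send_response(sk, req))
		goto drop_and_free;

	/*
	 * On success, reqsk_queue_hash_req() has already published the reqsk
	 * in the ehash and set rsk_refcnt to 2 + 1 (hash table + timer + this
	 * caller), so the caller drops only its own reference with reqsk_put().
	 * On failure, the reqsk was never inserted, no timer was armed, and
	 * the 2 + 1 refcount was never taken, so the object is released
	 * directly with reqsk_free() instead.
	 */
	if (unlikely(!inet_csk_reqsk_queue_hash_add(sk, req, DCCP_TIMEOUT_INIT)))
		reqsk_free(req);
	else
		reqsk_put(req);

	return 0;

The same split explains the TCP side: tcp_conn_request() frees the losing reqsk and returns 0 without sending a second SYNACK, which is exactly the behaviour missing in the packet capture above.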