Message ID | 20241002173042.917928-4-edumazet@google.com (mailing list archive)
---|---
State | Accepted
Commit | 81df4fa94ee8c0800ed42c47357435602ed105ad
Delegated to | Netdev Maintainers
Series | tcp: add fast path in timer handlers
Hello Eric,

On Thu, Oct 3, 2024 at 2:31 AM Eric Dumazet <edumazet@google.com> wrote:
>
> delack timer is not stopped from inet_csk_clear_xmit_timer()
> because we do not define INET_CSK_CLEAR_TIMERS.
>
> This is a conscious choice : inet_csk_clear_xmit_timer()
> is often called from another cpu. Calling del_timer()
> would cause false sharing and lock contention.
>
> This means that very often, tcp_delack_timer() is called
> at the timer expiration, while there is no ACK to transmit.
>
> This can be detected very early, avoiding the socket spinlock.
>
> Notes:
> - test about tp->compressed_ack is racy,
>   but in the unlikely case there is a race, the dedicated
>   compressed_ack_timer hrtimer would close it.
>
> - Even if the fast path is not taken, reading
>   icsk->icsk_ack.pending and tp->compressed_ack
>   before acquiring the socket spinlock reduces
>   acquisition time and chances of contention.
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> ---
>  include/net/inet_connection_sock.h | 5 +++--
>  net/ipv4/inet_connection_sock.c    | 4 ++--
>  net/ipv4/tcp_output.c              | 3 ++-
>  net/ipv4/tcp_timer.c               | 9 +++++++++
>  net/mptcp/protocol.c               | 3 ++-
>  5 files changed, 18 insertions(+), 6 deletions(-)
>
> diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
> index 914d1977270449241f6fc6da2055f3af02a75f99..3c82fad904d4c6c51069e2e703673d667bb36d06 100644
> --- a/include/net/inet_connection_sock.h
> +++ b/include/net/inet_connection_sock.h
> @@ -202,7 +202,7 @@ static inline void inet_csk_clear_xmit_timer(struct sock *sk, const int what)
>  		sk_stop_timer(sk, &icsk->icsk_retransmit_timer);
>  #endif
>  	} else if (what == ICSK_TIME_DACK) {
> -		icsk->icsk_ack.pending = 0;
> +		smp_store_release(&icsk->icsk_ack.pending, 0);
>  		icsk->icsk_ack.retry = 0;
>  #ifdef INET_CSK_CLEAR_TIMERS
>  		sk_stop_timer(sk, &icsk->icsk_delack_timer);
> @@ -233,7 +233,8 @@ static inline void inet_csk_reset_xmit_timer(struct sock *sk, const int what,
>  		icsk->icsk_timeout = jiffies + when;
>  		sk_reset_timer(sk, &icsk->icsk_retransmit_timer, icsk->icsk_timeout);
>  	} else if (what == ICSK_TIME_DACK) {
> -		icsk->icsk_ack.pending |= ICSK_ACK_TIMER;
> +		smp_store_release(&icsk->icsk_ack.pending,
> +				  icsk->icsk_ack.pending | ICSK_ACK_TIMER);
>  		icsk->icsk_ack.timeout = jiffies + when;
>  		sk_reset_timer(sk, &icsk->icsk_delack_timer, icsk->icsk_ack.timeout);
>  	} else {
> diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> index 8c53385cc808c61097898514fd91a322e3a08d31..12e975ed4910d8c7cca79b1812f365589a5d469a 100644
> --- a/net/ipv4/inet_connection_sock.c
> +++ b/net/ipv4/inet_connection_sock.c
> @@ -776,7 +776,7 @@ void inet_csk_clear_xmit_timers(struct sock *sk)
>  	struct inet_connection_sock *icsk = inet_csk(sk);
>
>  	smp_store_release(&icsk->icsk_pending, 0);
> -	icsk->icsk_ack.pending = 0;
> +	smp_store_release(&icsk->icsk_ack.pending, 0);
>
>  	sk_stop_timer(sk, &icsk->icsk_retransmit_timer);
>  	sk_stop_timer(sk, &icsk->icsk_delack_timer);
> @@ -792,7 +792,7 @@ void inet_csk_clear_xmit_timers_sync(struct sock *sk)
>  	sock_not_owned_by_me(sk);
>
>  	smp_store_release(&icsk->icsk_pending, 0);
> -	icsk->icsk_ack.pending = 0;
> +	smp_store_release(&icsk->icsk_ack.pending, 0);
>
>  	sk_stop_timer_sync(sk, &icsk->icsk_retransmit_timer);
>  	sk_stop_timer_sync(sk, &icsk->icsk_delack_timer);
> diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
> index 4d04073016035dcf62ba5e0ad23aac86e54e65c7..08772395690d13a0c3309a273543a51aa0dd3fdc 100644
> --- a/net/ipv4/tcp_output.c
> +++ b/net/ipv4/tcp_output.c
> @@ -4224,7 +4224,8 @@ void tcp_send_delayed_ack(struct sock *sk)
>  		if (!time_before(timeout, icsk->icsk_ack.timeout))
>  			timeout = icsk->icsk_ack.timeout;
>  	}
> -	icsk->icsk_ack.pending |= ICSK_ACK_SCHED | ICSK_ACK_TIMER;
> +	smp_store_release(&icsk->icsk_ack.pending,
> +			  icsk->icsk_ack.pending | ICSK_ACK_SCHED | ICSK_ACK_TIMER);
>  	icsk->icsk_ack.timeout = timeout;
>  	sk_reset_timer(sk, &icsk->icsk_delack_timer, timeout);
>  }
> diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
> index b7266b9101ce5933776bd38d086287667e3a7f18..c3a7442332d4926a6089812f789e89ee23081306 100644
> --- a/net/ipv4/tcp_timer.c
> +++ b/net/ipv4/tcp_timer.c
> @@ -361,6 +361,14 @@ static void tcp_delack_timer(struct timer_list *t)
>  			from_timer(icsk, t, icsk_delack_timer);
>  	struct sock *sk = &icsk->icsk_inet.sk;
>
> +	/* Avoid taking socket spinlock if there is no ACK to send.
> +	 * The compressed_ack check is racy, but a separate hrtimer
> +	 * will take care of it eventually.
> +	 */
> +	if (!(smp_load_acquire(&icsk->icsk_ack.pending) & ICSK_ACK_TIMER) &&
> +	    !READ_ONCE(tcp_sk(sk)->compressed_ack))

I wonder what the use of a single READ_ONCE() is here without a paired
WRITE_ONCE()? It cannot guarantee that the value read from
compressed_ack is accurate. What happens if we drop this READ_ONCE()
here?

Thanks,
Jason
On Thu, Oct 3, 2024 at 1:19 AM Jason Xing <kerneljasonxing@gmail.com> wrote:
>
> Hello Eric,
>
> > +	    !READ_ONCE(tcp_sk(sk)->compressed_ack))
>
> I wonder what the use of a single READ_ONCE() is here without a paired
> WRITE_ONCE()? It cannot guarantee that the value read from
> compressed_ack is accurate. What happens if we drop this READ_ONCE()
> here?

Have you read the changelog and comments about this 'accuracy' thing?

If you do not use the READ_ONCE() here, the only concern is that KCSAN
might trigger a splat.

A WRITE_ONCE() is not needed for a single byte; no tearing is possible.
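To make the pattern Eric describes concrete, here is a minimal, compilable
C11 analogue (the struct and function names are illustrative, not from the
kernel): writers update a byte-sized field while holding a lock, and a
lockless reader marks its access as an intentional race, much as READ_ONCE()
does for KCSAN in the kernel.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

struct demo_sock {
	pthread_mutex_t lock;
	_Atomic unsigned char compressed_ack;	/* written only under "lock" */
};

static void set_compressed_ack(struct demo_sock *sk)
{
	pthread_mutex_lock(&sk->lock);
	/* A one-byte store cannot tear, so no WRITE_ONCE() analogue is
	 * needed on the writer side.
	 */
	atomic_store_explicit(&sk->compressed_ack, 1, memory_order_relaxed);
	pthread_mutex_unlock(&sk->lock);
}

static bool peek_compressed_ack(struct demo_sock *sk)
{
	/* Racy by design: a stale value only delays the slow path,
	 * it never breaks correctness.
	 */
	return atomic_load_explicit(&sk->compressed_ack,
				    memory_order_relaxed) != 0;
}

In the kernel itself the reader simply uses READ_ONCE(); the relaxed atomic
load above is the closest portable equivalent.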
On Thu, Oct 3, 2024 at 6:11 PM Eric Dumazet <edumazet@google.com> wrote:
>
> On Thu, Oct 3, 2024 at 1:19 AM Jason Xing <kerneljasonxing@gmail.com> wrote:
> >
> > Hello Eric,
> >
> > > +	    !READ_ONCE(tcp_sk(sk)->compressed_ack))
> >
> > I wonder what the use of a single READ_ONCE() is here without a paired
> > WRITE_ONCE()? It cannot guarantee that the value read from
> > compressed_ack is accurate. What happens if we drop this READ_ONCE()
> > here?
>
> Have you read the changelog and comments about this 'accuracy' thing?

My initial question was about how a READ_ONCE() alone works without a
corresponding WRITE_ONCE() here. Sorry that I didn't totally understand it.

Sure, I did read them, but I failed to understand one line, "before
acquiring the socket spinlock reduces acquisition time and chances of
contention", which is perhaps the reason.

>
> If you do not use the READ_ONCE() here, the only concern is that KCSAN
> might trigger a splat.
>
> A WRITE_ONCE() is not needed for a single byte; no tearing is possible.

Yes, I got it. Before this patch, the writers of this field were all
protected by the socket lock. Thanks for your explanation.

Thanks,
Jason
diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
index 914d1977270449241f6fc6da2055f3af02a75f99..3c82fad904d4c6c51069e2e703673d667bb36d06 100644
--- a/include/net/inet_connection_sock.h
+++ b/include/net/inet_connection_sock.h
@@ -202,7 +202,7 @@ static inline void inet_csk_clear_xmit_timer(struct sock *sk, const int what)
 		sk_stop_timer(sk, &icsk->icsk_retransmit_timer);
 #endif
 	} else if (what == ICSK_TIME_DACK) {
-		icsk->icsk_ack.pending = 0;
+		smp_store_release(&icsk->icsk_ack.pending, 0);
 		icsk->icsk_ack.retry = 0;
 #ifdef INET_CSK_CLEAR_TIMERS
 		sk_stop_timer(sk, &icsk->icsk_delack_timer);
@@ -233,7 +233,8 @@ static inline void inet_csk_reset_xmit_timer(struct sock *sk, const int what,
 		icsk->icsk_timeout = jiffies + when;
 		sk_reset_timer(sk, &icsk->icsk_retransmit_timer, icsk->icsk_timeout);
 	} else if (what == ICSK_TIME_DACK) {
-		icsk->icsk_ack.pending |= ICSK_ACK_TIMER;
+		smp_store_release(&icsk->icsk_ack.pending,
+				  icsk->icsk_ack.pending | ICSK_ACK_TIMER);
 		icsk->icsk_ack.timeout = jiffies + when;
 		sk_reset_timer(sk, &icsk->icsk_delack_timer, icsk->icsk_ack.timeout);
 	} else {
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index 8c53385cc808c61097898514fd91a322e3a08d31..12e975ed4910d8c7cca79b1812f365589a5d469a 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -776,7 +776,7 @@ void inet_csk_clear_xmit_timers(struct sock *sk)
 	struct inet_connection_sock *icsk = inet_csk(sk);

 	smp_store_release(&icsk->icsk_pending, 0);
-	icsk->icsk_ack.pending = 0;
+	smp_store_release(&icsk->icsk_ack.pending, 0);

 	sk_stop_timer(sk, &icsk->icsk_retransmit_timer);
 	sk_stop_timer(sk, &icsk->icsk_delack_timer);
@@ -792,7 +792,7 @@ void inet_csk_clear_xmit_timers_sync(struct sock *sk)
 	sock_not_owned_by_me(sk);

 	smp_store_release(&icsk->icsk_pending, 0);
-	icsk->icsk_ack.pending = 0;
+	smp_store_release(&icsk->icsk_ack.pending, 0);

 	sk_stop_timer_sync(sk, &icsk->icsk_retransmit_timer);
 	sk_stop_timer_sync(sk, &icsk->icsk_delack_timer);
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 4d04073016035dcf62ba5e0ad23aac86e54e65c7..08772395690d13a0c3309a273543a51aa0dd3fdc 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -4224,7 +4224,8 @@ void tcp_send_delayed_ack(struct sock *sk)
 		if (!time_before(timeout, icsk->icsk_ack.timeout))
 			timeout = icsk->icsk_ack.timeout;
 	}
-	icsk->icsk_ack.pending |= ICSK_ACK_SCHED | ICSK_ACK_TIMER;
+	smp_store_release(&icsk->icsk_ack.pending,
+			  icsk->icsk_ack.pending | ICSK_ACK_SCHED | ICSK_ACK_TIMER);
 	icsk->icsk_ack.timeout = timeout;
 	sk_reset_timer(sk, &icsk->icsk_delack_timer, timeout);
 }
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
index b7266b9101ce5933776bd38d086287667e3a7f18..c3a7442332d4926a6089812f789e89ee23081306 100644
--- a/net/ipv4/tcp_timer.c
+++ b/net/ipv4/tcp_timer.c
@@ -361,6 +361,14 @@ static void tcp_delack_timer(struct timer_list *t)
 			from_timer(icsk, t, icsk_delack_timer);
 	struct sock *sk = &icsk->icsk_inet.sk;

+	/* Avoid taking socket spinlock if there is no ACK to send.
+	 * The compressed_ack check is racy, but a separate hrtimer
+	 * will take care of it eventually.
+	 */
+	if (!(smp_load_acquire(&icsk->icsk_ack.pending) & ICSK_ACK_TIMER) &&
+	    !READ_ONCE(tcp_sk(sk)->compressed_ack))
+		goto out;
+
 	bh_lock_sock(sk);
 	if (!sock_owned_by_user(sk)) {
 		tcp_delack_timer_handler(sk);
@@ -371,6 +379,7 @@ static void tcp_delack_timer(struct timer_list *t)
 		sock_hold(sk);
 	}
 	bh_unlock_sock(sk);
+out:
 	sock_put(sk);
 }

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index c2317919fc148a67a81ded795359bd613c9b0dff..e85862352084907582ec884dcb96832356419fa5 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -3504,7 +3504,8 @@ static void schedule_3rdack_retransmission(struct sock *ssk)
 		timeout += jiffies;

 	WARN_ON_ONCE(icsk->icsk_ack.pending & ICSK_ACK_TIMER);
-	icsk->icsk_ack.pending |= ICSK_ACK_SCHED | ICSK_ACK_TIMER;
+	smp_store_release(&icsk->icsk_ack.pending,
+			  icsk->icsk_ack.pending | ICSK_ACK_SCHED | ICSK_ACK_TIMER);
 	icsk->icsk_ack.timeout = timeout;
 	sk_reset_timer(ssk, &icsk->icsk_delack_timer, timeout);
 }
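The smp_store_release()/smp_load_acquire() pairing the diff introduces has a
simple shape. Below is a rough, self-contained C11 sketch of that pairing
(all names are made up for illustration; the real code lives in
inet_csk_reset_xmit_timer() and tcp_delack_timer()). The writer is assumed
to be serialized by an external lock, as the socket lock does in the kernel,
so a plain read-modify-write followed by a release store is safe.

#include <stdatomic.h>
#include <stdbool.h>

#define ACK_TIMER 0x04	/* stands in for ICSK_ACK_TIMER */

struct ack_demo {
	unsigned long timeout;		/* plain field, ordered by the release */
	_Atomic unsigned char pending;
};

/* Called with the external lock held, like inet_csk_reset_xmit_timer(). */
static void arm_timer(struct ack_demo *a, unsigned long deadline)
{
	unsigned char old = atomic_load_explicit(&a->pending,
						 memory_order_relaxed);

	a->timeout = deadline;			/* 1: write the payload */
	atomic_store_explicit(&a->pending,	/* 2: publish the flag  */
			      old | ACK_TIMER, memory_order_release);
}

/* Lockless fast-path check, like the new test in tcp_delack_timer(). */
static bool nothing_to_do(struct ack_demo *a)
{
	return !(atomic_load_explicit(&a->pending,
				      memory_order_acquire) & ACK_TIMER);
}

A handler that observes the flag through the acquire load is guaranteed to
also observe the deadline written before the release store; a handler that
does not observe it can safely skip the lock altogether.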
delack timer is not stopped from inet_csk_clear_xmit_timer()
because we do not define INET_CSK_CLEAR_TIMERS.

This is a conscious choice : inet_csk_clear_xmit_timer()
is often called from another cpu. Calling del_timer()
would cause false sharing and lock contention.

This means that very often, tcp_delack_timer() is called
at the timer expiration, while there is no ACK to transmit.

This can be detected very early, avoiding the socket spinlock.

Notes:
- test about tp->compressed_ack is racy,
  but in the unlikely case there is a race, the dedicated
  compressed_ack_timer hrtimer would close it.

- Even if the fast path is not taken, reading
  icsk->icsk_ack.pending and tp->compressed_ack
  before acquiring the socket spinlock reduces
  acquisition time and chances of contention.

Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 include/net/inet_connection_sock.h | 5 +++--
 net/ipv4/inet_connection_sock.c    | 4 ++--
 net/ipv4/tcp_output.c              | 3 ++-
 net/ipv4/tcp_timer.c               | 9 +++++++++
 net/mptcp/protocol.c               | 3 ++-
 5 files changed, 18 insertions(+), 6 deletions(-)
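Putting the pieces of the changelog together, the resulting timer callback
has roughly this overall shape. This is a self-contained C11 condensation
with hypothetical names, not the kernel function itself (the real code is
the patched tcp_delack_timer() above): the callback bails out before
touching the contended lock whenever there is provably nothing to send.

#include <pthread.h>
#include <stdatomic.h>

#define ACK_TIMER 0x04

struct delack_demo {
	pthread_mutex_t lock;			/* stands in for the socket spinlock */
	_Atomic unsigned char pending;		/* stands in for icsk_ack.pending    */
	_Atomic unsigned char compressed;	/* stands in for tp->compressed_ack  */
};

static void delack_callback(struct delack_demo *d)
{
	/* Fast path: two loads instead of a lock/unlock round trip on a
	 * cache line that another CPU may currently own. The compressed
	 * check is racy on purpose; a backstop timer covers the race.
	 */
	if (!(atomic_load_explicit(&d->pending, memory_order_acquire) & ACK_TIMER) &&
	    !atomic_load_explicit(&d->compressed, memory_order_relaxed))
		return;

	pthread_mutex_lock(&d->lock);
	/* ... slow path: actually transmit the delayed ACK ... */
	pthread_mutex_unlock(&d->lock);
}

Even when the slow path is taken, the two loads have already pulled the
relevant cache lines onto this CPU, which is the "reduces acquisition time"
point made in the second note of the changelog.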