Message ID | 20250207152830.2527578-3-edumazet@google.com (mailing list archive)
---|---
State | Accepted
Commit | 7baa030155e8fa2a1bd9ea6425083c3b16787636
Delegated to | Netdev Maintainers
Series | tcp: allow to reduce max RTO
On Fri, Feb 7, 2025 at 11:30 PM Eric Dumazet <edumazet@google.com> wrote:
>
> We want to factorize calls to inet_csk_reset_xmit_timer(),
> to ease TCP_RTO_MAX change.
>
> Current users want to add tcp_pacing_delay(sk)
> to the timeout.
>
> Remaining calls to inet_csk_reset_xmit_timer()
> do not add the pacing delay. Following patch
> will convert them, passing false for @pace_delay.
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>

Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
On Fri, Feb 7, 2025 at 11:31 PM Jason Xing <kerneljasonxing@gmail.com> wrote:
>
> On Fri, Feb 7, 2025 at 11:30 PM Eric Dumazet <edumazet@google.com> wrote:
> >
> > We want to factorize calls to inet_csk_reset_xmit_timer(),
> > to ease TCP_RTO_MAX change.
> >
> > Current users want to add tcp_pacing_delay(sk)
> > to the timeout.
> >
> > Remaining calls to inet_csk_reset_xmit_timer()
> > do not add the pacing delay. Following patch
> > will convert them, passing false for @pace_delay.
> >
> > Signed-off-by: Eric Dumazet <edumazet@google.com>
>
> Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>

Reviewed-by: Neal Cardwell <ncardwell@google.com>

Thanks, Eric!

neal
From: Eric Dumazet <edumazet@google.com>
Date: Fri, 7 Feb 2025 15:28:27 +0000
> We want to factorize calls to inet_csk_reset_xmit_timer(),
> to ease TCP_RTO_MAX change.
>
> Current users want to add tcp_pacing_delay(sk)
> to the timeout.
>
> Remaining calls to inet_csk_reset_xmit_timer()
> do not add the pacing delay. Following patch
> will convert them, passing false for @pace_delay.
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>

Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 356f5aa51ce22921320e34adec111fc4e412de8f..9472ec438aaa53580bd2f6d5b320005e6dcceb29 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1422,10 +1422,12 @@ static inline unsigned long tcp_pacing_delay(const struct sock *sk)
 
 static inline void tcp_reset_xmit_timer(struct sock *sk,
 					const int what,
-					unsigned long when)
+					unsigned long when,
+					bool pace_delay)
 {
-	inet_csk_reset_xmit_timer(sk, what, when + tcp_pacing_delay(sk),
-				  TCP_RTO_MAX);
+	if (pace_delay)
+		when += tcp_pacing_delay(sk);
+	inet_csk_reset_xmit_timer(sk, what, when, TCP_RTO_MAX);
 }
 
 /* Something is really bad, we could not queue an additional packet,
@@ -1454,7 +1456,7 @@ static inline void tcp_check_probe_timer(struct sock *sk)
 {
 	if (!tcp_sk(sk)->packets_out && !inet_csk(sk)->icsk_pending)
 		tcp_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
-				     tcp_probe0_base(sk));
+				     tcp_probe0_base(sk), true);
 }
 
 static inline void tcp_init_wl(struct tcp_sock *tp, u32 seq)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index cf5cb710f202b29563de51179eaed0823aff8090..dc872728589fec5753e1bea9b89804731f284d05 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -3282,7 +3282,7 @@ void tcp_rearm_rto(struct sock *sk)
 		 */
 		rto = usecs_to_jiffies(max_t(int, delta_us, 1));
 	}
-	tcp_reset_xmit_timer(sk, ICSK_TIME_RETRANS, rto);
+	tcp_reset_xmit_timer(sk, ICSK_TIME_RETRANS, rto, true);
 }
 
@@ -3562,7 +3562,7 @@ static void tcp_ack_probe(struct sock *sk)
 		unsigned long when = tcp_probe0_when(sk, TCP_RTO_MAX);
 
 		when = tcp_clamp_probe0_to_user_timeout(sk, when);
-		tcp_reset_xmit_timer(sk, ICSK_TIME_PROBE0, when);
+		tcp_reset_xmit_timer(sk, ICSK_TIME_PROBE0, when, true);
 	}
 }
 
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 93401dbf39d223a4943579786be5aa6d14e0ed8d..ea5104952a053c17f5522e78d2b557a01389bc4d 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -2911,7 +2911,7 @@ bool tcp_schedule_loss_probe(struct sock *sk, bool advancing_rto)
 	if (rto_delta_us > 0)
 		timeout = min_t(u32, timeout,
 				usecs_to_jiffies(rto_delta_us));
-	tcp_reset_xmit_timer(sk, ICSK_TIME_LOSS_PROBE, timeout);
+	tcp_reset_xmit_timer(sk, ICSK_TIME_LOSS_PROBE, timeout, true);
 	return true;
 }
 
@@ -3545,7 +3545,7 @@ void tcp_xmit_retransmit_queue(struct sock *sk)
 	}
 	if (rearm_timer)
 		tcp_reset_xmit_timer(sk, ICSK_TIME_RETRANS,
-				     inet_csk(sk)->icsk_rto);
+				     inet_csk(sk)->icsk_rto, true);
 }
 
 /* We allow to exceed memory limits for FIN packets to expedite
@@ -4401,7 +4401,7 @@ void tcp_send_probe0(struct sock *sk)
 	}
 
 	timeout = tcp_clamp_probe0_to_user_timeout(sk, timeout);
-	tcp_reset_xmit_timer(sk, ICSK_TIME_PROBE0, timeout);
+	tcp_reset_xmit_timer(sk, ICSK_TIME_PROBE0, timeout, true);
 }
 
 int tcp_rtx_synack(const struct sock *sk, struct request_sock *req)
We want to factorize calls to inet_csk_reset_xmit_timer(),
to ease TCP_RTO_MAX change.

Current users want to add tcp_pacing_delay(sk)
to the timeout.

Remaining calls to inet_csk_reset_xmit_timer()
do not add the pacing delay. Following patch
will convert them, passing false for @pace_delay.

Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 include/net/tcp.h     | 10 ++++++----
 net/ipv4/tcp_input.c  |  4 ++--
 net/ipv4/tcp_output.c |  6 +++---
 3 files changed, 11 insertions(+), 9 deletions(-)