Message ID | 20231204114322.9218-1-lulie@linux.alibaba.com (mailing list archive)
---|---
State | Handled Elsewhere
Series | [net-next] tcp: add tracepoints for data send/recv/acked
On Mon, Dec 4, 2023 at 12:43 PM Philo Lu <lulie@linux.alibaba.com> wrote:
>
> Add 3 tracepoints, namely tcp_data_send/tcp_data_recv/tcp_data_acked,
> which will be called every time a tcp data packet is sent, received, and
> acked.
> tcp_data_send: called after a data packet is sent.
> tcp_data_recv: called after a data packet is received.
> tcp_data_acked: called after a valid ack packet is processed (some sent
> data are acknowledged).
>
> We use these callbacks for fine-grained tcp monitoring, which collects
> and analyses every tcp request/response event information. The whole
> system has been described in SIGMOD'18 (see
> https://dl.acm.org/doi/pdf/10.1145/3183713.3190659 for details). To
> achieve this with bpf, we require hooks for data events that call bpf
> prog (1) when any data packet is sent/received/acked, and (2) after
> critical tcp state variables have been updated (e.g., snd_una, snd_nxt,
> rcv_nxt). However, existing bpf hooks cannot meet our requirements.
> Besides, these tracepoints help to debug tcp when data send/recv/acked.

This I do not understand.

> Though kretprobe/fexit can also be used to collect these information,
> they will not work if the kernel functions get inlined. Considering the
> stability, we prefer tracepoint as the solution.

I dunno, this seems quite weak to me. I see many patches coming to add
tracing in the stack, but no patches fixing any issues.

It really looks like: we do not know how the TCP stack works, we do not
know if there is any issue, so let us add tracepoints to help us make
forward progress in our analysis.

These tracepoints will not tell how many segments/bytes were
sent/acked/received, and I really do not see how we will avoid adding
more stuff in the future, forcing the compiler to save more state just
in case a tracepoint needs the info.

The argument of "add minimal info", so that we can silently add more
stuff in the future "for free", is not something I buy.

I very much prefer that you make sure the stuff you need is not
inlined, so that the standard kprobe/kretprobe facility can be used.
On Mon, 4 Dec 2023 13:28:21 +0100, Eric Dumazet <edumazet@google.com> wrote:
> On Mon, Dec 4, 2023 at 12:43 PM Philo Lu <lulie@linux.alibaba.com> wrote:
> >
> > [...]
>
> I dunno, this seems quite weak to me. I see many patches coming to add
> tracing in the stack, but no patches fixing any issues.

We have implemented a mechanism that splits the requests and responses
out of a TCP connection using these hooks, and it can handle various
protocols such as HTTP, HTTPS, Redis, and MySQL. This mechanism allows
us to record important information about each request and response,
including the amount of data uploaded, the time taken by the server to
handle the request, and the time taken for the client to receive the
response.

This mechanism has been running internally for many years and has proven
to be very useful. One of its main benefits is that it helps locate the
source of problems: if there is an issue with the network, the
application, or the machine, we can use it to identify and isolate the
issue.

TCP has long been a challenge when it comes to tracking the transmission
of data on the network. The application can only confirm that it has
handed a certain amount of data to the kernel; it has limited visibility
into whether the peer has actually received that data. Our mechanism
addresses this by providing insight into how much data the peer received
and when it was received. Furthermore, we can also detect packet loss or
delays caused by the server.

https://help-static-aliyun-doc.aliyuncs.com/assets/img/zh-CN/7912288961/9732df025beny.svg

So we are not adding these tracepoints for some unknown debugging
purpose. We have a clear goal; debugging is just an incidental
capability.

Thanks.
On Tue, Dec 5, 2023 at 3:11 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> [...]
>
> So we are not adding these tracepoints for some unknown debugging
> purpose. We have a clear goal; debugging is just an incidental
> capability.

We have powerful mechanisms in the stack already that ordinary (no
privilege requested) applications can readily use.

We have been using them for a while.

If existing mechanisms are missing something you need, please expand
them.

For reference, start by looking at the tcp_get_timestamping_opt_stats()
history.

The sender side can, for instance, get precise timestamps.
Combinations of these timestamps reveal different parts of the overall
network latency:

T0: sendmsg() enters TCP
T1: first byte enters qdisc
T2: first byte sent to the NIC
T3: first byte ACKed in TCP
T4: last byte sent to the NIC
T5: last byte ACKed

T1 - T0: how long the first byte was blocked in the TCP layer ("Head of
Line Blocking" latency).
T2 - T1: how long the first byte was blocked in the Linux traffic
shaping layer (known as qdisc).
T3 - T2: the network "distance" (propagation delay + current queuing
delay along the network path and at the receiver).
T5 - T2: how fast the sent chunk was delivered.
Message Size / (T5 - T0): goodput (from the application's perspective)
On Tue, 5 Dec 2023 20:39:28 +0100, Eric Dumazet <edumazet@google.com> wrote:
> > So we are not adding these tracepoints for some unknown debugging
> > purpose. We have a clear goal; debugging is just an incidental
> > capability.
>
> We have powerful mechanisms in the stack already that ordinary (no
> privilege requested) applications can readily use.

I'm not arguing for or against this patch set, but tracepoints are also
available to other utilities that may have non-privileged access. They
are not just for tracers.

-- Steve
On Tue, 5 Dec 2023 20:39:28 +0100, Eric Dumazet <edumazet@google.com> wrote:
> [...]
>
> For reference, start by looking at the tcp_get_timestamping_opt_stats()
> history.
>
> The sender side can, for instance, get precise timestamps.
>
> [...]

The key point is that with our mechanism the application does not need
to be modified. As long as the application's protocol is
request-response, we can attach to a TCP connection at any time to
analyze the requests and responses, and record the start and end time of
each. We also record some other information, such as TTL.

Thanks.
diff --git a/include/trace/events/tcp.h b/include/trace/events/tcp.h
index 7b1ddffa3dfc..1423f7cb73f9 100644
--- a/include/trace/events/tcp.h
+++ b/include/trace/events/tcp.h
@@ -113,6 +113,13 @@ DEFINE_EVENT(tcp_event_sk_skb, tcp_send_reset,
 	TP_ARGS(sk, skb)
 );
 
+DEFINE_EVENT(tcp_event_sk_skb, tcp_data_recv,
+
+	TP_PROTO(const struct sock *sk, const struct sk_buff *skb),
+
+	TP_ARGS(sk, skb)
+);
+
 /*
  * tcp event with arguments sk
  *
@@ -187,6 +194,20 @@ DEFINE_EVENT(tcp_event_sk, tcp_rcv_space_adjust,
 	TP_ARGS(sk)
 );
 
+DEFINE_EVENT(tcp_event_sk, tcp_data_send,
+
+	TP_PROTO(struct sock *sk),
+
+	TP_ARGS(sk)
+);
+
+DEFINE_EVENT(tcp_event_sk, tcp_data_acked,
+
+	TP_PROTO(struct sock *sk),
+
+	TP_ARGS(sk)
+);
+
 TRACE_EVENT(tcp_retransmit_synack,
 
 	TP_PROTO(const struct sock *sk, const struct request_sock *req),
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index bcb55d98004c..edb1e24a3423 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -824,6 +824,8 @@ static void tcp_event_data_recv(struct sock *sk, struct sk_buff *skb)
 
 	now = tcp_jiffies32;
 
+	trace_tcp_data_recv(sk, skb);
+
 	if (!icsk->icsk_ack.ato) {
 		/* The _first_ data packet received, initialize
 		 * delayed ACK engine.
@@ -3486,6 +3488,8 @@ static int tcp_clean_rtx_queue(struct sock *sk, const struct sk_buff *ack_skb,
 		}
 	}
 #endif
+
+	trace_tcp_data_acked(sk);
 	return flag;
 }
 
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index eb13a55d660c..cb6f2af55ce2 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -2821,6 +2821,8 @@ static bool tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle,
 		/* Send one loss probe per tail loss episode. */
 		if (push_one != 2)
 			tcp_schedule_loss_probe(sk, false);
+
+		trace_tcp_data_send(sk);
 		return false;
 	}
 	return !tp->packets_out && !tcp_write_queue_empty(sk);