Message ID | 20210521182104.18273-1-kuniyu@amazon.co.jp (mailing list archive) |
---|---|
Series | Socket migration for SO_REUSEPORT.
On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> The SO_REUSEPORT option allows sockets to listen on the same port and to
> accept connections evenly. However, there is a defect in the current
> implementation [1]. When a SYN packet is received, the connection is tied
> to a listening socket. Accordingly, when the listener is closed, in-flight
> requests during the three-way handshake and child sockets in the accept
> queue are dropped even if other listeners on the same port could accept
> such connections.
>
> This situation can happen when various server management tools restart
> server processes (such as nginx). For instance, when we change the nginx
> configuration and restart it, it spins up new workers that respect the new
> configuration and closes all listeners on the old workers, resulting in the
> in-flight ACK of the 3WHS being answered with an RST.
>
> To avoid such a situation, users have to know in detail how the kernel
> handles SYN packets and implement connection draining with eBPF [2]:
>
> 1. Stop routing SYN packets to the listener via eBPF.
> 2. Wait for all timers to expire to complete requests.
> 3. Accept connections until EAGAIN, then close the listener.
>
> or
>
> 1. Start counting SYN packets and accept syscalls using an eBPF map.
> 2. Stop routing SYN packets.
> 3. Accept connections up to the count, then close the listener.
>
> Either way, we cannot close a listener immediately. Ideally, however, the
> application need not drain the not-yet-accepted sockets, because the 3WHS
> and tying a connection to a listener are purely kernel behaviour. The root
> cause is within the kernel, so the issue should be addressed in kernel
> space and should not be visible to user space. This patchset fixes it so
> that users need not care about the kernel implementation or connection
> draining. With this patchset, the kernel redistributes requests and
> connections from a listener to the others in the same reuseport group
> at/after the close or shutdown syscall.
>
> Although some software does connection draining, there are still merits in
> migration. For security reasons, such as replacing TLS certificates, we
> may want to apply new settings as soon as possible and/or we may not be
> able to wait for connection draining. The sockets in the accept queue have
> not started application sessions yet. So, if we do not drain such sockets,
> they can be handled by the newer listeners and could have a longer
> lifetime. It is difficult to drain all connections in every case, but we
> can decrease such aborted connections by migration. In that sense,
> migration is always better than draining.
>
> Moreover, auto-migration simplifies user-space logic and also works well in
> cases where we cannot modify and rebuild a server program to implement the
> workaround.
>
> Note that the source and destination listeners MUST have the same settings
> at the socket API level; otherwise, applications may face inconsistency and
> cause errors. In such a case, we have to use an eBPF program to select a
> specific listener or to cancel migration.
>
> Special thanks to Martin KaFai Lau for bouncing ideas and exchanging code
> snippets along the way.
>
>
> Link:
> [1] The SO_REUSEPORT socket option
> https://lwn.net/Articles/542629/
>
> [2] Re: [PATCH 1/1] net: Add SO_REUSEPORT_LISTEN_OFF socket option as drain mode
> https://lore.kernel.org/netdev/1458828813.10868.65.camel@edumazet-glaptop3.roam.corp.google.com/

This series needs review/ACKs from TCP maintainers. Eric/Neal/Yuchung, please
take another look.
Thanks,
Daniel
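For reference, a minimal sketch of the first draining workaround described in the cover letter (stop routing SYN packets to a listener that is about to close). It assumes exactly two listeners in the reuseport group, and the map names ("reuseport_map", "closing") and the hash-based indexing are illustrative only; this is not code from the series.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Sketch only: user space stores both listeners in reuseport_map and sets
 * closing[i] = 1 for the listener index that is being drained. */
struct {
	__uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
	__uint(max_entries, 2);
	__type(key, __u32);
	__type(value, __u64);
} reuseport_map SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 2);
	__type(key, __u32);
	__type(value, __u32);
} closing SEC(".maps");

SEC("sk_reuseport")
int steer_away_from_closing(struct sk_reuseport_md *reuse_md)
{
	__u32 key = reuse_md->hash % 2;
	__u32 *flag = bpf_map_lookup_elem(&closing, &key);

	/* If the hashed listener is draining, route new SYNs to the other one. */
	if (flag && *flag)
		key = !key;

	/* On lookup failure, fall back to the kernel's default selection. */
	bpf_sk_select_reuseport(reuse_md, &reuseport_map, &key, 0);

	return SK_PASS;
}

char _license[] SEC("license") = "GPL";
```

User space would attach a program like this with SO_ATTACH_REUSEPORT_EBPF on one of the listeners; the second workaround (counting SYNs and accepts) differs only in the bookkeeping, not in the routing.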
On Tue, May 25, 2021 at 11:42 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > The SO_REUSEPORT option allows sockets to listen on the same port and to
> > accept connections evenly. However, there is a defect in the current
> > implementation [1].
> > [...]
>
> This series needs review/ACKs from TCP maintainers. Eric/Neal/Yuchung, please
> take another look.

Eric, I've looked through the bpf and tcp changes and they don't look scary at
all. I think the feature is useful and a bit of extra complexity is worth it.
So please review the tcp bits to make sure we didn't miss anything. Thanks!
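The bpf side referred to here is the new attach point this series introduces for the migration policy mentioned in the cover letter ("select a specific listener or cancel migration"). A minimal sketch, assuming the BPF_SK_REUSEPORT_SELECT_OR_MIGRATE attach type and the migrating_sk field added by the series, plus a libbpf that understands the "sk_reuseport/migrate" section name; the map name is illustrative.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Illustrative map: user space stores the new listener at index 0. */
struct {
	__uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u64);
} new_listener SEC(".maps");

SEC("sk_reuseport/migrate")
int select_or_migrate(struct sk_reuseport_md *reuse_md)
{
	__u32 key = 0;

	/* migrating_sk is NULL for ordinary SYN lookups and non-NULL when the
	 * kernel asks where to migrate a request/child of a closed listener. */
	if (!reuse_md->migrating_sk)
		return SK_PASS;	/* keep the default SYN distribution */

	/* Steer migrated sockets to the listener at index 0; returning SK_DROP
	 * here would cancel the migration for this socket instead. */
	if (bpf_sk_select_reuseport(reuse_md, &new_listener, &key, 0) < 0)
		return SK_DROP;

	return SK_PASS;
}

char _license[] SEC("license") = "GPL";
```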
On Tue, May 25, 2021 at 11:42 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > [...]
> >
> > Note that the source and destination listeners MUST have the same settings
> > at the socket API level; otherwise, applications may face inconsistency and
> > cause errors. In such a case, we have to use an eBPF program to select a
> > specific listener or to cancel migration.

This looks to be a useful feature. What happens when migrating a passively
fast-opened socket that is in the old listener but has not yet been accepted
(TFO is both a mini-socket and a full-socket)? It gets tricky when the old and
new listener have different TFO keys.

> [...]
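For context, the TFO key mentioned here is a per-listener property that user space sets explicitly, so an old and a new listener on the same port can indeed end up with different keys. A small sketch using the existing TCP_FASTOPEN and TCP_FASTOPEN_KEY socket options; the helper name and the choice of a single 16-byte primary key are illustrative assumptions.

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

#ifndef TCP_FASTOPEN_KEY
#define TCP_FASTOPEN_KEY 33	/* from include/uapi/linux/tcp.h */
#endif

/* Hypothetical helper: enable TFO on a listener and install its own 16-byte
 * primary key. Two listeners set up this way with different key bytes are
 * exactly the "different TFO key" case discussed here. */
int set_tfo_key(int listener_fd, const unsigned char key[16])
{
	int qlen = 128;	/* max pending TFO requests */

	if (setsockopt(listener_fd, IPPROTO_TCP, TCP_FASTOPEN,
		       &qlen, sizeof(qlen)))
		return -1;

	return setsockopt(listener_fd, IPPROTO_TCP, TCP_FASTOPEN_KEY, key, 16);
}
```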
From: Yuchung Cheng <ycheng@google.com>
Date: Tue, 8 Jun 2021 10:48:06 -0700
> [...]
>
> This looks to be a useful feature. What happens when migrating a passively
> fast-opened socket that is in the old listener but has not yet been accepted
> (TFO is both a mini-socket and a full-socket)? It gets tricky when the old
> and new listener have different TFO keys.

The tricky situation can happen without this patch set. We can change the
listener's TFO key while TCP_SYN_RECV sockets are still in the accept queue.
That change is already handled properly, so it does not crash applications.

In the normal 3WHS case, a full-socket is created after the 3WHS. In the TFO
case, a full-socket is created after validating the TFO cookie in the initial
SYN packet.

After that, the connection is basically handled via the full-socket, except
for the accept() syscall. So in both cases, the mini-socket is popped out of
the old listener's queue, cloned, and put into the new listener's queue. Then
we can accept() its full-socket via the cloned mini-socket.
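To make the scenario concrete, here is a minimal user-space sketch of what the migration is meant to enable (assuming this series is applied): two listeners share a port via SO_REUSEPORT with TFO enabled, the old one is closed, and requests/children that were tied to it should become acceptable on the remaining listener instead of being reset. The port number, helper name, and minimal error handling are illustrative only.

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

int reuseport_tfo_listener(unsigned short port)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(port),
		.sin_addr.s_addr = htonl(INADDR_ANY),
	};
	int one = 1, qlen = 128;
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0)
		return -1;

	/* Both listeners must share the same settings at the socket API level. */
	setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
	setsockopt(fd, IPPROTO_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen));

	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) || listen(fd, 128)) {
		close(fd);
		return -1;
	}
	return fd;
}

int main(void)
{
	int conn;
	int old_listener = reuseport_tfo_listener(8080);
	int new_listener = reuseport_tfo_listener(8080);

	if (old_listener < 0 || new_listener < 0)
		return 1;

	/* ... old_listener receives SYNs and queues TFO/3WHS children here ... */

	/* With the series applied, closing old_listener migrates its queued
	 * requests and children to new_listener instead of resetting them. */
	close(old_listener);

	/* Blocks until a connection is available; this may now return a child
	 * that was originally tied to the closed listener. */
	conn = accept(new_listener, NULL, NULL);
	if (conn >= 0)
		close(conn);

	close(new_listener);
	return 0;
}
```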
On Tue, Jun 8, 2021 at 4:04 PM Kuniyuki Iwashima <kuniyu@amazon.co.jp> wrote:
>
> [...]
>
> After that, the connection is basically handled via the full-socket, except
> for the accept() syscall. So in both cases, the mini-socket is popped out of
> the old listener's queue, cloned, and put into the new listener's queue. Then
> we can accept() its full-socket via the cloned mini-socket.

Thanks, that makes sense. Eric is the expert in this part to review the
correctness. My only suggestion is to add some stats tracking the mini-sockets
that fail to migrate for a variety of reasons (the code locations where the
requests need to be dropped). This can be useful to evaluate the effectiveness
of this new feature.
From: Yuchung Cheng <ycheng@google.com>
Date: Tue, 8 Jun 2021 16:47:37 -0700
> [...]
>
> Thanks, that makes sense. Eric is the expert in this part to review the
> correctness. My only suggestion is to add some stats tracking the mini-sockets
> that fail to migrate for a variety of reasons (the code locations where the
> requests need to be dropped). This can be useful to evaluate the effectiveness
> of this new feature.

That's a nice idea. I'll implement it as a follow-up patch or in the next spin.

For now, I would like to wait for Eric's review.

Thank you.
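If such counters end up being exposed the way most TCP statistics are (via the SNMP MIB), they would surface in /proc/net/netstat alongside the other TcpExt fields. Below is a minimal sketch of checking for them from user space; the assumption that they would live under "TcpExt:" and any eventual counter names (e.g. something like TCPMigrateReqFailure) are not part of this series yet.

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[4096];
	FILE *fp = fopen("/proc/net/netstat", "r");

	if (!fp) {
		perror("fopen");
		return 1;
	}

	/* /proc/net/netstat holds "TcpExt:" header/value line pairs; counters
	 * added for failed/successful request migration would appear here. */
	while (fgets(line, sizeof(line), fp)) {
		if (!strncmp(line, "TcpExt:", 7))
			fputs(line, stdout);
	}

	fclose(fp);
	return 0;
}
```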
On 6/9/21 2:34 AM, Kuniyuki Iwashima wrote:
>
> For now, I would like to wait for Eric's review.
>

I have been busy these days; I will review your patches by tomorrow.