
[v7,bpf-next,00/11] Socket migration for SO_REUSEPORT.

Message ID: 20210521182104.18273-1-kuniyu@amazon.co.jp

Message

Iwashima, Kuniyuki May 21, 2021, 6:20 p.m. UTC
The SO_REUSEPORT option allows sockets to listen on the same port and to
accept connections evenly. However, there is a defect in the current
implementation [1]. When a SYN packet is received, the connection is tied
to a listening socket. Accordingly, when the listener is closed, in-flight
requests during the three-way handshake and child sockets in the accept
queue are dropped even if other listeners on the same port could accept
such connections.

This situation can happen when server management tools restart server
processes such as nginx. For instance, when we change the nginx
configuration and restart it, it spins up new workers that respect the new
configuration and closes all listeners on the old workers, so the
in-flight ACKs of the 3WHS are answered with RST.

To avoid such a situation, users have to know in depth how the kernel
handles SYN packets and implement connection draining by eBPF [2], e.g.
(the steering step shared by both recipes is sketched after the list):

  1. Stop routing SYN packets to the listener by eBPF.
  2. Wait for all timers to expire so that in-flight requests complete.
  3. Accept connections until EAGAIN, then close the listener.

  or

  1. Count SYN packets and accept() syscalls using an eBPF map.
  2. Stop routing SYN packets.
  3. Accept connections up to the count, then close the listener.
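
As a rough sketch (not code from this series), the steering step could
look like the sk_reuseport program below; the two maps and the idea of an
index that user space flips before closing the old listener are
illustrative assumptions:

  /* Sketch: steer all packets (including SYNs) to the listener that
   * user space marks as active, so a listener being drained receives
   * no new requests. The map layout is an illustrative assumption. */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
      __uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
      __uint(max_entries, 2);
      __type(key, __u32);
      __type(value, __u64);
  } reuseport_map SEC(".maps");

  struct {
      __uint(type, BPF_MAP_TYPE_ARRAY);
      __uint(max_entries, 1);
      __type(key, __u32);
      __type(value, __u32);
  } active_index SEC(".maps");  /* user space flips this before close() */

  SEC("sk_reuseport")
  int steer_to_active(struct sk_reuseport_md *md)
  {
      __u32 zero = 0, *idx;

      idx = bpf_map_lookup_elem(&active_index, &zero);
      if (!idx)
          return SK_DROP;

      /* Route everything to the listener stored at *idx, so the
       * draining listener gets no new requests. */
      if (bpf_sk_select_reuseport(md, &reuseport_map, idx, 0) == 0)
          return SK_PASS;

      return SK_DROP;
  }

  char _license[] SEC("license") = "GPL";

User space updates active_index to point at the new listener's slot,
waits out the timers, then closes the old listener.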

Either way, we cannot close a listener immediately. Ideally, however,
the application should not need to drain the not-yet-accepted sockets,
because the 3WHS and tying a connection to a listener are purely kernel
behaviour. The root cause lies within the kernel, so the issue should be
addressed in kernel space and should not be visible to user space. This
patchset fixes it so that users need not care about the kernel
implementation or connection draining: the kernel redistributes requests
and connections from a closing listener to the other listeners in the
same reuseport group at/after the close or shutdown syscall.
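
The behaviour is opt-in via the net.ipv4.tcp_migrate_req sysctl added by
the first patch (it defaults to off). A minimal C sketch of turning it
on, equivalent to "sysctl -w net.ipv4.tcp_migrate_req=1":

  /* Sketch: enable SO_REUSEPORT request migration (defaults to off). */
  #include <stdio.h>

  int main(void)
  {
      FILE *f = fopen("/proc/sys/net/ipv4/tcp_migrate_req", "w");

      if (!f) {
          perror("tcp_migrate_req");  /* kernel without this series? */
          return 1;
      }
      fputs("1\n", f);
      return fclose(f) ? 1 : 0;
  }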

Although some software does connection draining, there are still merits
in migration. For security reasons such as replacing TLS certificates, we
may want to apply new settings as soon as possible and/or may not be able
to wait for connection draining. The sockets in the accept queue have not
started application sessions yet, so if we do not drain them, they can be
handled by the newer listeners and can have a longer lifetime. It is
difficult to drain all connections in every case, but we can decrease the
number of aborted connections by migration. In that sense, migration is
always better than draining.

Moreover, auto-migration simplifies user space logic and also works well
in cases where we cannot modify and rebuild a server program to implement
the workaround.

Note that the source and destination listeners MUST have the same
settings at the socket API level; otherwise, applications may face
inconsistency and errors. In such a case, we have to use an eBPF program
to select a specific listener or to cancel migration.
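
With the new BPF_SK_REUSEPORT_SELECT_OR_MIGRATE attach type (patches
8-11), such a program runs with md->migrating_sk set when a request or
child socket is being migrated, and can pick the destination itself or
return SK_DROP to cancel. A rough sketch in the spirit of the selftest
in patch 11; the sockarray and the fixed key are illustrative
assumptions:

  /* Sketch: select a destination listener on migration, or cancel. */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
      __uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
      __uint(max_entries, 16);
      __type(key, __u32);
      __type(value, __u64);
  } migrate_map SEC(".maps");

  SEC("sk_reuseport/migrate")  /* libbpf sets SELECT_OR_MIGRATE */
  int select_or_migrate(struct sk_reuseport_md *md)
  {
      __u32 key = 0;  /* slot of a known-compatible listener */

      if (!md->migrating_sk)  /* ordinary SYN lookup */
          return SK_PASS;     /* keep the kernel's default choice */

      /* A request/child of a closed listener is being migrated. */
      if (bpf_sk_select_reuseport(md, &migrate_map, &key, 0) == 0)
          return SK_PASS;

      return SK_DROP;         /* no compatible listener: cancel */
  }

  char _license[] SEC("license") = "GPL";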

Special thanks to Martin KaFai Lau for bouncing ideas and exchanging code
snippets along the way.


Link:
 [1] The SO_REUSEPORT socket option
 https://lwn.net/Articles/542629/

 [2] Re: [PATCH 1/1] net: Add SO_REUSEPORT_LISTEN_OFF socket option as drain mode
 https://lore.kernel.org/netdev/1458828813.10868.65.camel@edumazet-glaptop3.roam.corp.google.com/


Changelog:
 v7:
  * Prevent attaching/detaching a bpf prog via a shutdown()ed socket
  * Fix typo in commit messages
  * Split selftest into subtests

 v6:
 https://lore.kernel.org/bpf/20210517002258.75019-1-kuniyu@amazon.co.jp/
  * Change description in ip-sysctl.rst
  * Test IPPROTO_TCP before reading tfo_listener
  * Move reqsk_clone() to inet_connection_sock.c and rename to
    inet_reqsk_clone()
  * Pass req->rsk_listener to inet_csk_reqsk_queue_drop() and
    reqsk_queue_removed() in the migration path of receiving ACK
  * s/ARG_PTR_TO_SOCKET/PTR_TO_SOCKET/ in sk_reuseport_is_valid_access()
  * In selftest, use atomic ops to increment global vars, drop ACK by XDP,
    enable force fastopen, use "skel->bss" instead of "skel->data"

 v5:
 https://lore.kernel.org/bpf/20210510034433.52818-1-kuniyu@amazon.co.jp/
  * Move initialization of sk_node from 6th to 5th patch
  * Initialize sk_refcnt in reqsk_clone()
  * Modify some definitions in reqsk_timer_handler()
  * Validate in which path/state migration happens in selftest

 v4:
 https://lore.kernel.org/bpf/20210427034623.46528-1-kuniyu@amazon.co.jp/
  * Make some functions and variables 'static' in selftest
  * Remove 'scalability' from the cover letter

 v3:
 https://lore.kernel.org/bpf/20210420154140.80034-1-kuniyu@amazon.co.jp/
  * Add sysctl back for reuseport_grow()
  * Add helper functions to manage socks[]
  * Separate migration related logic into functions: reuseport_resurrect(),
    reuseport_stop_listen_sock(), reuseport_migrate_sock()
  * Clone request_sock to be migrated
  * Migrate request one by one
  * Pass child socket to eBPF prog

 v2:
 https://lore.kernel.org/netdev/20201207132456.65472-1-kuniyu@amazon.co.jp/
  * Do not save closed sockets in socks[]
  * Revert 607904c357c61adf20b8fd18af765e501d61a385
  * Extract inet_csk_reqsk_queue_migrate() into a single patch
  * Change the spin_lock order to avoid lockdep warning
  * Add static to __reuseport_select_sock
  * Use refcount_inc_not_zero() in reuseport_select_migrated_sock()
  * Set the default attach type in bpf_prog_load_check_attach()
  * Define new proto of BPF_FUNC_get_socket_cookie
  * Fix test to be compiled successfully
  * Update commit messages

 v1:
 https://lore.kernel.org/netdev/20201201144418.35045-1-kuniyu@amazon.co.jp/
  * Remove the sysctl option
  * Enable migration if eBPF program is not attached
  * Add expected_attach_type to check if eBPF program can migrate sockets
  * Add a field to tell migration type to eBPF program
  * Support BPF_FUNC_get_socket_cookie to get the cookie of sk
  * Allocate an empty skb if skb is NULL
  * Pass req_to_sk(req)->sk_hash because listener's hash is zero
  * Update commit messages and cover letter

 RFC:
 https://lore.kernel.org/netdev/20201117094023.3685-1-kuniyu@amazon.co.jp/


Kuniyuki Iwashima (11):
  net: Introduce net.ipv4.tcp_migrate_req.
  tcp: Add num_closed_socks to struct sock_reuseport.
  tcp: Keep TCP_CLOSE sockets in the reuseport group.
  tcp: Add reuseport_migrate_sock() to select a new listener.
  tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.
  tcp: Migrate TCP_NEW_SYN_RECV requests at retransmitting SYN+ACKs.
  tcp: Migrate TCP_NEW_SYN_RECV requests at receiving the final ACK.
  bpf: Support BPF_FUNC_get_socket_cookie() for
    BPF_PROG_TYPE_SK_REUSEPORT.
  bpf: Support socket migration by eBPF.
  libbpf: Set expected_attach_type for BPF_PROG_TYPE_SK_REUSEPORT.
  bpf: Test BPF_SK_REUSEPORT_SELECT_OR_MIGRATE.

 Documentation/networking/ip-sysctl.rst        |  25 +
 include/linux/bpf.h                           |   1 +
 include/linux/filter.h                        |   2 +
 include/net/netns/ipv4.h                      |   1 +
 include/net/sock_reuseport.h                  |   9 +-
 include/uapi/linux/bpf.h                      |  16 +
 kernel/bpf/syscall.c                          |  13 +
 net/core/filter.c                             |  23 +-
 net/core/sock_reuseport.c                     | 362 ++++++++++--
 net/ipv4/inet_connection_sock.c               | 190 +++++-
 net/ipv4/inet_hashtables.c                    |   2 +-
 net/ipv4/sysctl_net_ipv4.c                    |   9 +
 net/ipv4/tcp_ipv4.c                           |  20 +-
 net/ipv4/tcp_minisocks.c                      |   4 +-
 net/ipv6/tcp_ipv6.c                           |  14 +-
 tools/include/uapi/linux/bpf.h                |  16 +
 tools/lib/bpf/libbpf.c                        |   5 +-
 tools/testing/selftests/bpf/network_helpers.c |   2 +-
 tools/testing/selftests/bpf/network_helpers.h |   1 +
 .../bpf/prog_tests/migrate_reuseport.c        | 555 ++++++++++++++++++
 .../bpf/progs/test_migrate_reuseport.c        | 135 +++++
 21 files changed, 1336 insertions(+), 69 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/migrate_reuseport.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_migrate_reuseport.c

Comments

Daniel Borkmann May 26, 2021, 6:42 a.m. UTC | #1
On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> [...]

This series needs review/ACKs from TCP maintainers. Eric/Neal/Yuchung please take
a look again.

Thanks,
Daniel
Alexei Starovoitov June 8, 2021, 3:13 a.m. UTC | #2
On Tue, May 25, 2021 at 11:42 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > [...]
>
> This series needs review/ACKs from TCP maintainers. Eric/Neal/Yuchung please take
> a look again.

Eric,

I've looked through bpf and tcp changes and they don't look scary at all.
I think the feature is useful and a bit of extra complexity is worth it.
So please review tcp bits to make sure we didn't miss anything.

Thanks!
Yuchung Cheng June 8, 2021, 5:48 p.m. UTC | #3
On Tue, May 25, 2021 at 11:42 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > [...]
> > Note that the source and destination listeners MUST have the same
> > settings at the socket API level; otherwise, applications may face
> > inconsistency and errors. In such a case, we have to use an eBPF program
> > to select a specific listener or to cancel migration.
This looks to be a useful feature. What happens when migrating a
passively fast-opened socket that is in the old listener's queue but has
not yet been accepted (TFO is both a mini-socket and a full-socket)?
It gets tricky when the old and new listeners have different TFO keys.


Iwashima, Kuniyuki June 8, 2021, 11:03 p.m. UTC | #4
From:   Yuchung Cheng <ycheng@google.com>
Date:   Tue, 8 Jun 2021 10:48:06 -0700
> [...]
> This looks to be a useful feature. What happens when migrating a
> passively fast-opened socket that is in the old listener's queue but has
> not yet been accepted (TFO is both a mini-socket and a full-socket)?
> It gets tricky when the old and new listeners have different TFO keys.

The tricky situation can happen without this patch set. We can change
the listener's TFO key while TCP_SYN_RECV sockets are still in the accept
queue. The change is already handled properly, so it does not crash
applications.

In the normal 3WHS case, a full-socket is created after the 3WHS. In the
TFO case, a full-socket is created after validating the TFO cookie in the
initial SYN packet.

After that, the connection is basically handled via the full-socket,
except for the accept() syscall. So in both cases, the mini-socket is
popped out of the old listener's queue, cloned, and put into the new
listener's queue. Then we can accept() its full-socket via the cloned
mini-socket.
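
From user space, the effect could be sketched like this (illustrative
only, not code from the series; error handling trimmed, port arbitrary,
and net.ipv4.tcp_migrate_req=1 assumed):

  /* Sketch: a connection whose SYN (or TFO cookie) was handled by
   * listener a can still be accept()ed on listener b after a closes. */
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>

  static int reuseport_listener(const struct sockaddr_in *addr)
  {
      int one = 1, fd = socket(AF_INET, SOCK_STREAM, 0);

      setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
      bind(fd, (const struct sockaddr *)addr, sizeof(*addr));
      listen(fd, 32);
      return fd;
  }

  int main(void)
  {
      struct sockaddr_in addr = {
          .sin_family = AF_INET,
          .sin_port = htons(4321),
          .sin_addr.s_addr = htonl(INADDR_LOOPBACK),
      };
      int a = reuseport_listener(&addr);
      int b = reuseport_listener(&addr);
      int c;

      /* ... clients connect; some requests land in a's queue ... */

      close(a);  /* pending requests/children migrate to b */

      c = accept(b, NULL, NULL);  /* may return a connection that
                                   * was originally tied to a */
      if (c >= 0)
          close(c);
      close(b);
      return 0;
  }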
Yuchung Cheng June 8, 2021, 11:47 p.m. UTC | #5
On Tue, Jun 8, 2021 at 4:04 PM Kuniyuki Iwashima <kuniyu@amazon.co.jp> wrote:
>
> From:   Yuchung Cheng <ycheng@google.com>
> Date:   Tue, 8 Jun 2021 10:48:06 -0700
> > [...]
> > This looks to be a useful feature. What happens when migrating a
> > passively fast-opened socket that is in the old listener's queue but has
> > not yet been accepted (TFO is both a mini-socket and a full-socket)?
> > It gets tricky when the old and new listeners have different TFO keys.
>
> The tricky situation can happen without this patch set. We can change
> the listener's TFO key while TCP_SYN_RECV sockets are still in the accept
> queue. The change is already handled properly, so it does not crash
> applications.
>
> In the normal 3WHS case, a full-socket is created after the 3WHS. In the
> TFO case, a full-socket is created after validating the TFO cookie in the
> initial SYN packet.
>
> After that, the connection is basically handled via the full-socket,
> except for the accept() syscall. So in both cases, the mini-socket is
> popped out of the old listener's queue, cloned, and put into the new
> listener's queue. Then we can accept() its full-socket via the cloned
> mini-socket.

Thanks, that makes sense. Eric is the expert in this area to review for
correctness. My only suggestion is to add some stats tracking the
mini-sockets that fail to migrate for a variety of reasons (at the code
locations where the requests need to be dropped). This can be useful for
evaluating the effectiveness of this new feature.
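
Such counters could then be consumed from user space roughly like this
(a sketch; the counter names are purely hypothetical at this point):

  /* Sketch: look for hypothetical migration counters in
   * /proc/net/netstat. Only the TcpExt header line listing the names
   * will match; a real tool would pair it with the following values
   * line. */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      char line[4096];
      FILE *f = fopen("/proc/net/netstat", "r");

      if (!f)
          return 1;
      while (fgets(line, sizeof(line), f))
          if (strstr(line, "TCPMigrateReq"))  /* hypothetical names */
              fputs(line, stdout);
      fclose(f);
      return 0;
  }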
Iwashima, Kuniyuki June 9, 2021, 12:34 a.m. UTC | #6
From:   Yuchung Cheng <ycheng@google.com>
Date:   Tue, 8 Jun 2021 16:47:37 -0700
> On Tue, Jun 8, 2021 at 4:04 PM Kuniyuki Iwashima <kuniyu@amazon.co.jp> wrote:
> > [...]
> 
> Thanks, that makes sense. Eric is the expert in this area to review for
> correctness. My only suggestion is to add some stats tracking the
> mini-sockets that fail to migrate for a variety of reasons (at the code
> locations where the requests need to be dropped). This can be useful for
> evaluating the effectiveness of this new feature.

That's a nice idea.
I'll implement it as a follow-up patch or in the next spin.

For now, I would like to wait for Eric's review.

Thank you.
Eric Dumazet June 9, 2021, 5:04 p.m. UTC | #7
On 6/9/21 2:34 AM, Kuniyuki Iwashima wrote:


> 
> For now, I would like to wait for Eric's review.
> 

I have been busy these days; I will review your patches by tomorrow.