Message ID | c597e6c6d004e5b2a26a9535c8099d389214f273.1644323503.git.alibuda@linux.alibaba.com
---|---
State | Superseded
Delegated to | Netdev Maintainers
Series | net/smc: Optimizing performance in short-lived scenarios
On 08/02/2022 13:53, D. Wythe wrote:
> From: "D. Wythe" <alibuda@linux.alibaba.com>
>
> The current implementation does not handle backlog semantics; one
> potential risk is that the server will be flooded by an unlimited
> number of connections, even if the client is SMC-incapable.

In this patch you count the number of in-flight SMC handshakes as pending and
check them against the defined max_backlog. I really like this improvement.

There is another queue in af_smc.c, the smc accept queue: any new client
socket that completed the handshake process is enqueued there (in
smc_accept_enqueue()) and waits to be accepted by the user space application.
To apply the correct semantics here, I think the number of sockets waiting in
the smc accept queue should also be counted as backlog connections, right?
I see no limit for this queue now. What do you think?
There is an indirect limit on the smc accept queue through the following code:

+	if (sk_acceptq_is_full(&smc->sk)) {
+		NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
+		goto drop;
+	}

In fact, we treat the connections in the smc accept queue as fully established
connections. As I wrote in the patch commit message, there are trade-offs to
this implementation.

Thanks.

On 2022/2/9 1:13 AM, Karsten Graul wrote:
> On 08/02/2022 13:53, D. Wythe wrote:
>> From: "D. Wythe" <alibuda@linux.alibaba.com>
>>
>> The current implementation does not handle backlog semantics; one
>> potential risk is that the server will be flooded by an unlimited
>> number of connections, even if the client is SMC-incapable.
>
> In this patch you count the number of in-flight SMC handshakes as pending and
> check them against the defined max_backlog. I really like this improvement.
>
> There is another queue in af_smc.c, the smc accept queue: any new client
> socket that completed the handshake process is enqueued there (in
> smc_accept_enqueue()) and waits to be accepted by the user space application.
> To apply the correct semantics here, I think the number of sockets waiting in
> the smc accept queue should also be counted as backlog connections, right?
> I see no limit for this queue now. What do you think?
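For reference, sk_acceptq_is_full() is a helper from include/net/sock.h. A
paraphrased sketch of its definition follows; the exact upstream code may
differ slightly between kernel versions:

static inline bool sk_acceptq_is_full(const struct sock *sk)
{
	/* sk_ack_backlog counts sockets queued for accept() */
	return READ_ONCE(sk->sk_ack_backlog) > READ_ONCE(sk->sk_max_ack_backlog);
}

So connections already sitting in the smc accept queue count against
sk_max_ack_backlog, which is why this check acts as an indirect limit on
that queue.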
On 09/02/2022 08:11, D. Wythe wrote:
>
> There is an indirect limit on the smc accept queue through the following code:
>
> +	if (sk_acceptq_is_full(&smc->sk)) {
> +		NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
> +		goto drop;
> +	}
>
> In fact, we treat the connections in the smc accept queue as fully established
> connections. As I wrote in the patch commit message, there are trade-offs to
> this implementation.

Thanks for the clarification, I got your point. You refer to the call to
sk_acceptq_added() in smc_accept_enqueue().
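For context, the function referred to here looks roughly like this in
net/smc/af_smc.c (a paraphrased sketch; exact details may vary between kernel
versions). The final sk_acceptq_added() call is what makes sockets in the smc
accept queue count against sk_max_ack_backlog:

static void smc_accept_enqueue(struct sock *parent, struct sock *sk)
{
	struct smc_sock *par = smc_sk(parent);

	sock_hold(sk); /* sock_put in smc_accept_unlink() */
	spin_lock(&par->accept_q_lock);
	list_add_tail(&smc_sk(sk)->accept_q, &par->accept_q);
	spin_unlock(&par->accept_q_lock);
	sk_acceptq_added(parent);	/* bumps the parent's sk_ack_backlog */
}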
diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
index 4969ac8..ebfce3d 100644
--- a/net/smc/af_smc.c
+++ b/net/smc/af_smc.c
@@ -73,6 +73,34 @@ static void smc_set_keepalive(struct sock *sk, int val)
 	smc->clcsock->sk->sk_prot->keepalive(smc->clcsock->sk, val);
 }
 
+static struct sock *smc_tcp_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
+					  struct request_sock *req,
+					  struct dst_entry *dst,
+					  struct request_sock *req_unhash,
+					  bool *own_req)
+{
+	struct smc_sock *smc;
+
+	smc = (struct smc_sock *)((uintptr_t)sk->sk_user_data & ~SK_USER_DATA_NOCOPY);
+
+	if (READ_ONCE(sk->sk_ack_backlog) + atomic_read(&smc->smc_pendings) >
+				sk->sk_max_ack_backlog)
+		goto drop;
+
+	if (sk_acceptq_is_full(&smc->sk)) {
+		NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
+		goto drop;
+	}
+
+	/* passthrough to origin syn recv sock fct */
+	return smc->ori_af_ops->syn_recv_sock(sk, skb, req, dst, req_unhash, own_req);
+
+drop:
+	dst_release(dst);
+	tcp_listendrop(sk);
+	return NULL;
+}
+
 static struct smc_hashinfo smc_v4_hashinfo = {
 	.lock = __RW_LOCK_UNLOCKED(smc_v4_hashinfo.lock),
 };
@@ -1595,6 +1623,9 @@ static void smc_listen_out(struct smc_sock *new_smc)
 	struct smc_sock *lsmc = new_smc->listen_smc;
 	struct sock *newsmcsk = &new_smc->sk;
 
+	if (tcp_sk(new_smc->clcsock->sk)->syn_smc)
+		atomic_dec(&lsmc->smc_pendings);
+
 	if (lsmc->sk.sk_state == SMC_LISTEN) {
 		lock_sock_nested(&lsmc->sk, SINGLE_DEPTH_NESTING);
 		smc_accept_enqueue(&lsmc->sk, newsmcsk);
@@ -2200,6 +2231,9 @@ static void smc_tcp_listen_work(struct work_struct *work)
 		if (!new_smc)
 			continue;
 
+		if (tcp_sk(new_smc->clcsock->sk)->syn_smc)
+			atomic_inc(&lsmc->smc_pendings);
+
 		new_smc->listen_smc = lsmc;
 		new_smc->use_fallback = lsmc->use_fallback;
 		new_smc->fallback_rsn = lsmc->fallback_rsn;
@@ -2266,6 +2300,15 @@ static int smc_listen(struct socket *sock, int backlog)
 	smc->clcsock->sk->sk_data_ready = smc_clcsock_data_ready;
 	smc->clcsock->sk->sk_user_data =
 		(void *)((uintptr_t)smc | SK_USER_DATA_NOCOPY);
+
+	/* save origin ops */
+	smc->ori_af_ops = inet_csk(smc->clcsock->sk)->icsk_af_ops;
+
+	smc->af_ops = *smc->ori_af_ops;
+	smc->af_ops.syn_recv_sock = smc_tcp_syn_recv_sock;
+
+	inet_csk(smc->clcsock->sk)->icsk_af_ops = &smc->af_ops;
+
 	rc = kernel_listen(smc->clcsock, backlog);
 	if (rc) {
 		smc->clcsock->sk->sk_data_ready = smc->clcsk_data_ready;
diff --git a/net/smc/smc.h b/net/smc/smc.h
index 37b2001..5e5e38d 100644
--- a/net/smc/smc.h
+++ b/net/smc/smc.h
@@ -252,6 +252,10 @@ struct smc_sock {	/* smc sock container */
 	bool		use_fallback;	/* fallback to tcp */
 	int		fallback_rsn;	/* reason for fallback */
 	u32		peer_diagnosis; /* decline reason from peer */
+	atomic_t	smc_pendings;	/* pending smc connections */
+	struct inet_connection_sock_af_ops		af_ops;
+	const struct inet_connection_sock_af_ops	*ori_af_ops;
+						/* origin af ops */
 	int		sockopt_defer_accept;
 						/* sockopt TCP_DEFER_ACCEPT
 						 * value
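As a usage illustration (not part of the patch), the backlog value that the
new smc_tcp_syn_recv_sock() check compares against comes from the
application's listen() call, which kernel_listen() propagates to
sk_max_ack_backlog on the clcsock. A minimal userspace sketch, assuming the
uapi values AF_SMC (43) and SMCPROTO_SMC (0):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef AF_SMC
#define AF_SMC 43		/* from linux/socket.h */
#endif
#define SMCPROTO_SMC 0		/* SMC over IPv4, from linux/smc.h */

/* Create an SMC listening socket; 'backlog' ends up in sk_max_ack_backlog
 * and bounds both the accept queue and, with this patch, the number of
 * in-flight SMC handshakes.
 */
static int smc_listen_example(unsigned short port, int backlog)
{
	struct sockaddr_in addr;
	int fd = socket(AF_SMC, SOCK_STREAM, SMCPROTO_SMC);

	if (fd < 0)
		return -1;

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;	/* SMC sockets bind with an INET address */
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(port);

	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    listen(fd, backlog) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}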