| Message ID | 6deeca64bfecbd01d724092a1a2c91ca8bce3ce0.1644214112.git.alibuda@linux.alibaba.com |
|---|---|
| State | Superseded |
| Series | net/smc: Optimizing performance in short-lived scenarios |
On 07/02/2022 07:24, D. Wythe wrote:
> From: "D. Wythe" <alibuda@linux.alibaba.com>
>
> This patch intends to provide a mechanism to allow automatic fallback to
> TCP according to the pressure on the SMC handshake process. At present,
> frequent incoming connections will be backlogged in the SMC handshake
> queue, raising the connection establishment time, which is quite
> unacceptable for applications based on short-lived connections.

I hope I didn't miss any news, but with your latest reply to the v2 series you
questioned the config option in this v4 patch, so this is still work in progress, right?
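For context, the mechanism described in the quoted cover letter boils down to a per-listen-socket callback that TCP consults while parsing the SMC option of an incoming SYN: when the SMC handshake workqueue is congested, the option is ignored and the connection proceeds as plain TCP. A condensed sketch of that decision path (names taken from the patch at the end of this thread; not a complete implementation):

```c
/* Condensed sketch of the decision path added by the patch below. */

/* Installed on the listen socket's internal TCP socket by smc_listen().
 * smc_hs_wq is SMC's handshake workqueue; congestion here means new SMC
 * handshakes would only queue up and add connection setup latency.
 */
static bool smc_is_in_limited(const struct sock *sk)
{
	return workqueue_congested(WORK_CPU_UNBOUND, smc_hs_wq);
}

/* In tcp_openreq_init(), while building the request socket for a SYN:
 * keep the SMC option only if no hook is installed or the hook reports
 * that the handshake path is not under pressure.
 */
ireq->smc_ok = rx_opt->smc_ok &&
	       !(tcp_sk(sk)->smc_in_limited && tcp_sk(sk)->smc_in_limited(sk));
```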
The main discussion in the v2 series is about adding a dynamic control for the
automatic fallback to TCP. I will soon add a new patch to implement this in the
v5 series, or modify it in the current patch. Which one do you recommend?

Looking forward to your suggestions. Thanks.

On 2022/2/7 3:56 PM, Karsten Graul wrote:
> On 07/02/2022 07:24, D. Wythe wrote:
>> From: "D. Wythe" <alibuda@linux.alibaba.com>
>>
>> This patch intends to provide a mechanism to allow automatic fallback to
>> TCP according to the pressure on the SMC handshake process. At present,
>> frequent incoming connections will be backlogged in the SMC handshake
>> queue, raising the connection establishment time, which is quite
>> unacceptable for applications based on short-lived connections.
>
> I hope I didn't miss any news, but with your latest reply to the v2 series you
> questioned the config option in this v4 patch, so this is still work in progress, right?
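For context, the "dynamic control" under discussion would replace the compile-time CONFIG_SMC_AUTO_FALLBACK switch with a runtime toggle. One possible shape, shown purely as an illustration (the field name and how it would be exposed, e.g. via sysctl or netlink, are assumptions and not part of this patch), is a per-namespace switch consulted in addition to the congestion check:

```c
/* Hypothetical sketch only: a runtime on/off switch for the limitation.
 * The field name (limit_smc_hs) and its user-space interface are assumed
 * for illustration; this is not what the v4 patch implements.
 */
static bool smc_is_in_limited(const struct sock *sk)
{
	if (!READ_ONCE(sock_net(sk)->smc.limit_smc_hs))	/* runtime toggle off */
		return false;

	return workqueue_congested(WORK_CPU_UNBOUND, smc_hs_wq);
}
```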
On 07/02/2022 10:50, D. Wythe wrote:
>
> The main discussion in the v2 series is about adding a dynamic control for the
> automatic fallback to TCP. I will soon add a new patch to implement this in the
> v5 series, or modify it in the current patch. Which one do you recommend?
>

When you change a patch, you need to send a new series.
You may have misunderstood what I mean ... but it doesn't matter now, I've sent
the v5 series. Looking forward to your suggestions on the v5 series. Thanks.

On 2022/2/7 6:03 PM, Karsten Graul wrote:
> On 07/02/2022 10:50, D. Wythe wrote:
>>
>> The main discussion in the v2 series is about adding a dynamic control for the
>> automatic fallback to TCP. I will soon add a new patch to implement this in the
>> v5 series, or modify it in the current patch. Which one do you recommend?
>>
>
> When you change a patch, you need to send a new series.
```diff
diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index 78b91bb..1c4ae5d 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -394,6 +394,7 @@ struct tcp_sock {
 	bool	is_mptcp;
 #endif
 #if IS_ENABLED(CONFIG_SMC)
+	bool	(*smc_in_limited)(const struct sock *sk);
 	bool	syn_smc;	/* SYN includes SMC */
 #endif

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index dc49a3d..9890de9 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -6701,7 +6701,8 @@ static void tcp_openreq_init(struct request_sock *req,
 	ireq->ir_num = ntohs(tcp_hdr(skb)->dest);
 	ireq->ir_mark = inet_request_mark(sk, skb);
 #if IS_ENABLED(CONFIG_SMC)
-	ireq->smc_ok = rx_opt->smc_ok;
+	ireq->smc_ok = rx_opt->smc_ok && !(tcp_sk(sk)->smc_in_limited &&
+					   tcp_sk(sk)->smc_in_limited(sk));
 #endif
 }

diff --git a/net/smc/Kconfig b/net/smc/Kconfig
index 1ab3c5a..a4e1713 100644
--- a/net/smc/Kconfig
+++ b/net/smc/Kconfig
@@ -19,3 +19,15 @@ config SMC_DIAG
 	  smcss.

 	  if unsure, say Y.
+
+if SMC
+
+config SMC_AUTO_FALLBACK
+	bool "SMC: automatic fallback to TCP"
+	default y
+	help
+	  Allow automatic fallback to TCP accroding to the pressure of SMC-R
+	  handshake process.
+
+	  If that's not what you except or unsure, say N.
+endif
diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
index 697573f..46f86a2 100644
--- a/net/smc/af_smc.c
+++ b/net/smc/af_smc.c
@@ -101,6 +101,24 @@ static struct sock *smc_tcp_syn_recv_sock(const struct sock *sk, struct sk_buff
 	return NULL;
 }

+#if IS_ENABLED(CONFIG_SMC_AUTO_FALLBACK)
+static bool smc_is_in_limited(const struct sock *sk)
+{
+	const struct smc_sock *smc;
+
+	smc = (const struct smc_sock *)
+		((uintptr_t)sk->sk_user_data & ~SK_USER_DATA_NOCOPY);
+
+	if (!smc)
+		return true;
+
+	if (workqueue_congested(WORK_CPU_UNBOUND, smc_hs_wq))
+		return true;
+
+	return false;
+}
+#endif
+
 static struct smc_hashinfo smc_v4_hashinfo = {
 	.lock = __RW_LOCK_UNLOCKED(smc_v4_hashinfo.lock),
 };
@@ -2206,6 +2224,10 @@ static int smc_listen(struct socket *sock, int backlog)

 	inet_csk(smc->clcsock->sk)->icsk_af_ops = &smc->af_ops;

+#if IS_ENABLED(CONFIG_SMC_AUTO_FALLBACK)
+	tcp_sk(smc->clcsock->sk)->smc_in_limited = smc_is_in_limited;
+#endif
+
 	rc = kernel_listen(smc->clcsock, backlog);
 	if (rc) {
 		smc->clcsock->sk->sk_data_ready = smc->clcsk_data_ready;
```
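A note on the smc_is_in_limited() helper in the patch: for an SMC listen socket, the internal clcsock's sk_user_data points back to the owning smc_sock, and flag bits such as SK_USER_DATA_NOCOPY are stored in the low bits of that pointer, which is why the cast masks them off before use. A standalone restatement of that lookup, with a helper name made up here for illustration:

```c
#include <net/sock.h>

struct smc_sock;	/* opaque here; defined in net/smc/smc.h */

/* Illustrative helper (name is hypothetical): recover the smc_sock that a
 * listening clcsock references via sk_user_data. SK_USER_DATA_NOCOPY is a
 * flag kept in the low bits of the pointer, so it must be masked off
 * before the value can be treated as a pointer.
 */
static inline const struct smc_sock *smc_sk_from_clcsock(const struct sock *sk)
{
	return (const struct smc_sock *)
		((uintptr_t)sk->sk_user_data & ~SK_USER_DATA_NOCOPY);
}
```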