| Message ID | 20211116033011.16658-1-tonylu@linux.alibaba.com (mailing list archive) |
|---|---|
| State | RFC |
| Delegated to | Netdev Maintainers |
| Series | [RFC,net] net/smc: Ensure the active closing peer first closes clcsock |
On 16/11/2021 04:30, Tony Lu wrote:
> We found an issue when replacing TCP with SMC. When the actively closing
> peer calls close() in userspace, the peer's clcsock doesn't enter the TCP
> active close process; instead, the passively closing peer closes it first
> and enters TIME_WAIT state. This behavior doesn't match what we expected.
> After reading RFC 7609, there is no clear description of the order in
> which the clcsock should be closed during the close process.

Thanks for your detailed description, it helped me to understand the problem.
Your point is that SMC sockets should show the same behavior as TCP sockets
in this situation: the side that actively closed the socket should get into
TIME_WAIT state, and not the passive side. I agree with this.

Your idea to fix it looks like a good solution to me. But I need to do more
testing to make sure that other SMC implementations (not Linux) work as
expected with this change. For example, Linux does not actively monitor the
clcsock state, but if another implementation did this, it could happen that
the SMC socket is already closed when the clcsock shutdown arrives, and
pending data transfers are aborted.

I will respond to your RFC when I have finished my testing.

Thank you.
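For reference, the TCP baseline behavior described here can be seen with a plain TCP socket: the side that calls close() first sends the first FIN and, once the peer also closes its side, is the one left in TIME_WAIT. Below is a minimal illustrative C client (not part of the thread); the loopback address and port 8080 are placeholders, and a TCP listener is assumed to be running there.

```c
/* Minimal TCP client to illustrate the baseline behavior referenced above:
 * the actively closing side ends up in TIME_WAIT. Address and port are
 * placeholders; a TCP listener must already be running on 127.0.0.1:8080.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr;
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(8080);                /* placeholder port */
	inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("connect");
		close(fd);
		return 1;
	}

	/* Active close: this side sends the first FIN. Once the peer closes
	 * its side as well, this endpoint (not the peer) holds the TIME_WAIT
	 * state, visible with "ss -tan state time-wait".
	 */
	close(fd);
	return 0;
}
```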
On 17/11/2021 17:19, Karsten Graul wrote:
> On 16/11/2021 04:30, Tony Lu wrote:
>> We found an issue when replacing TCP with SMC. When the actively closing
>> peer calls close() in userspace, the peer's clcsock doesn't enter the TCP
>> active close process; instead, the passively closing peer closes it first
>> and enters TIME_WAIT state. This behavior doesn't match what we expected.
>> After reading RFC 7609, there is no clear description of the order in
>> which the clcsock should be closed during the close process.
>
> Thanks for your detailed description, it helped me to understand the problem.
> Your point is that SMC sockets should show the same behavior as TCP sockets
> in this situation: the side that actively closed the socket should get into
> TIME_WAIT state, and not the passive side. I agree with this.
>
> Your idea to fix it looks like a good solution to me. But I need to do more
> testing to make sure that other SMC implementations (not Linux) work as
> expected with this change. For example, Linux does not actively monitor the
> clcsock state, but if another implementation did this, it could happen that
> the SMC socket is already closed when the clcsock shutdown arrives, and
> pending data transfers are aborted.
>
> I will respond to your RFC when I have finished my testing.
>
> Thank you.

Testing and discussions are finished, the patch looks good.
Can you please send your change as a patch to the mailing list?
On Mon, Nov 22, 2021 at 05:47:43PM +0100, Karsten Graul wrote:
> On 17/11/2021 17:19, Karsten Graul wrote:
> > On 16/11/2021 04:30, Tony Lu wrote:
> >> We found an issue when replacing TCP with SMC. When the actively closing
> >> peer calls close() in userspace, the peer's clcsock doesn't enter the TCP
> >> active close process; instead, the passively closing peer closes it first
> >> and enters TIME_WAIT state. This behavior doesn't match what we expected.
> >> After reading RFC 7609, there is no clear description of the order in
> >> which the clcsock should be closed during the close process.
> >
> > Thanks for your detailed description, it helped me to understand the problem.
> > Your point is that SMC sockets should show the same behavior as TCP sockets
> > in this situation: the side that actively closed the socket should get into
> > TIME_WAIT state, and not the passive side. I agree with this.
> >
> > Your idea to fix it looks like a good solution to me. But I need to do more
> > testing to make sure that other SMC implementations (not Linux) work as
> > expected with this change. For example, Linux does not actively monitor the
> > clcsock state, but if another implementation did this, it could happen that
> > the SMC socket is already closed when the clcsock shutdown arrives, and
> > pending data transfers are aborted.
> >
> > I will respond to your RFC when I have finished my testing.
> >
> > Thank you.
> >
> Testing and discussions are finished, the patch looks good.
> Can you please send your change as a patch to the mailing list?

Thanks for your advice and testing. I will send it soon.

Cheers,
Tony Lu
On 16/11/2021 04:30, Tony Lu wrote:
> diff --git a/net/smc/smc_close.c b/net/smc/smc_close.c
> index 0f9ffba07d26..04620b53b74a 100644
> --- a/net/smc/smc_close.c
> +++ b/net/smc/smc_close.c
> @@ -228,6 +228,12 @@ int smc_close_active(struct smc_sock *smc)
>  		/* send close request */
>  		rc = smc_close_final(conn);
>  		sk->sk_state = SMC_PEERCLOSEWAIT1;
> +
> +		/* actively shutdown clcsock before peer close it,
> +		 * prevent peer from entering TIME_WAIT state.
> +		 */
> +		if (smc->clcsock && smc->clcsock->sk)
> +			rc = kernel_sock_shutdown(smc->clcsock, SHUT_RDWR);
>  	} else {

While integrating this patch I stumbled over the overwritten rc, which was
already set with the return value from smc_close_final().
Is the rc from kernel_sock_shutdown() even important for the result of this
function? How would you handle this?
On Tue, Nov 23, 2021 at 10:26:21AM +0100, Karsten Graul wrote:
> On 16/11/2021 04:30, Tony Lu wrote:
> > diff --git a/net/smc/smc_close.c b/net/smc/smc_close.c
> > index 0f9ffba07d26..04620b53b74a 100644
> > --- a/net/smc/smc_close.c
> > +++ b/net/smc/smc_close.c
> > @@ -228,6 +228,12 @@ int smc_close_active(struct smc_sock *smc)
> >  		/* send close request */
> >  		rc = smc_close_final(conn);
> >  		sk->sk_state = SMC_PEERCLOSEWAIT1;
> > +
> > +		/* actively shutdown clcsock before peer close it,
> > +		 * prevent peer from entering TIME_WAIT state.
> > +		 */
> > +		if (smc->clcsock && smc->clcsock->sk)
> > +			rc = kernel_sock_shutdown(smc->clcsock, SHUT_RDWR);
> >  	} else {
>
> While integrating this patch I stumbled over the overwritten rc, which was
> already set with the return value from smc_close_final().
> Is the rc from kernel_sock_shutdown() even important for the result of this
> function? How would you handle this?

Hi Graul,

I have investigated the cases in which smc_close_final() returns an error:

1. return -EPIPE
   * conn->killed
   * !conn->lgr || (conn->lgr->is_smcd && conn->lgr->peer_shutdown)
2. return -ENOLINK
   * !smc_link_usable(link)
   * the connection's link has changed while getting a free wr slot
3. return -EBUSY
   * smc_wr_tx_get_free_slot_index has no available slot

The return code -EBUSY is important for user space, which can call close()
again. -ENOLINK and -EPIPE mean there is no chance to tell the peer to
perform its close process; the application should know this. The clcsock
will be released in the end anyway.

The callers of the upper function smc_close_active() are:

1. __smc_release(), which doesn't handle rc and returns it directly to the
   user space that called close().
2. smc_shutdown(), which returns rc to its caller, combined with the result
   of kernel_sock_shutdown().

IMHO, given that, it is better not to ignore the return code of
smc_close_final(), and to move kernel_sock_shutdown() to __smc_release(),
because smc_shutdown() also calls kernel_sock_shutdown() after
smc_close_active() and smc_close_shutdown_write(), which move the socket
into SMC_PEERCLOSEWAIT1. There is no need to call it twice with SHUT_WR
and SHUT_RDWR.

Here is the complete code of __smc_release() in af_smc.c:

static int __smc_release(struct smc_sock *smc)
{
	struct sock *sk = &smc->sk;
	int rc = 0, rc1 = 0;

	if (!smc->use_fallback) {
		rc = smc_close_active(smc);
		/* make sure we don't call kernel_sock_shutdown() twice
		 * after smc_shutdown() was called with SHUT_WR or SHUT_RDWR,
		 * which already performs the TCP closing process.
		 */
		if (smc->clcsock &&
		    (!sk->sk_shutdown || (sk->sk_shutdown & RCV_SHUTDOWN)) &&
		    sk->sk_state == SMC_PEERCLOSEWAIT1) {
			rc1 = kernel_sock_shutdown(smc->clcsock, SHUT_RDWR);
			rc = rc ? rc : rc1;
		}
		sock_set_flag(sk, SOCK_DEAD);
		sk->sk_shutdown |= SHUTDOWN_MASK;
	} else {
		// code ignored

Thanks for pointing this out; it makes the patch more complete.

Tony Lu
On 24/11/2021 09:57, Tony Lu wrote:
> IMHO, given that, it is better not to ignore the return code of
> smc_close_final(), and to move kernel_sock_shutdown() to __smc_release(),
> because smc_shutdown() also calls kernel_sock_shutdown() after
> smc_close_active() and smc_close_shutdown_write(), which move the socket
> into SMC_PEERCLOSEWAIT1. There is no need to call it twice with SHUT_WR
> and SHUT_RDWR.

Since the idea is to shut down the socket before the remote peer shuts it
down first, are you sure that this shutdown in smc_release() is not too late?
Is it sure that smc_release() is called in time for this processing?

Maybe it is better to keep the shutdown in smc_close_active() and to use an
rc1 just like shown in your proposal, and return either the rc of
smc_close_final() or the rc of kernel_sock_shutdown().

I see the possibility of calling shutdown twice for the clcsock, but does it
harm enough to give a reason to check it before in smc_shutdown()? I expect
TCP to handle this already.
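A minimal sketch of what this suggestion could look like inside smc_close_active(), based on the RFC hunk quoted earlier; this is not the final patch, and rc1 is an illustrative local variable that would have to be declared at the top of the function.

```c
		/* Sketch only: keep the return code of smc_close_final() and
		 * fall back to the kernel_sock_shutdown() result only when
		 * smc_close_final() succeeded. rc1 is an illustrative name.
		 */
		/* send close request */
		rc = smc_close_final(conn);
		sk->sk_state = SMC_PEERCLOSEWAIT1;

		/* actively shutdown clcsock before peer close it,
		 * prevent peer from entering TIME_WAIT state.
		 */
		if (smc->clcsock && smc->clcsock->sk) {
			rc1 = kernel_sock_shutdown(smc->clcsock, SHUT_RDWR);
			rc = rc ? rc : rc1;
		}
```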
On Wed, Nov 24, 2021 at 11:08:23AM +0100, Karsten Graul wrote:
> On 24/11/2021 09:57, Tony Lu wrote:
> > IMHO, given that, it is better not to ignore the return code of
> > smc_close_final(), and to move kernel_sock_shutdown() to __smc_release(),
> > because smc_shutdown() also calls kernel_sock_shutdown() after
> > smc_close_active() and smc_close_shutdown_write(), which move the socket
> > into SMC_PEERCLOSEWAIT1. There is no need to call it twice with SHUT_WR
> > and SHUT_RDWR.
>
> Since the idea is to shut down the socket before the remote peer shuts it
> down first, are you sure that this shutdown in smc_release() is not too late?

Hi Graul,

Yes, I have tested this idea; it can be too late sometimes, so it won't fix
this issue.

> Is it sure that smc_release() is called in time for this processing?
>
> Maybe it is better to keep the shutdown in smc_close_active() and to use an
> rc1 just like shown in your proposal, and return either the rc of
> smc_close_final() or the rc of kernel_sock_shutdown().

Yep, I am testing this approach in my environment. I am going to keep both
return codes and return the meaningful one.

> I see the possibility of calling shutdown twice for the clcsock, but does it
> harm enough to give a reason to check it before in smc_shutdown()? I expect
> TCP to handle this already.

TCP can handle this already, but it doesn't make much sense to call it twice.
In smc_shutdown(), we can check sk_shutdown before calling
kernel_sock_shutdown(), which slightly speeds up the release process.

I will send this soon, thanks for your advice.

Tony Lu
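A rough sketch of the smc_shutdown() change described here, i.e. skipping the second TCP shutdown when smc_close_active() has already shut down the clcsock in both directions; the exact condition and placement are illustrative and may differ from the final patch.

```c
	/* Sketch only: in smc_shutdown(), avoid a second kernel_sock_shutdown()
	 * if the clcsock was already fully shut down by smc_close_active().
	 */
	if (smc->clcsock && smc->clcsock->sk &&
	    (smc->clcsock->sk->sk_shutdown & SHUTDOWN_MASK) != SHUTDOWN_MASK)
		rc1 = kernel_sock_shutdown(smc->clcsock, how);
```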
diff --git a/net/smc/smc_close.c b/net/smc/smc_close.c
index 0f9ffba07d26..04620b53b74a 100644
--- a/net/smc/smc_close.c
+++ b/net/smc/smc_close.c
@@ -228,6 +228,12 @@ int smc_close_active(struct smc_sock *smc)
 		/* send close request */
 		rc = smc_close_final(conn);
 		sk->sk_state = SMC_PEERCLOSEWAIT1;
+
+		/* actively shutdown clcsock before peer close it,
+		 * prevent peer from entering TIME_WAIT state.
+		 */
+		if (smc->clcsock && smc->clcsock->sk)
+			rc = kernel_sock_shutdown(smc->clcsock, SHUT_RDWR);
 	} else {
 		/* peer event has changed the state */
 		goto again;