Message ID | 20191016141546.32277-1-trond.myklebust@hammerspace.com
---|---
State | New, archived
Series | [1/3] SUNRPC: The TCP back channel mustn't disappear while requests are outstanding
These make sense to me. (But I'll confess I haven't been following the
back and forth with Neil.)

On Wed, Oct 16, 2019 at 10:15:44AM -0400, Trond Myklebust wrote:
> If there are TCP back channel requests either being processed by the

In this patch and the next, is the "either" unnecessary, or was there
some other condition you meant to mention?

--b.

> server threads, then we should hold a reference to the transport
> to ensure it doesn't get freed from underneath us.
>
> Reported-by: Neil Brown <neilb@suse.de>
> Fixes: 2ea24497a1b3 ("SUNRPC: RPC callbacks may be split across several..")
> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
> ---
>  net/sunrpc/backchannel_rqst.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/net/sunrpc/backchannel_rqst.c b/net/sunrpc/backchannel_rqst.c
> index 339e8c077c2d..7eb251372f94 100644
> --- a/net/sunrpc/backchannel_rqst.c
> +++ b/net/sunrpc/backchannel_rqst.c
> @@ -307,8 +307,8 @@ void xprt_free_bc_rqst(struct rpc_rqst *req)
>  		 */
>  		dprintk("RPC: Last session removed req=%p\n", req);
>  		xprt_free_allocation(req);
> -		return;
>  	}
> +	xprt_put(xprt);
>  }
>
>  /*
> @@ -339,7 +339,7 @@ struct rpc_rqst *xprt_lookup_bc_request(struct rpc_xprt *xprt, __be32 xid)
>  		spin_unlock(&xprt->bc_pa_lock);
>  		if (new) {
>  			if (req != new)
> -				xprt_free_bc_rqst(new);
> +				xprt_free_allocation(new);
>  			break;
>  		} else if (req)
>  			break;
> @@ -368,6 +368,7 @@ void xprt_complete_bc_request(struct rpc_rqst *req, uint32_t copied)
>  	set_bit(RPC_BC_PA_IN_USE, &req->rq_bc_pa_state);
>
>  	dprintk("RPC: add callback request to list\n");
> +	xprt_get(xprt);
>  	spin_lock(&bc_serv->sv_cb_lock);
>  	list_add(&req->rq_bc_list, &bc_serv->sv_cb_list);
>  	wake_up(&bc_serv->sv_cb_waitq);
> --
> 2.21.0
>
On Wed, Oct 16 2019, Trond Myklebust wrote:

> If there are TCP back channel requests either being processed by the
> server threads, then we should hold a reference to the transport
> to ensure it doesn't get freed from underneath us.
>
> Reported-by: Neil Brown <neilb@suse.de>
> Fixes: 2ea24497a1b3 ("SUNRPC: RPC callbacks may be split across several..")
> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
> ---
>  net/sunrpc/backchannel_rqst.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/net/sunrpc/backchannel_rqst.c b/net/sunrpc/backchannel_rqst.c
> index 339e8c077c2d..7eb251372f94 100644
> --- a/net/sunrpc/backchannel_rqst.c
> +++ b/net/sunrpc/backchannel_rqst.c
> @@ -307,8 +307,8 @@ void xprt_free_bc_rqst(struct rpc_rqst *req)
>  		 */
>  		dprintk("RPC: Last session removed req=%p\n", req);
>  		xprt_free_allocation(req);
> -		return;
>  	}
> +	xprt_put(xprt);
>  }
>
>  /*
> @@ -339,7 +339,7 @@ struct rpc_rqst *xprt_lookup_bc_request(struct rpc_xprt *xprt, __be32 xid)
>  		spin_unlock(&xprt->bc_pa_lock);
>  		if (new) {
>  			if (req != new)
> -				xprt_free_bc_rqst(new);
> +				xprt_free_allocation(new);
>  			break;
>  		} else if (req)
>  			break;
> @@ -368,6 +368,7 @@ void xprt_complete_bc_request(struct rpc_rqst *req, uint32_t copied)
>  	set_bit(RPC_BC_PA_IN_USE, &req->rq_bc_pa_state);
>
>  	dprintk("RPC: add callback request to list\n");
> +	xprt_get(xprt);
>  	spin_lock(&bc_serv->sv_cb_lock);
>  	list_add(&req->rq_bc_list, &bc_serv->sv_cb_list);
>  	wake_up(&bc_serv->sv_cb_waitq);
> --
> 2.21.0

Looks good.
This and the next two:
  Reviewed-by: NeilBrown <neilb@suse.de>

It would help me if you could add a Fixes: tag to at least the first
two.

BTW, while reviewing I noticed that bc_alloc_count and bc_slot_count
are almost identical. The three places where they are changed
separately are probably (minor) bugs.
Do you recall why there were two different counters? Has the reason
disappeared?

Thanks,
NeilBrown
On Thu, 2019-10-17 at 09:24 +1100, NeilBrown wrote:
> On Wed, Oct 16 2019, Trond Myklebust wrote:
>
> > If there are TCP back channel requests either being processed by the
> > server threads, then we should hold a reference to the transport
> > to ensure it doesn't get freed from underneath us.
> >
> > Reported-by: Neil Brown <neilb@suse.de>
> > Fixes: 2ea24497a1b3 ("SUNRPC: RPC callbacks may be split across several..")
> > Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
> > ---
> >  net/sunrpc/backchannel_rqst.c | 5 +++--
> >  1 file changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/net/sunrpc/backchannel_rqst.c b/net/sunrpc/backchannel_rqst.c
> > index 339e8c077c2d..7eb251372f94 100644
> > --- a/net/sunrpc/backchannel_rqst.c
> > +++ b/net/sunrpc/backchannel_rqst.c
> > @@ -307,8 +307,8 @@ void xprt_free_bc_rqst(struct rpc_rqst *req)
> >  		 */
> >  		dprintk("RPC: Last session removed req=%p\n", req);
> >  		xprt_free_allocation(req);
> > -		return;
> >  	}
> > +	xprt_put(xprt);
> >  }
> >
> >  /*
> > @@ -339,7 +339,7 @@ struct rpc_rqst *xprt_lookup_bc_request(struct rpc_xprt *xprt, __be32 xid)
> >  		spin_unlock(&xprt->bc_pa_lock);
> >  		if (new) {
> >  			if (req != new)
> > -				xprt_free_bc_rqst(new);
> > +				xprt_free_allocation(new);
> >  			break;
> >  		} else if (req)
> >  			break;
> > @@ -368,6 +368,7 @@ void xprt_complete_bc_request(struct rpc_rqst *req, uint32_t copied)
> >  	set_bit(RPC_BC_PA_IN_USE, &req->rq_bc_pa_state);
> >
> >  	dprintk("RPC: add callback request to list\n");
> > +	xprt_get(xprt);
> >  	spin_lock(&bc_serv->sv_cb_lock);
> >  	list_add(&req->rq_bc_list, &bc_serv->sv_cb_list);
> >  	wake_up(&bc_serv->sv_cb_waitq);
> > --
> > 2.21.0
>
> Looks good.
> This and the next two:
>   Reviewed-by: NeilBrown <neilb@suse.de>
>
> It would help me if you could add a Fixes: tag to at least the first
> two.

Neither has Cc: stable, but they both already have Fixes: tags. See
above.

> BTW, while reviewing I noticed that bc_alloc_count and bc_slot_count
> are almost identical. The three places where they are changed
> separately are probably (minor) bugs.
> Do you recall why there were two different counters? Has the reason
> disappeared?

IIRC, the former contains the count of preallocated slots, and the
latter the count of preallocated+dynamic slots.

Thanks
Trond
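To make the distinction Trond recalls concrete, here is a hypothetical user-space sketch (this is not the kernel's accounting code; the struct and helper names are invented for illustration). If bc_alloc_count covers only preallocated slots while bc_slot_count covers preallocated plus dynamic ones, then bc_slot_count >= bc_alloc_count should hold after every update, which gives one way to audit the three sites Neil flagged:

```c
#include <assert.h>
#include <stdio.h>

/* Invented stand-ins for the two counters discussed above. */
struct bc_accounting {
	unsigned int bc_alloc_count;	/* preallocated slots only */
	unsigned int bc_slot_count;	/* preallocated + dynamic slots */
};

/* A preallocated slot counts in both counters. */
static void bc_count_preallocated(struct bc_accounting *a)
{
	a->bc_alloc_count++;
	a->bc_slot_count++;
}

/* A dynamically allocated slot counts only in the combined total. */
static void bc_count_dynamic(struct bc_accounting *a)
{
	a->bc_slot_count++;
}

int main(void)
{
	struct bc_accounting a = { 0, 0 };

	bc_count_preallocated(&a);
	bc_count_dynamic(&a);

	/* Under this reading, the combined total can never lag behind. */
	assert(a.bc_slot_count >= a.bc_alloc_count);
	printf("alloc=%u slot=%u\n", a.bc_alloc_count, a.bc_slot_count);
	return 0;
}
```

Any site that bumps one counter without the other, or without a rule like the above to justify it, would then be a candidate for the "(minor) bugs" Neil mentions.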
```diff
diff --git a/net/sunrpc/backchannel_rqst.c b/net/sunrpc/backchannel_rqst.c
index 339e8c077c2d..7eb251372f94 100644
--- a/net/sunrpc/backchannel_rqst.c
+++ b/net/sunrpc/backchannel_rqst.c
@@ -307,8 +307,8 @@ void xprt_free_bc_rqst(struct rpc_rqst *req)
 		 */
 		dprintk("RPC: Last session removed req=%p\n", req);
 		xprt_free_allocation(req);
-		return;
 	}
+	xprt_put(xprt);
 }
 
 /*
@@ -339,7 +339,7 @@ struct rpc_rqst *xprt_lookup_bc_request(struct rpc_xprt *xprt, __be32 xid)
 		spin_unlock(&xprt->bc_pa_lock);
 		if (new) {
 			if (req != new)
-				xprt_free_bc_rqst(new);
+				xprt_free_allocation(new);
 			break;
 		} else if (req)
 			break;
@@ -368,6 +368,7 @@ void xprt_complete_bc_request(struct rpc_rqst *req, uint32_t copied)
 	set_bit(RPC_BC_PA_IN_USE, &req->rq_bc_pa_state);
 
 	dprintk("RPC: add callback request to list\n");
+	xprt_get(xprt);
 	spin_lock(&bc_serv->sv_cb_lock);
 	list_add(&req->rq_bc_list, &bc_serv->sv_cb_list);
 	wake_up(&bc_serv->sv_cb_waitq);
```
If there are TCP back channel requests either being processed by the
server threads, then we should hold a reference to the transport
to ensure it doesn't get freed from underneath us.

Reported-by: Neil Brown <neilb@suse.de>
Fixes: 2ea24497a1b3 ("SUNRPC: RPC callbacks may be split across several..")
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
---
 net/sunrpc/backchannel_rqst.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
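The fix follows a standard reference-counting lifetime pattern: xprt_complete_bc_request() takes a reference (xprt_get()) before queueing the request for the server threads, and xprt_free_bc_rqst() drops it (xprt_put()) once the request is finished, so the transport cannot be freed while a back channel request is still outstanding. Below is a minimal user-space sketch of that pattern, with invented, simplified types and helpers standing in for the kernel's; it illustrates the get/put pairing, not the actual SUNRPC code:

```c
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct transport {
	atomic_int refcount;
};

static struct transport *transport_get(struct transport *t)
{
	atomic_fetch_add(&t->refcount, 1);
	return t;
}

static void transport_put(struct transport *t)
{
	/* Free only when the last reference is dropped. */
	if (atomic_fetch_sub(&t->refcount, 1) == 1) {
		printf("transport freed\n");
		free(t);
	}
}

struct bc_request {
	struct transport *xprt;
};

/* Producer: pin the transport before queueing the request for async
 * processing (mirrors the xprt_get() added in xprt_complete_bc_request). */
static void complete_bc_request(struct bc_request *req)
{
	transport_get(req->xprt);
	/* ... add req to the server's callback list and wake a thread ... */
}

/* Consumer: this put balances the get taken at queueing time (mirrors the
 * xprt_put() added in xprt_free_bc_rqst). */
static void free_bc_request(struct bc_request *req)
{
	/* ... return req to the preallocated pool or free it ... */
	transport_put(req->xprt);
}

int main(void)
{
	struct transport *t = malloc(sizeof(*t));

	if (!t)
		return 1;
	atomic_init(&t->refcount, 1);	/* owner's reference */

	struct bc_request req = { .xprt = t };

	complete_bc_request(&req);	/* refcount: 2 */
	transport_put(t);		/* owner lets go; req keeps t alive */
	free_bc_request(&req);		/* last put actually frees t */
	return 0;
}
```

The related xprt_lookup_bc_request() change is consistent with this pairing: a spare request that was never queued is released with xprt_free_allocation() directly, so no reference that was never taken gets dropped.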