Message ID | 20130211205845.GE30117@fieldses.org (mailing list archive)
---|---
State | New, archived
12.02.2013 00:58, J. Bruce Fields wrote:
<snip>
>  void svc_close_net(struct svc_serv *serv, struct net *net)
>  {
> -	svc_close_list(serv, &serv->sv_tempsocks, net);
> -	svc_close_list(serv, &serv->sv_permsocks, net);
> -
> -	svc_clear_pools(serv, net);
> -	/*
> -	 * At this point the sp_sockets lists will stay empty, since
> -	 * svc_xprt_enqueue will not add new entries without taking the
> -	 * sp_lock and checking XPT_BUSY.
> -	 */
> -	svc_clear_list(serv, &serv->sv_tempsocks, net);
> -	svc_clear_list(serv, &serv->sv_permsocks, net);
> +	int closed;
> +	int delay = 0;
> +
> +again:
> +	closed = svc_close_list(serv, &serv->sv_permsocks, net);
> +	closed += svc_close_list(serv, &serv->sv_tempsocks, net);
> +	if (closed) {
> +		svc_clean_up_xprts(serv, net);
> +		msleep(delay++);
> +		goto again;
> +	}

Frankly, this hunk above makes me feel sick... :(
But I have no better idea right now...
Maybe make this hunk a bit less weird (this is from my POV only, of course), like this:

> +	while (svc_close_list(serv, &serv->sv_permsocks, net) +
> +	       svc_close_list(serv, &serv->sv_tempsocks, net)) {
> +		svc_clean_up_xprts(serv, net);
> +		msleep(delay++);
> +	}

?

Anyway, thanks!

Acked-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
On Tue, Feb 12, 2013 at 01:52:32PM +0400, Stanislav Kinsbursky wrote:
> 12.02.2013 00:58, J. Bruce Fields wrote:
> <snip>
> > +again:
> > +	closed = svc_close_list(serv, &serv->sv_permsocks, net);
> > +	closed += svc_close_list(serv, &serv->sv_tempsocks, net);
> > +	if (closed) {
> > +		svc_clean_up_xprts(serv, net);
> > +		msleep(delay++);
> > +		goto again;
> > +	}
>
> Frankly, this hunk above makes me feel sick... :(
> But I have no better idea right now...
> Maybe make this hunk a bit less weird (this is from my POV only, of course), like this:
>
> > +	while (svc_close_list(serv, &serv->sv_permsocks, net) +
> > +	       svc_close_list(serv, &serv->sv_tempsocks, net)) {
> > +		svc_clean_up_xprts(serv, net);
> > +		msleep(delay++);
> > +	}
>
> ?

OK, that's a little more compact at least.

--b.

> Anyway, thanks!
>
> Acked-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
>
> --
> Best regards,
> Stanislav Kinsbursky
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
The "+" thing seems a little odd. Why not use "||" instead? The sum of
the two returns isn't really the important thing, is it? It is that
either call to svc_close_list() returns non-zero.

Thanx...

ps

-----Original Message-----
From: linux-nfs-owner@vger.kernel.org [mailto:linux-nfs-owner@vger.kernel.org] On Behalf Of J. Bruce Fields
Sent: Tuesday, February 12, 2013 3:46 PM
To: Stanislav Kinsbursky
Cc: akpm@linux-foundation.org; linux-nfs@vger.kernel.org; Trond.Myklebust@netapp.com; linux-kernel@vger.kernel.org; devel@openvz.org
Subject: Re: [PATCH 0/2] NFSD: fix races in service per-net resources allocation

On Tue, Feb 12, 2013 at 01:52:32PM +0400, Stanislav Kinsbursky wrote:
> 12.02.2013 00:58, J. Bruce Fields wrote:
> <snip>
> Maybe make this hunk a bit less weird (this is from my POV only, of course), like this:
>
> > +	while (svc_close_list(serv, &serv->sv_permsocks, net) +
> > +	       svc_close_list(serv, &serv->sv_tempsocks, net)) {
> > +		svc_clean_up_xprts(serv, net);
> > +		msleep(delay++);
> > +	}
>
> ?

OK, that's a little more compact at least.

--b.
13.02.2013 01:18, Peter Staubach wrote:
> The "+" thing seems a little odd. Why not use "||" instead? The sum of
> the two returns isn't really the important thing, is it? It is that
> either call to svc_close_list() returns non-zero.
>
> Thanx...

Yep, thanks for the notice.

> ps
<snip>
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index dbf12ac..2d34b6b 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -515,15 +515,6 @@ EXPORT_SYMBOL_GPL(svc_create_pooled);
 
 void svc_shutdown_net(struct svc_serv *serv, struct net *net)
 {
-	/*
-	 * The set of xprts (contained in the sv_tempsocks and
-	 * sv_permsocks lists) is now constant, since it is modified
-	 * only by accepting new sockets (done by service threads in
-	 * svc_recv) or aging old ones (done by sv_temptimer), or
-	 * configuration changes (excluded by whatever locking the
-	 * caller is using--nfsd_mutex in the case of nfsd). So it's
-	 * safe to traverse those lists and shut everything down:
-	 */
 	svc_close_net(serv, net);
 
 	if (serv->sv_shutdown)
diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 0b67409..0bd0b6f 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -948,21 +948,24 @@ void svc_close_xprt(struct svc_xprt *xprt)
 }
 EXPORT_SYMBOL_GPL(svc_close_xprt);
 
-static void svc_close_list(struct svc_serv *serv, struct list_head *xprt_list, struct net *net)
+static int svc_close_list(struct svc_serv *serv, struct list_head *xprt_list, struct net *net)
 {
 	struct svc_xprt *xprt;
+	int ret = 0;
 
 	spin_lock(&serv->sv_lock);
 	list_for_each_entry(xprt, xprt_list, xpt_list) {
 		if (xprt->xpt_net != net)
 			continue;
+		ret++;
 		set_bit(XPT_CLOSE, &xprt->xpt_flags);
-		set_bit(XPT_BUSY, &xprt->xpt_flags);
+		svc_xprt_enqueue(xprt);
 	}
 	spin_unlock(&serv->sv_lock);
+	return ret;
 }
 
-static void svc_clear_pools(struct svc_serv *serv, struct net *net)
+static struct svc_xprt *svc_dequeue_net(struct svc_serv *serv, struct net *net)
 {
 	struct svc_pool *pool;
 	struct svc_xprt *xprt;
@@ -977,42 +980,49 @@ static void svc_clear_pools(struct svc_serv *serv, struct net *net)
 			if (xprt->xpt_net != net)
 				continue;
 			list_del_init(&xprt->xpt_ready);
+			spin_unlock_bh(&pool->sp_lock);
+			return xprt;
 		}
 		spin_unlock_bh(&pool->sp_lock);
 	}
+	return NULL;
 }
 
-static void svc_clear_list(struct svc_serv *serv, struct list_head *xprt_list, struct net *net)
+static void svc_clean_up_xprts(struct svc_serv *serv, struct net *net)
 {
 	struct svc_xprt *xprt;
-	struct svc_xprt *tmp;
-	LIST_HEAD(victims);
-
-	spin_lock(&serv->sv_lock);
-	list_for_each_entry_safe(xprt, tmp, xprt_list, xpt_list) {
-		if (xprt->xpt_net != net)
-			continue;
-		list_move(&xprt->xpt_list, &victims);
-	}
-	spin_unlock(&serv->sv_lock);
 
-	list_for_each_entry_safe(xprt, tmp, &victims, xpt_list)
+	while ((xprt = svc_dequeue_net(serv, net))) {
+		set_bit(XPT_CLOSE, &xprt->xpt_flags);
 		svc_delete_xprt(xprt);
+	}
 }
 
+/*
+ * Server threads may still be running (especially in the case where the
+ * service is still running in other network namespaces).
+ *
+ * So we shut down sockets the same way we would on a running server, by
+ * setting XPT_CLOSE, enqueuing, and letting a thread pick it up to do
+ * the close. In the case there are no such other threads running,
+ * svc_clean_up_xprts() does a simple version of a server's main event
+ * loop, and in the case where there are other threads, we may need to
+ * wait a little while and then check again to see if they're done.
+ */
 void svc_close_net(struct svc_serv *serv, struct net *net)
 {
-	svc_close_list(serv, &serv->sv_tempsocks, net);
-	svc_close_list(serv, &serv->sv_permsocks, net);
-
-	svc_clear_pools(serv, net);
-	/*
-	 * At this point the sp_sockets lists will stay empty, since
-	 * svc_xprt_enqueue will not add new entries without taking the
-	 * sp_lock and checking XPT_BUSY.
-	 */
-	svc_clear_list(serv, &serv->sv_tempsocks, net);
-	svc_clear_list(serv, &serv->sv_permsocks, net);
+	int closed;
+	int delay = 0;
+
+again:
+	closed = svc_close_list(serv, &serv->sv_permsocks, net);
+	closed += svc_close_list(serv, &serv->sv_tempsocks, net);
+	if (closed) {
+		svc_clean_up_xprts(serv, net);
+		msleep(delay++);
+		goto again;
+	}
 }
 
 /*