
3.0+ NFS issues (bisected)

Message ID 20120817223253.GA15659@fieldses.org (mailing list archive)
State New, archived

Commit Message

J. Bruce Fields Aug. 17, 2012, 10:32 p.m. UTC
On Fri, Aug 17, 2012 at 04:08:07PM -0400, J. Bruce Fields wrote:
> Wait a minute, that assumption's a problem because that calculation
> depends in part on xpt_reserved, which is changed here....
> 
> In particular, svc_xprt_release() calls svc_reserve(rqstp, 0), which
> subtracts rqstp->rq_reserved and then calls svc_xprt_enqueue, now with a
> lower xpt_reserved value.  That could well explain this.

So, maybe something like this?

--b.

commit c8136c319ad85d0db870021fc3f9074d37f26d4a
Author: J. Bruce Fields <bfields@redhat.com>
Date:   Fri Aug 17 17:31:53 2012 -0400

    svcrpc: don't add to xpt_reserved till we receive
    
    The rpc server tries to ensure that there will be room to send a reply
    before it receives a request.
    
    It does this by tracking, in xpt_reserved, an upper bound on the total
    size of the replies that it has already committed to for the socket.
    
    Currently it is adding in the estimate for a new reply *before* it
    checks whether there is space available.  If it finds that there is not
    space, it then subtracts the estimate back out.
    
    This may lead the subsequent svc_xprt_enqueue to decide that there is
    space after all.
    
    The result is an svc_recv() that will repeatedly return -EAGAIN, causing
    server threads to loop without doing any actual work.
    
    Reported-by: Michael Tokarev <mjt@tls.msk.ru>
    Signed-off-by: J. Bruce Fields <bfields@redhat.com>
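
For reference, the looping described above happens in nfsd's main
receive loop, which retries svc_recv() until it gets a request or a
signal; a simplified sketch, assuming the 3.x-era structure of
fs/nfsd/nfssvc.c:

	for (;;) {
		/* Find a transport with data available and receive a
		 * request from it.  If svc_recv() keeps returning -EAGAIN
		 * even though nothing can actually be received, this loop
		 * spins and burns CPU without doing any work. */
		err = svc_recv(rqstp, 60*60*HZ);
		if (err == -EINTR)
			break;		/* signalled: shut the thread down */
		if (err == -EAGAIN)
			continue;	/* nothing usable yet: try again */

		svc_process(rqstp);	/* dispatch the request */
	}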


Comments

Michael Tokarev Aug. 18, 2012, 6:49 a.m. UTC | #1
On 18.08.2012 02:32, J. Bruce Fields wrote:
> On Fri, Aug 17, 2012 at 04:08:07PM -0400, J. Bruce Fields wrote:
>> Wait a minute, that assumption's a problem because that calculation
>> depends in part on xpt_reserved, which is changed here....
>>
>> In particular, svc_xprt_release() calls svc_reserve(rqstp, 0), which
>> subtracts rqstp->rq_reserved and then calls svc_xprt_enqueue, now with a
>> lower xpt_reserved value.  That could well explain this.
> 
> So, maybe something like this?

Well.  What can I say?  With the change below applied (to a 3.2 kernel,
at least), I don't see any stalls or high CPU usage on the server
anymore.  It survived several multi-gigabyte transfers, over several
hours, without any problem.  So it is a good step forward ;)

But the whole thing seems quite fragile.  I tried to follow the logic
in there, and it is, well, "twisted", and somewhat difficult to follow.
So I don't know whether this is the right fix or not.  At least it
works! :)

And I really wonder why no one else has reported this problem before.
Am I the only one in this world who uses Linux nfsd? :)

Thank you for all your patience and the proposed fix!

/mjt

> commit c8136c319ad85d0db870021fc3f9074d37f26d4a
> Author: J. Bruce Fields <bfields@redhat.com>
> Date:   Fri Aug 17 17:31:53 2012 -0400
> 
>     svcrpc: don't add to xpt_reserved till we receive
>     
>     The rpc server tries to ensure that there will be room to send a reply
>     before it receives a request.
>     
>     It does this by tracking, in xpt_reserved, an upper bound on the total
>     size of the replies that it has already committed to for the socket.
>     
>     Currently it is adding in the estimate for a new reply *before* it
>     checks whether there is space available.  If it finds that there is not
>     space, it then subtracts the estimate back out.
>     
>     This may lead the subsequent svc_xprt_enqueue to decide that there is
>     space after all.
>     
>     The result is an svc_recv() that will repeatedly return -EAGAIN, causing
>     server threads to loop without doing any actual work.
>     
>     Reported-by: Michael Tokarev <mjt@tls.msk.ru>
>     Signed-off-by: J. Bruce Fields <bfields@redhat.com>
> 
> diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
> index ec99849a..59ff3a3 100644
> --- a/net/sunrpc/svc_xprt.c
> +++ b/net/sunrpc/svc_xprt.c
> @@ -366,8 +366,6 @@ void svc_xprt_enqueue(struct svc_xprt *xprt)
>  				rqstp, rqstp->rq_xprt);
>  		rqstp->rq_xprt = xprt;
>  		svc_xprt_get(xprt);
> -		rqstp->rq_reserved = serv->sv_max_mesg;
> -		atomic_add(rqstp->rq_reserved, &xprt->xpt_reserved);
>  		pool->sp_stats.threads_woken++;
>  		wake_up(&rqstp->rq_wait);
>  	} else {
> @@ -644,8 +642,6 @@ int svc_recv(struct svc_rqst *rqstp, long timeout)
>  	if (xprt) {
>  		rqstp->rq_xprt = xprt;
>  		svc_xprt_get(xprt);
> -		rqstp->rq_reserved = serv->sv_max_mesg;
> -		atomic_add(rqstp->rq_reserved, &xprt->xpt_reserved);
>  
>  		/* As there is a shortage of threads and this request
>  		 * had to be queued, don't allow the thread to wait so
> @@ -743,6 +739,10 @@ int svc_recv(struct svc_rqst *rqstp, long timeout)
>  			len = xprt->xpt_ops->xpo_recvfrom(rqstp);
>  		dprintk("svc: got len=%d\n", len);
>  	}
> +	if (len > 0) {
> +		rqstp->rq_reserved = serv->sv_max_mesg;
> +		atomic_add(rqstp->rq_reserved, &xprt->xpt_reserved);
> +	}
>  	svc_xprt_received(xprt);
>  
>  	/* No data, incomplete (TCP) read, or accept() */

J. Bruce Fields Aug. 18, 2012, 11:13 a.m. UTC | #2
On Sat, Aug 18, 2012 at 10:49:31AM +0400, Michael Tokarev wrote:
> On 18.08.2012 02:32, J. Bruce Fields wrote:
> > On Fri, Aug 17, 2012 at 04:08:07PM -0400, J. Bruce Fields wrote:
> >> Wait a minute, that assumption's a problem because that calculation
> >> depends in part on xpt_reserved, which is changed here....
> >>
> >> In particular, svc_xprt_release() calls svc_reserve(rqstp, 0), which
> >> subtracts rqstp->rq_reserved and then calls svc_xprt_enqueue, now with a
> >> lower xpt_reserved value.  That could well explain this.
> > 
> > So, maybe something like this?
> 
> Well.  What can I say?  With the change below applied (to a 3.2 kernel,
> at least), I don't see any stalls or high CPU usage on the server
> anymore.  It survived several multi-gigabyte transfers, over several
> hours, without any problem.  So it is a good step forward ;)
> 
> But the whole thing seems quite fragile.  I tried to follow the logic
> in there, and it is, well, "twisted", and somewhat difficult to follow.
> So I don't know whether this is the right fix or not.  At least it
> works! :)

Suggestions welcomed.

> And I really wonder why no one else has reported this problem before.
> Am I the only one in this world who uses Linux nfsd? :)

This, for example:

	http://marc.info/?l=linux-nfs&m=134131915612287&w=2

may well describe the same problem....  It just needed some debugging
persistence, thanks!

--b.
Michael Tokarev Aug. 18, 2012, 12:58 p.m. UTC | #3
On 18.08.2012 15:13, J. Bruce Fields wrote:
> On Sat, Aug 18, 2012 at 10:49:31AM +0400, Michael Tokarev wrote:
[]
>> Well.  What can I say?  With the change below applied (to a 3.2 kernel,
>> at least), I don't see any stalls or high CPU usage on the server
>> anymore.  It survived several multi-gigabyte transfers, over several
>> hours, without any problem.  So it is a good step forward ;)
>>
>> But the whole thing seems quite fragile.  I tried to follow the logic
>> in there, and it is, well, "twisted", and somewhat difficult to follow.
>> So I don't know whether this is the right fix or not.  At least it
>> works! :)
> 
> Suggestions welcomed.

Ok...

Meanwhile, you can add my
Tested-by: Michael Tokarev <mjt@tls.msk.ru>

to the patch.

>> And I really wonder why no one else has reported this problem before.
>> Am I the only one in this world who uses Linux nfsd? :)
> 
> This, for example:
> 
> 	http://marc.info/?l=linux-nfs&m=134131915612287&w=2
> 
> may well describe the same problem....  It just needed some debugging
> persistence, thanks!

Ah.  I tried to find something like that when I initially
sent this report, but wasn't able to.  Apparently
I'm indeed not alone with this problem!

Thank you for all the work!

/mjt

Patch

diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index ec99849a..59ff3a3 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -366,8 +366,6 @@ void svc_xprt_enqueue(struct svc_xprt *xprt)
 				rqstp, rqstp->rq_xprt);
 		rqstp->rq_xprt = xprt;
 		svc_xprt_get(xprt);
-		rqstp->rq_reserved = serv->sv_max_mesg;
-		atomic_add(rqstp->rq_reserved, &xprt->xpt_reserved);
 		pool->sp_stats.threads_woken++;
 		wake_up(&rqstp->rq_wait);
 	} else {
@@ -644,8 +642,6 @@ int svc_recv(struct svc_rqst *rqstp, long timeout)
 	if (xprt) {
 		rqstp->rq_xprt = xprt;
 		svc_xprt_get(xprt);
-		rqstp->rq_reserved = serv->sv_max_mesg;
-		atomic_add(rqstp->rq_reserved, &xprt->xpt_reserved);
 
 		/* As there is a shortage of threads and this request
 		 * had to be queued, don't allow the thread to wait so
@@ -743,6 +739,10 @@ int svc_recv(struct svc_rqst *rqstp, long timeout)
 			len = xprt->xpt_ops->xpo_recvfrom(rqstp);
 		dprintk("svc: got len=%d\n", len);
 	}
+	if (len > 0) {
+		rqstp->rq_reserved = serv->sv_max_mesg;
+		atomic_add(rqstp->rq_reserved, &xprt->xpt_reserved);
+	}
 	svc_xprt_received(xprt);
 
 	/* No data, incomplete (TCP) read, or accept() */
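
The reservation that this patch defers feeds the transport's
write-space check: svc_xprt_enqueue() only hands a transport to a
thread if the socket could still absorb every reply already promised
plus one more worst-case reply.  For TCP that check looks roughly like
this; a simplified sketch, assuming the 3.x-era shape of
net/sunrpc/svcsock.c:

	static int svc_tcp_has_wspace(struct svc_xprt *xprt)
	{
		struct svc_sock *svsk = container_of(xprt, struct svc_sock, sk_xprt);
		struct svc_serv *serv = svsk->sk_xprt.xpt_server;
		int required;

		/* Listeners only accept connections; they never send
		 * replies, so they always have "space". */
		if (test_bit(XPT_LISTENER, &xprt->xpt_flags))
			return 1;

		/* Require room for everything already reserved plus one
		 * more worst-case reply.  Inflating xpt_reserved before a
		 * request is actually received makes this test fail
		 * spuriously, which is the bug being fixed. */
		required = atomic_read(&xprt->xpt_reserved) + serv->sv_max_mesg;
		if (sk_stream_wspace(svsk->sk_sk) >= required)
			return 1;

		set_bit(SOCK_NOSPACE, &svsk->sk_sk->sk_socket->flags);
		return 0;
	}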