| Message ID | 1305913963.12712.6.camel@lade.trondhjem.org (mailing list archive) |
|---|---|
| State | New, archived |
On Fri, 2011-05-20 at 13:52 -0400, Trond Myklebust wrote:
> On Fri, 2011-05-20 at 13:26 -0400, Dr. J. Bruce Fields wrote:
> > On Fri, May 20, 2011 at 09:20:47AM -0700, Harry Edmon wrote:
> > > On 05/16/11 13:53, Dr. J. Bruce Fields wrote:
> > > > Hm, so the renews all have clid 465ccc4d09000000, and the reads all have
> > > > a stateid (0, 465ccc4dc24c0a0000000000).
> > > >
> > > > So the first 4 bytes matching just tells me both were handed out by the
> > > > same server instance (so there was no server reboot in between); there's
> > > > no way for me to tell whether they really belong to the same client.
> > > >
> > > > The server does assume that any stateid from the current server instance
> > > > that no longer exists in its table is expired. I believe that's
> > > > correct, given a correctly functioning client, but perhaps I'm missing a
> > > > case.
> > > >
> > > > --b.
> > > I am very appreciative of the quick initial comments I receive from
> > > all of you on my NFS problem. I notice that there has been silence
> > > on the problem since the 16th, so I assume that either this is a
> > > hard bug to track down or you have been busy with higher priority
> > > tasks. Is there anything I can do to help develop a solution to
> > > this problem?
> >
> > Well, the only candidate explanation for the problem is that my
> > assumption--that any time the server gets a stateid from the current
> > boot instance that it doesn't recognize as an active stateid, it is safe
> > for the server to return EXPIRED--is wrong.
> >
> > I don't immediately see why it's wrong, and based on the silence nobody
> > else does either, but I'm not 100% convinced I'm right either.
> >
> > So one approach might be to add server code that makes a better effort
> > to return EXPIRED only when we're sure it's a stateid from an expired
> > client, and see if that solves your problem.
> >
> > Remind me, did you have an easy way to reproduce your problem?
>
> My silence is simply because I'm mystified as to how this can happen.
> Patching for it is trivial (see below).
>
> When the server tells us that our lease is expired, the normal behaviour
> for the client is to re-establish the lease, and then proceed to recover
> all known stateids. I don't see how we can 'miss' a stateid that then
> needs to be recovered afterwards...

Bruce,

If the clientid expired, is it possible that the server may have handed
out the same numerical short clientid to someone else and that explains
why the RENEW is succeeding?
On Fri, May 20, 2011 at 01:52:43PM -0400, Trond Myklebust wrote:
> On Fri, 2011-05-20 at 13:26 -0400, Dr. J. Bruce Fields wrote:
> > [...]
> > Well, the only candidate explanation for the problem is that my
> > assumption--that any time the server gets a stateid from the current
> > boot instance that it doesn't recognize as an active stateid, it is safe
> > for the server to return EXPIRED--is wrong.
> > [...]
>
> My silence is simply because I'm mystified as to how this can happen.

So since the client's sending it with a READ, the client thinks that the
stateid is still a valid open, lock, or delegation stateid, while the
server thinks it's not. Hm.

--b.

> Patching for it is trivial (see below).
>
> When the server tells us that our lease is expired, the normal behaviour
> for the client is to re-establish the lease, and then proceed to recover
> all known stateids. I don't see how we can 'miss' a stateid that then
> needs to be recovered afterwards...
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On 05/20/2011 02:47 PM, Dr. J. Bruce Fields wrote:
> On Fri, May 20, 2011 at 01:52:43PM -0400, Trond Myklebust wrote:
> > [...]
> > My silence is simply because I'm mystified as to how this can happen.
>
> So since the client's sending it with a READ, the client thinks that the
> stateid is still a valid open, lock, or delegation stateid, while the
> server thinks it's not. Hm.

I found this bug when I used "forget all locks" in the fault injection
code I recently posted. Trond's fix works for me.

- Bryan

> --b.
>
> > Patching for it is trivial (see below).
> >
> > When the server tells us that our lease is expired, the normal behaviour
> > for the client is to re-establish the lease, and then proceed to recover
> > all known stateids. I don't see how we can 'miss' a stateid that then
> > needs to be recovered afterwards...
On Fri, May 20, 2011 at 02:36:56PM -0400, Trond Myklebust wrote:
> On Fri, 2011-05-20 at 13:52 -0400, Trond Myklebust wrote:
> > [...]
> > My silence is simply because I'm mystified as to how this can happen.
> > Patching for it is trivial (see below).
> >
> > When the server tells us that our lease is expired, the normal behaviour
> > for the client is to re-establish the lease, and then proceed to recover
> > all known stateids. I don't see how we can 'miss' a stateid that then
> > needs to be recovered afterwards...
>
> Bruce,
>
> If the clientid expired, is it possible that the server may have handed
> out the same numerical short clientid to someone else and that explains
> why the RENEW is succeeding?

Clientids are created from a u32 counter that's sampled only under the
state lock, so it sounds unlikely.

I think more likely would be some bug affecting the lifetime of a
stateid--e.g. if the server destroyed a lock stateid earlier than it
should in some case, then this would happen. (Since, as I say, we
assume EXPIRED for any stateid we don't recognize.)

--b.
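[Editorial aside: Bruce's point about counter-based clientids can be illustrated with a minimal user-space sketch. This is hypothetical code, not the nfsd implementation; the names `boot_time`, `clientid_counter`, and `alloc_clientid` are invented for illustration. The idea is that the top 32 bits stamp the boot instance and the bottom 32 bits come from a counter that, in the kernel, is only sampled under the state lock, so two clients created within the same boot cannot receive the same short clientid until the counter wraps.]

```c
/* Illustrative sketch only (not the nfsd code): a 64-bit short clientid
 * built from a per-boot stamp in the top half and a monotonically
 * increasing u32 counter in the bottom half. */
#include <assert.h>
#include <stdint.h>

static const uint32_t boot_time = 0x465ccc4dU; /* fixed per server instance */
static uint32_t clientid_counter;              /* nfsd samples this under the state lock */

static uint64_t alloc_clientid(void)
{
    /* single-threaded sketch: the locking is elided */
    return ((uint64_t)boot_time << 32) | clientid_counter++;
}
```

Under this scheme a RENEW succeeding for a recycled clientid would require the counter to wrap all the way around within one boot, which is why Bruce considers Trond's collision hypothesis unlikely.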
On Fri, 2011-05-20 at 14:59 -0400, Dr. J. Bruce Fields wrote:
> > Bruce,
> >
> > If the clientid expired, is it possible that the server may have handed
> > out the same numerical short clientid to someone else and that explains
> > why the RENEW is succeeding?
>
> Clientids are created from a u32 counter that's sampled only under the
> state lock, so it sounds unlikely.
>
> I think more likely would be some bug affecting the lifetime of a
> stateid--e.g. if the server destroyed a lock stateid earlier than it
> should in some case, then this would happen. (Since, as I say, we
> assume EXPIRED for any stateid we don't recognize.)

Shouldn't that be NFS4ERR_BAD_STATEID instead of NFS4ERR_EXPIRED? The
latter should really be reserved for the case where you know that this
stateid came from an expired lease.
On 05/20/11 10:52, Trond Myklebust wrote:
> On Fri, 2011-05-20 at 13:26 -0400, Dr. J. Bruce Fields wrote:
> > [...]
> > So one approach might be to add server code that makes a better effort
> > to return EXPIRED only when we're sure it's a stateid from an expired
> > client, and see if that solves your problem.
> >
> > Remind me, did you have an easy way to reproduce your problem?
>
> My silence is simply because I'm mystified as to how this can happen.
> Patching for it is trivial (see below).
>
> When the server tells us that our lease is expired, the normal behaviour
> for the client is to re-establish the lease, and then proceed to recover
> all known stateids. I don't see how we can 'miss' a stateid that then
> needs to be recovered afterwards...
>
> Cheers
>   Trond
>
> 8<----------------------------------------------------------------------------
> From 920ddb153f28717be363f6e87dde24ef2a8d0ce2 Mon Sep 17 00:00:00 2001
> From: Trond Myklebust <Trond.Myklebust@netapp.com>
> Date: Fri, 20 May 2011 13:44:02 -0400
> Subject: [PATCH] NFSv4: Handle expired stateids when the lease is still valid
>
> Currently, if the server returns NFS4ERR_EXPIRED in reply to a READ or
> WRITE, but the RENEW test determines that the lease is still active, we
> fail to recover and end up looping forever in a READ/WRITE + RENEW death
> spiral.
>
> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
> ---
>  fs/nfs/nfs4proc.c |    9 +++++++--
>  1 files changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
> index cf1b339..d0e15db 100644
> --- a/fs/nfs/nfs4proc.c
> +++ b/fs/nfs/nfs4proc.c
> @@ -267,9 +267,11 @@ static int nfs4_handle_exception(struct nfs_server *server, int errorcode, struc
>  			break;
>  		nfs4_schedule_stateid_recovery(server, state);
>  		goto wait_on_recovery;
> +	case -NFS4ERR_EXPIRED:
> +		if (state != NULL)
> +			nfs4_schedule_stateid_recovery(server, state);
>  	case -NFS4ERR_STALE_STATEID:
>  	case -NFS4ERR_STALE_CLIENTID:
> -	case -NFS4ERR_EXPIRED:
>  		nfs4_schedule_lease_recovery(clp);
>  		goto wait_on_recovery;
>  #if defined(CONFIG_NFS_V4_1)
> @@ -3670,9 +3672,11 @@ nfs4_async_handle_error(struct rpc_task *task, const struct nfs_server *server,
>  			break;
>  		nfs4_schedule_stateid_recovery(server, state);
>  		goto wait_on_recovery;
> +	case -NFS4ERR_EXPIRED:
> +		if (state != NULL)
> +			nfs4_schedule_stateid_recovery(server, state);
>  	case -NFS4ERR_STALE_STATEID:
>  	case -NFS4ERR_STALE_CLIENTID:
> -	case -NFS4ERR_EXPIRED:
>  		nfs4_schedule_lease_recovery(clp);
>  		goto wait_on_recovery;
>  #if defined(CONFIG_NFS_V4_1)
> @@ -4543,6 +4547,7 @@ int nfs4_lock_delegation_recall(struct nfs4_state *state, struct file_lock *fl)
>  	case -ESTALE:
>  		goto out;
>  	case -NFS4ERR_EXPIRED:
> +		nfs4_schedule_stateid_recovery(server, state);
>  	case -NFS4ERR_STALE_CLIENTID:
>  	case -NFS4ERR_STALE_STATEID:
>  		nfs4_schedule_lease_recovery(server->nfs_client);

I installed this patch on my client, and now I am seeing the state
manager appear in the process accounting file about once a minute rather
than the constant respawning I saw earlier. Is once a minute normal, or
is there still a problem?
On Fri, May 20, 2011 at 03:15:33PM -0400, Trond Myklebust wrote:
> On Fri, 2011-05-20 at 14:59 -0400, Dr. J. Bruce Fields wrote:
> > [...]
> > I think more likely would be some bug affecting the lifetime of a
> > stateid--e.g. if the server destroyed a lock stateid earlier than it
> > should in some case, then this would happen. (Since, as I say, we
> > assume EXPIRED for any stateid we don't recognize.)
>
> Shouldn't that be NFS4ERR_BAD_STATEID instead of NFS4ERR_EXPIRED?

Probably so, but absent a bug on either side I can't see a case where it
would make a difference; can you?

--b.

> The
> latter should really be reserved for the case where you know that this
> stateid came from an expired lease.
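[Editorial aside: the three-way distinction Trond and Bruce are debating can be sketched in a few lines. This is purely illustrative code with invented names and toy tables, not the nfsd stateid lookup: a server that tracks boot instance, active state, and known-expired leases can return STALE_STATEID for old-boot stateids, EXPIRED only when the owning lease is positively known to be gone, and BAD_STATEID for anything it simply does not recognize.]

```c
/* Hedged sketch (not the nfsd implementation) of distinguishing
 * STALE_STATEID / EXPIRED / BAD_STATEID on the server side. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

enum nfs_err {
    NFS_OK,
    NFS4ERR_STALE_STATEID,  /* stateid predates the current server boot */
    NFS4ERR_EXPIRED,        /* stateid's lease is known to have expired */
    NFS4ERR_BAD_STATEID     /* stateid is simply unknown */
};

struct stateid {
    uint32_t boot;          /* boot-instance stamp, e.g. 0x465ccc4d */
    uint32_t id;
};

static const uint32_t current_boot = 0x465ccc4dU;

/* toy stand-ins for the server's hash tables of state */
static const uint32_t active_ids[]  = { 1, 2, 3 };
static const uint32_t expired_ids[] = { 9 };

static int contains(const uint32_t *tbl, size_t n, uint32_t id)
{
    for (size_t i = 0; i < n; i++)
        if (tbl[i] == id)
            return 1;
    return 0;
}

static enum nfs_err check_stateid(struct stateid s)
{
    if (s.boot != current_boot)
        return NFS4ERR_STALE_STATEID;
    if (contains(active_ids, sizeof(active_ids) / sizeof(*active_ids), s.id))
        return NFS_OK;
    if (contains(expired_ids, sizeof(expired_ids) / sizeof(*expired_ids), s.id))
        return NFS4ERR_EXPIRED;
    return NFS4ERR_BAD_STATEID; /* unknown: don't claim the lease expired */
}
```

The server behaviour described in the thread collapses the last two cases into EXPIRED, which is exactly the assumption under discussion.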
On May 20, 2011, at 3:29 PM, Harry Edmon wrote:
> On 05/20/11 10:52, Trond Myklebust wrote:
> > [...]
> > [Trond's patch snipped]
>
> I installed this patch on my client, and now I am seeing the state
> manager appear in the process accounting file about once a minute rather
> than the constant respawning I saw earlier. Is once a minute normal, or
> is there still a problem?

The state manager sends the lease renew heart-beat. It should spawn
every lease period unless a lease-renewing operation (one with state)
happens to be sent.

-->Andy

> --
> Dr. Harry Edmon                E-MAIL: harry@uw.edu
> 206-543-0547   FAX: 206-543-0308       harry@atmos.washington.edu
> Director of IT, College of the Environment and
> Director of Computing, Dept of Atmospheric Sciences
> University of Washington, Box 351640, Seattle, WA 98195-1640
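[Editorial aside: Andy's heart-beat rule can be modelled with a one-line predicate. This is an assumption-laden sketch with an invented helper name, not the Linux client's renew code: an explicit RENEW is only needed when nothing else has renewed the lease within the last lease period.]

```c
/* Hypothetical model of the renew heart-beat: returns 1 if an explicit
 * RENEW must be sent at time `now`, given when the lease was last
 * renewed (by RENEW or by any lease-renewing operation). */
#include <assert.h>

static int need_explicit_renew(long now, long last_renewal, long lease_time)
{
    return (now - last_renewal) >= lease_time;
}
```

With a 90-second lease and an otherwise idle mount, this predicate fires roughly every lease period, which matches the "about once a minute" cadence Harry reports.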
On Fri, 2011-05-20 at 12:29 -0700, Harry Edmon wrote:
> On 05/20/11 10:52, Trond Myklebust wrote:
> > [...]
> > [patch snipped]
>
> I installed this patch on my client, and now I am seeing the state
> manager appear in the process accounting file about once a minute rather
> than the constant respawning I saw earlier. Is once a minute normal, or
> is there still a problem?

Once a minute is rather unusual... What kind of server are you running
against?

If it is a Linux server, what is the value contained in the virtual file
"/proc/fs/nfsd/nfsv4leasetime"?
On 05/20/11 12:40, Trond Myklebust wrote: > On Fri, 2011-05-20 at 12:29 -0700, Harry Edmon wrote: > >> On 05/20/11 10:52, Trond Myklebust wrote: >> >>> On Fri, 2011-05-20 at 13:26 -0400, Dr. J. Bruce Fields wrote: >>> >>> >>>> On Fri, May 20, 2011 at 09:20:47AM -0700, Harry Edmon wrote: >>>> >>>> >>>>> On 05/16/11 13:53, Dr. J. Bruce Fields wrote: >>>>> >>>>> >>>>>> Hm, so the renews all have clid 465ccc4d09000000, and the reads all have >>>>>> a stateid (0, 465ccc4dc24c0a0000000000). >>>>>> >>>>>> So the first 4 bytes matching just tells me both were handed out by the >>>>>> same server instance (so there was no server reboot in between); there's >>>>>> no way for me to tell whether they really belong to the same client. >>>>>> >>>>>> The server does assume that any stateid from the current server instance >>>>>> that no longer exists in its table is expired. I believe that's >>>>>> correct, given a correctly functioning client, but perhaps I'm missing a >>>>>> case. >>>>>> >>>>>> --b. >>>>>> >>>>>> >>>>> I am very appreciative of the quick initial comments I receive from >>>>> all of you on my NFS problem. I notice that there has been silence >>>>> on the problem since the 16th, so I assume that either this is a >>>>> hard bug to track down or you have been busy with higher priority >>>>> tasks. Is there anything I can do to help develop a solution to >>>>> this problem? >>>>> >>>>> >>>> Well, the only candidate explanation for the problem is that my >>>> assumption--that any time the server gets a stateid from the current >>>> boot instance that it doesn't recognize as an active stateid, it is safe >>>> for the server to return EXPIRED--is wrong. >>>> >>>> I don't immediately see why it's wrong, and based on the silence nobody >>>> else does either, but I'm not 100% convinced I'm right either. 
>>>> >>>> So one approach might be to add server code that makes a better effort >>>> to return EXPIRED only when we're sure it's a stateid from an expired >>>> client, and see if that solves your problem. >>>> >>>> Remind me, did you have an easy way to reproduce your problem? >>>> >>>> >>> My silence is simply because I'm mystified as to how this can happen. >>> Patching for it is trivial (see below). >>> >>> When the server tells us that our lease is expired, the normal behaviour >>> for the client is to re-establish the lease, and then proceed to recover >>> all known stateids. I don't see how we can 'miss' a stateid that then >>> needs to be recovered afterwards... >>> >>> Cheers >>> Trond >>> >>> 8<---------------------------------------------------------------------------- >>> From 920ddb153f28717be363f6e87dde24ef2a8d0ce2 Mon Sep 17 00:00:00 2001 >>> From: Trond Myklebust<Trond.Myklebust@netapp.com> >>> Date: Fri, 20 May 2011 13:44:02 -0400 >>> Subject: [PATCH] NFSv4: Handle expired stateids when the lease is still valid >>> >>> Currently, if the server returns NFS4ERR_EXPIRED in reply to a READ or >>> WRITE, but the RENEW test determines that the lease is still active, we >>> fail to recover and end up looping forever in a READ/WRITE + RENEW death >>> spiral. 
>>>
>>> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
>>> ---
>>>  fs/nfs/nfs4proc.c |    9 +++++++--
>>>  1 files changed, 7 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
>>> index cf1b339..d0e15db 100644
>>> --- a/fs/nfs/nfs4proc.c
>>> +++ b/fs/nfs/nfs4proc.c
>>> @@ -267,9 +267,11 @@ static int nfs4_handle_exception(struct nfs_server *server, int errorcode, struc
>>>  			break;
>>>  		nfs4_schedule_stateid_recovery(server, state);
>>>  		goto wait_on_recovery;
>>> +	case -NFS4ERR_EXPIRED:
>>> +		if (state != NULL)
>>> +			nfs4_schedule_stateid_recovery(server, state);
>>>  	case -NFS4ERR_STALE_STATEID:
>>>  	case -NFS4ERR_STALE_CLIENTID:
>>> -	case -NFS4ERR_EXPIRED:
>>>  		nfs4_schedule_lease_recovery(clp);
>>>  		goto wait_on_recovery;
>>>  #if defined(CONFIG_NFS_V4_1)
>>> @@ -3670,9 +3672,11 @@ nfs4_async_handle_error(struct rpc_task *task, const struct nfs_server *server,
>>>  			break;
>>>  		nfs4_schedule_stateid_recovery(server, state);
>>>  		goto wait_on_recovery;
>>> +	case -NFS4ERR_EXPIRED:
>>> +		if (state != NULL)
>>> +			nfs4_schedule_stateid_recovery(server, state);
>>>  	case -NFS4ERR_STALE_STATEID:
>>>  	case -NFS4ERR_STALE_CLIENTID:
>>> -	case -NFS4ERR_EXPIRED:
>>>  		nfs4_schedule_lease_recovery(clp);
>>>  		goto wait_on_recovery;
>>>  #if defined(CONFIG_NFS_V4_1)
>>> @@ -4543,6 +4547,7 @@ int nfs4_lock_delegation_recall(struct nfs4_state *state, struct file_lock *fl)
>>>  	case -ESTALE:
>>>  		goto out;
>>>  	case -NFS4ERR_EXPIRED:
>>> +		nfs4_schedule_stateid_recovery(server, state);
>>>  	case -NFS4ERR_STALE_CLIENTID:
>>>  	case -NFS4ERR_STALE_STATEID:
>>>  		nfs4_schedule_lease_recovery(server->nfs_client);
>>
>> I installed this patch on my client, and now I am seeing the state
>> manager appear in the process accounting file about once a minute rather
>> than the constant respawning I saw earlier. Is once a minute normal, or
>> is there still a problem?
>
> Once a minute is rather unusual... What kind of server are you running
> against?
>
> If it is a Linux server, what is the value contained in the virtual file
> "/proc/fs/nfsd/nfsv4leasetime"?

Same as before - Debian Squeeze running 2.6.38.6. The value of
/proc/fs/nfsd/nfsv4leasetime is 90 and is not something I changed.
On Fri, 2011-05-20 at 12:44 -0700, Harry Edmon wrote:
>> Once a minute is rather unusual... What kind of server are you running
>> against?
>>
>> If it is a Linux server, what is the value contained in the virtual file
>> "/proc/fs/nfsd/nfsv4leasetime"?
>
> Same as before - Debian Squeeze running 2.6.38.6. The value of
> /proc/fs/nfsd/nfsv4leasetime is 90 and is not something I changed.

OK... Does 'nfsstat' on the server show any 'delegreturn' updates around
the time when the state manager thread runs? It could just be that it is
reaping all your unused delegations.
On 05/20/11 13:11, Trond Myklebust wrote:
> On Fri, 2011-05-20 at 12:44 -0700, Harry Edmon wrote:
>>> Once a minute is rather unusual... What kind of server are you running
>>> against?
>>>
>>> If it is a Linux server, what is the value contained in the virtual file
>>> "/proc/fs/nfsd/nfsv4leasetime"?
>>
>> Same as before - Debian Squeeze running 2.6.38.6. The value of
>> /proc/fs/nfsd/nfsv4leasetime is 90 and is not something I changed.
>
> OK... Does 'nfsstat' on the server show any 'delegreturn' updates around
> the time when the state manager thread runs? It could just be that it is
> reaping all your unused delegations.

That number seems to be increasing all the time, whether or not the state
manager process appears in the accounting file. And this is only the one
NFS client for this server.
On Fri, 2011-05-20 at 13:23 -0700, Harry Edmon wrote:
> On 05/20/11 13:11, Trond Myklebust wrote:
>> On Fri, 2011-05-20 at 12:44 -0700, Harry Edmon wrote:
>>>> Once a minute is rather unusual... What kind of server are you running
>>>> against?
>>>>
>>>> If it is a Linux server, what is the value contained in the virtual file
>>>> "/proc/fs/nfsd/nfsv4leasetime"?
>>>
>>> Same as before - Debian Squeeze running 2.6.38.6. The value of
>>> /proc/fs/nfsd/nfsv4leasetime is 90 and is not something I changed.
>>
>> OK... Does 'nfsstat' on the server show any 'delegreturn' updates around
>> the time when the state manager thread runs? It could just be that it is
>> reaping all your unused delegations.
>
> That number seems to be increasing all the time, whether or not the
> state manager process appears in the accounting file. And this is only
> the one NFS client for this server.

OK. If your client is using delegations heavily then that would indeed
explain the 1 minute delay between state manager runs, since the renew
daemon runs once a minute, and will mark any unused delegations for
reaping by the state manager.
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index cf1b339..d0e15db 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -267,9 +267,11 @@ static int nfs4_handle_exception(struct nfs_server *server, int errorcode, struc
 			break;
 		nfs4_schedule_stateid_recovery(server, state);
 		goto wait_on_recovery;
+	case -NFS4ERR_EXPIRED:
+		if (state != NULL)
+			nfs4_schedule_stateid_recovery(server, state);
 	case -NFS4ERR_STALE_STATEID:
 	case -NFS4ERR_STALE_CLIENTID:
-	case -NFS4ERR_EXPIRED:
 		nfs4_schedule_lease_recovery(clp);
 		goto wait_on_recovery;
 #if defined(CONFIG_NFS_V4_1)
@@ -3670,9 +3672,11 @@ nfs4_async_handle_error(struct rpc_task *task, const struct nfs_server *server,
 			break;
 		nfs4_schedule_stateid_recovery(server, state);
 		goto wait_on_recovery;
+	case -NFS4ERR_EXPIRED:
+		if (state != NULL)
+			nfs4_schedule_stateid_recovery(server, state);
 	case -NFS4ERR_STALE_STATEID:
 	case -NFS4ERR_STALE_CLIENTID:
-	case -NFS4ERR_EXPIRED:
 		nfs4_schedule_lease_recovery(clp);
 		goto wait_on_recovery;
 #if defined(CONFIG_NFS_V4_1)
@@ -4543,6 +4547,7 @@ int nfs4_lock_delegation_recall(struct nfs4_state *state, struct file_lock *fl)
 	case -ESTALE:
 		goto out;
 	case -NFS4ERR_EXPIRED:
+		nfs4_schedule_stateid_recovery(server, state);
 	case -NFS4ERR_STALE_CLIENTID:
 	case -NFS4ERR_STALE_STATEID:
 		nfs4_schedule_lease_recovery(server->nfs_client);