
[6/6] nfsd: allow delegation state ids to be revoked and then freed

Message ID 20231101010049.27315-7-neilb@suse.de (mailing list archive)
State New, archived
Series: support admin-revocation of v4 state

Commit Message

NeilBrown Nov. 1, 2023, 12:57 a.m. UTC
Revoking state through 'unlock_filesystem' now revokes any delegation
states found.  When the stateids are then freed by the client, the
revoked stateids will be cleaned up correctly.

As there is already support for revoking delegations, we build on that
for admin-revoking.

Signed-off-by: NeilBrown <neilb@suse.de>
---
 fs/nfsd/nfs4state.c | 36 +++++++++++++++++++++++++++++-------
 1 file changed, 29 insertions(+), 7 deletions(-)

Comments

Chuck Lever Nov. 1, 2023, 2:10 a.m. UTC | #1
Howdy Neil-

> On Oct 31, 2023, at 5:57 PM, NeilBrown <neilb@suse.de> wrote:
> 
> Revoking state through 'unlock_filesystem' now revokes any delegation
> states found.  When the stateids are then freed by the client, the
> revoked stateids will be cleaned up correctly.

Here's my derpy question of the day.

"When the stateids are then freed by the client" seems to be
a repeating trope, and it concerns me a bit (probably because
I haven't yet learned how this mechanism /currently/ works)...

In the case when the client has actually vanished (eg, was
destroyed by an orchestrator), it's not going to be around
to actively free revoked state. Doesn't that situation result
in pinned state on the server? I would expect that's a primary
use case for "unlock_filesystem."

Maybe I've misunderstood something fundamental.


--
Chuck Lever
NeilBrown Nov. 1, 2023, 3:01 a.m. UTC | #2
On Wed, 01 Nov 2023, Chuck Lever III wrote:
> Howdy Neil-
> 
> > On Oct 31, 2023, at 5:57 PM, NeilBrown <neilb@suse.de> wrote:
> > 
> > Revoking state through 'unlock_filesystem' now revokes any delegation
> > states found.  When the stateids are then freed by the client, the
> > revoked stateids will be cleaned up correctly.
> 
> Here's my derpy question of the day.
> 
> "When the stateids are then freed by the client" seems to be
> a repeating trope, and it concerns me a bit (probably because
> I haven't yet learned how this mechanism /currently/ works)...
> 
> In the case when the client has actually vanished (eg, was
> destroyed by an orchestrator), it's not going to be around
> to actively free revoked state. Doesn't that situation result
> in pinned state on the server? I would expect that's a primary
> use case for "unlock_filesystem."

If a client is de-orchestrated then it will stop renewing its lease, and
regular cleanup of expired state will kick in after one lease period.
So for NFSv4 we don't need to worry about disappearing clients.
For NFSv3 (or more specifically for NLM) we did and locks could hang
around indefinitely if the client died.
For that reason we have /proc/fs/nfsd/unlock_ip which discards all NFSv3
lock state for a given client.  Extending that to NFSv4 is not needed
because of leases, and not meaningful because of trunking - a client
might have several IPs.

unlock_filesystem is for when the client is still active and we want to
let it (them) continue accessing some filesystems, but not all.
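
For illustration, a minimal sketch (not from nfs-utils; the address and
path below are made up, and error handling is kept to a minimum) of how a
tool might drive these two control files from C:

	#include <stdio.h>

	/* Write one line to a control file under /proc/fs/nfsd/.
	 * unlock_ip takes a client address and drops its NLM state;
	 * unlock_filesystem takes an export path and drops state held on
	 * that filesystem (with this series, NFSv4 state there is
	 * admin-revoked as well).
	 */
	int nfsd_ctl_write(const char *ctl, const char *arg)
	{
		char path[128];
		FILE *f;

		snprintf(path, sizeof(path), "/proc/fs/nfsd/%s", ctl);
		f = fopen(path, "w");
		if (!f)
			return -1;
		fprintf(f, "%s\n", arg);
		return fclose(f);
	}

	/* e.g. nfsd_ctl_write("unlock_ip", "192.0.2.7");
	 *      nfsd_ctl_write("unlock_filesystem", "/exports/data");
	 */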

NeilBrown


> 
> Maybe I've misunderstood something fundamental.
> 
> 
> --
> Chuck Lever
> 
> 
>
Chuck Lever Nov. 1, 2023, 5:41 a.m. UTC | #3
> On Oct 31, 2023, at 8:01 PM, NeilBrown <neilb@suse.de> wrote:
> 
> On Wed, 01 Nov 2023, Chuck Lever III wrote:
>> Howdy Neil-
>> 
>>> On Oct 31, 2023, at 5:57 PM, NeilBrown <neilb@suse.de> wrote:
>>> 
>>> Revoking state through 'unlock_filesystem' now revokes any delegation
>>> states found.  When the stateids are then freed by the client, the
>>> revoked stateids will be cleaned up correctly.
>> 
>> Here's my derpy question of the day.
>> 
>> "When the stateids are then freed by the client" seems to be
>> a repeating trope, and it concerns me a bit (probably because
>> I haven't yet learned how this mechanism /currently/ works)...
>> 
>> In the case when the client has actually vanished (eg, was
>> destroyed by an orchestrator), it's not going to be around
>> to actively free revoked state. Doesn't that situation result
>> in pinned state on the server? I would expect that's a primary
>> use case for "unlock_filesystem."
> 
> If a client is de-orchestrated then it will stop renewing its lease, and
> regular cleanup of expired state will kick in after one lease period.

Thanks for educating me.

Such state actually stays around for much longer now as
expired but renewable state. Does unlock_filesystem need
to purge courtesy state too, to make the target filesystem
unexportable and unmountable?


> So for NFSv4 we don't need to worry about disappearing clients.
> For NFSv3 (or more specifically for NLM) we did and locks could hang
> around indefinitely if the client died.
> For that reason we have /proc/fs/nfsd/unlock_ip which discards all NFSv3
> lock state for a given client.  Extending that to NFSv4 is not needed
> because of leases, and not meaningful because of trunking - a client
> might have several IP's.
> 
> unlock_filesystem is for when the client is still active and we want to
> let it (them) continue accessing some filesystems, but not all.
> 
> NeilBrown
> 
> 
>> 
>> Maybe I've misunderstood something fundamental.
>> 
>> 
>> --
>> Chuck Lever


--
Chuck Lever
NeilBrown Nov. 1, 2023, 7:43 a.m. UTC | #4
On Wed, 01 Nov 2023, Chuck Lever III wrote:
> 
> > On Oct 31, 2023, at 8:01 PM, NeilBrown <neilb@suse.de> wrote:
> > 
> > On Wed, 01 Nov 2023, Chuck Lever III wrote:
> >> Howdy Neil-
> >> 
> >>> On Oct 31, 2023, at 5:57 PM, NeilBrown <neilb@suse.de> wrote:
> >>> 
> >>> Revoking state through 'unlock_filesystem' now revokes any delegation
> >>> states found.  When the stateids are then freed by the client, the
> >>> revoked stateids will be cleaned up correctly.
> >> 
> >> Here's my derpy question of the day.
> >> 
> >> "When the stateids are then freed by the client" seems to be
> >> a repeating trope, and it concerns me a bit (probably because
> >> I haven't yet learned how this mechanism /currently/ works)...
> >> 
> >> In the case when the client has actually vanished (eg, was
> >> destroyed by an orchestrator), it's not going to be around
> >> to actively free revoked state. Doesn't that situation result
> >> in pinned state on the server? I would expect that's a primary
> >> use case for "unlock_filesystem."
> > 
> > If a client is de-orchestrated then it will stop renewing its lease, and
> > regular cleanup of expired state will kick in after one lease period.
> 
> Thanks for educating me.
> 
> Such state actually stays around for much longer now as
> expired but renewable state. Does unlock_filesystem need
> to purge courtesy state too, to make the target filesystem
> unexportable and unmountable?

I don't think there is any special case there that we need to deal with.
I haven't explored in detail but I think "courtesy" state is managed at
the client level.  Some clients still have valid leases, others are
being maintained only as a courtesy.  At the individual state level
there is no difference.  The "unlock_filesystem" code examines all
states for all clients and selects those for the target filesystem and
revokes those.
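
A stripped-down sketch of that walk, modelled on the nfsd4_revoke_states()
hunk in this patch (locking, refcounting and the open/lock stateid cases
are elided, and next_stid_for_sb() is a placeholder for the lookup helper
added earlier in the series):

	list_for_each_entry(clp, &nn->conf_id_hashtbl[idhashval], cl_idhash) {
		/* pull out one stateid at a time that lives on the target sb */
		while ((stid = next_stid_for_sb(clp, sb, sc_types)) != NULL) {
			switch (stid->sc_type) {
			case NFS4_DELEG_STID:
				dp = delegstateid(stid);
				if (unhash_delegation_locked(dp))
					/* mark it admin-revoked; the client
					 * frees it later with FREE_STATEID */
					revoke_delegation(dp,
						NFS4_ADMIN_REVOKED_DELEG_STID);
				break;
			/* NFS4_OPEN_STID / NFS4_LOCK_STID handled as before */
			}
			nfs4_put_stid(stid);
		}
	}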

NeilBrown


> 
> 
> > So for NFSv4 we don't need to worry about disappearing clients.
> > For NFSv3 (or more specifically for NLM) we did and locks could hang
> > around indefinitely if the client died.
> > For that reason we have /proc/fs/nfsd/unlock_ip which discards all NFSv3
> > lock state for a given client.  Extending that to NFSv4 is not needed
> > because of leases, and not meaningful because of trunking - a client
> > might have several IP's.
> > 
> > unlock_filesystem is for when the client is still active and we want to
> > let it (them) continue accessing some filesystems, but not all.
> > 
> > NeilBrown
> > 
> > 
> >> 
> >> Maybe I've misunderstood something fundamental.
> >> 
> >> 
> >> --
> >> Chuck Lever
> 
> 
> --
> Chuck Lever
> 
> 
>
Chuck Lever Nov. 1, 2023, 3:41 p.m. UTC | #5
> On Nov 1, 2023, at 12:43 AM, NeilBrown <neilb@suse.de> wrote:
> 
> On Wed, 01 Nov 2023, Chuck Lever III wrote:
>> 
>>> On Oct 31, 2023, at 8:01 PM, NeilBrown <neilb@suse.de> wrote:
>>> 
>>> On Wed, 01 Nov 2023, Chuck Lever III wrote:
>>>> Howdy Neil-
>>>> 
>>>>> On Oct 31, 2023, at 5:57 PM, NeilBrown <neilb@suse.de> wrote:
>>>>> 
>>>>> Revoking state through 'unlock_filesystem' now revokes any delegation
>>>>> states found.  When the stateids are then freed by the client, the
>>>>> revoked stateids will be cleaned up correctly.
>>>> 
>>>> Here's my derpy question of the day.
>>>> 
>>>> "When the stateids are then freed by the client" seems to be
>>>> a repeating trope, and it concerns me a bit (probably because
>>>> I haven't yet learned how this mechanism /currently/ works)...
>>>> 
>>>> In the case when the client has actually vanished (eg, was
>>>> destroyed by an orchestrator), it's not going to be around
>>>> to actively free revoked state. Doesn't that situation result
>>>> in pinned state on the server? I would expect that's a primary
>>>> use case for "unlock_filesystem."
>>> 
>>> If a client is de-orchestrated then it will stop renewing its lease, and
>>> regular cleanup of expired state will kick in after one lease period.
>> 
>> Thanks for educating me.
>> 
>> Such state actually stays around for much longer now as
>> expired but renewable state. Does unlock_filesystem need
>> to purge courtesy state too, to make the target filesystem
>> unexportable and unmountable?
> 
> I don't think there is any special case there that we need to deal with.
> I haven't explored in detail but I think "courtesy" state is managed at
> the client level.  Some clients still have valid leases, others are
> being maintained only as a courtesy.  At the individual state level
> there is no difference.  The "unlock_filesystem" code examines all
> states for all client and selects those for the target filesystem and
> revokes those.

Dai can correct me if I've misremembered, but NFSD's courteous
server does not currently implement partial loss of state. If
any of a client's state is lost while it is disconnected, the
server discards its entire lease.

Thus if an admin unlocks a filesystem that a disconnected client
has some open files on, that client's entire lease should be
purged.
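
Something like the following shape, to be clear about what I mean (purely
a hypothetical sketch, not code from this series; both helpers are
placeholder names):

	/* If admin revocation touches state belonging to a client that is
	 * only being kept as a courtesy, give up on the whole client rather
	 * than tracking partial loss of its state.
	 */
	if (client_is_courtesy(clp))
		expire_whole_client(clp);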


>>> So for NFSv4 we don't need to worry about disappearing clients.
>>> For NFSv3 (or more specifically for NLM) we did and locks could hang
>>> around indefinitely if the client died.
>>> For that reason we have /proc/fs/nfsd/unlock_ip which discards all NFSv3
>>> lock state for a given client.  Extending that to NFSv4 is not needed
>>> because of leases, and not meaningful because of trunking - a client
>>> might have several IP's.
>>> 
>>> unlock_filesystem is for when the client is still active and we want to
>>> let it (them) continue accessing some filesystems, but not all.
>>> 
>>> NeilBrown
>>> 
>>> 
>>>> 
>>>> Maybe I've misunderstood something fundamental.
>>>> 
>>>> 
>>>> --
>>>> Chuck Lever
>> 
>> 
>> --
>> Chuck Lever


--
Chuck Lever
Dai Ngo Nov. 1, 2023, 5:41 p.m. UTC | #6
On 11/1/23 8:41 AM, Chuck Lever III wrote:
>
>> On Nov 1, 2023, at 12:43 AM, NeilBrown <neilb@suse.de> wrote:
>>
>> On Wed, 01 Nov 2023, Chuck Lever III wrote:
>>>> On Oct 31, 2023, at 8:01 PM, NeilBrown <neilb@suse.de> wrote:
>>>>
>>>> On Wed, 01 Nov 2023, Chuck Lever III wrote:
>>>>> Howdy Neil-
>>>>>
>>>>>> On Oct 31, 2023, at 5:57 PM, NeilBrown <neilb@suse.de> wrote:
>>>>>>
>>>>>> Revoking state through 'unlock_filesystem' now revokes any delegation
>>>>>> states found.  When the stateids are then freed by the client, the
>>>>>> revoked stateids will be cleaned up correctly.
>>>>> Here's my derpy question of the day.
>>>>>
>>>>> "When the stateids are then freed by the client" seems to be
>>>>> a repeating trope, and it concerns me a bit (probably because
>>>>> I haven't yet learned how this mechanism /currently/ works)...
>>>>>
>>>>> In the case when the client has actually vanished (eg, was
>>>>> destroyed by an orchestrator), it's not going to be around
>>>>> to actively free revoked state. Doesn't that situation result
>>>>> in pinned state on the server? I would expect that's a primary
>>>>> use case for "unlock_filesystem."
>>>> If a client is de-orchestrated then it will stop renewing its lease, and
>>>> regular cleanup of expired state will kick in after one lease period.
>>> Thanks for educating me.
>>>
>>> Such state actually stays around for much longer now as
>>> expired but renewable state. Does unlock_filesystem need
>>> to purge courtesy state too, to make the target filesystem
>>> unexportable and unmountable?
>> I don't think there is any special case there that we need to deal with.
>> I haven't explored in detail but I think "courtesy" state is managed at
>> the client level.  Some clients still have valid leases, others are
>> being maintained only as a courtesy.  At the individual state level
>> there is no difference.  The "unlock_filesystem" code examines all
>> states for all client and selects those for the target filesystem and
>> revokes those.
> Dai can correct me if I've misremembered, but NFSD's courteous
> server does not currently implement partial loss of state. If
> any of a client's state is lost while it is disconnected, the
> server discards its entire lease.
>
> Thus if an admin unlocks a filesystem that a disconnected client
> has some open files on, that client's entire lease should be
> purged.

Neil is correct, courtesy state is managed at the client level. The
server does not deal with individual state of the courtesy client.
The courtesy clients and their states are allowed to be around until
the memory shrinker kicks in and the worker starts reaping clients in
NFSD4_COURTESY state.
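
Roughly (a sketch of that flow only, with placeholder helpers rather than
the actual nfsd functions):

	/* Under memory pressure the shrinker schedules the reaper worker,
	 * which walks the client list and expires clients still in
	 * NFSD4_COURTESY state, releasing all of their opens, locks and
	 * delegations at once.
	 */
	list_for_each_entry_safe(clp, tmp, &nn->client_lru, cl_lru) {
		if (client_state(clp) == NFSD4_COURTESY)
			add_client_to_reaplist(clp);	/* destroyed after the scan */
	}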

-Dai

>
>
>>>> So for NFSv4 we don't need to worry about disappearing clients.
>>>> For NFSv3 (or more specifically for NLM) we did and locks could hang
>>>> around indefinitely if the client died.
>>>> For that reason we have /proc/fs/nfsd/unlock_ip which discards all NFSv3
>>>> lock state for a given client.  Extending that to NFSv4 is not needed
>>>> because of leases, and not meaningful because of trunking - a client
>>>> might have several IP's.
>>>>
>>>> unlock_filesystem is for when the client is still active and we want to
>>>> let it (them) continue accessing some filesystems, but not all.
>>>>
>>>> NeilBrown
>>>>
>>>>
>>>>> Maybe I've misunderstood something fundamental.
>>>>>
>>>>>
>>>>> --
>>>>> Chuck Lever
>>>
>>> --
>>> Chuck Lever
>
> --
> Chuck Lever
>
>

Patch

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index ed879f68944b..9fdbdc64262d 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1367,7 +1367,8 @@  static void destroy_delegation(struct nfs4_delegation *dp)
 		destroy_unhashed_deleg(dp);
 }
 
-static void revoke_delegation(struct nfs4_delegation *dp)
+static void revoke_delegation(struct nfs4_delegation *dp,
+			      unsigned short sc_type)
 {
 	struct nfs4_client *clp = dp->dl_stid.sc_client;
 
@@ -1375,9 +1376,9 @@  static void revoke_delegation(struct nfs4_delegation *dp)
 
 	trace_nfsd_stid_revoke(&dp->dl_stid);
 
-	if (clp->cl_minorversion) {
+	if (clp->cl_minorversion || sc_type == NFS4_ADMIN_REVOKED_DELEG_STID) {
 		spin_lock(&clp->cl_lock);
-		dp->dl_stid.sc_type = NFS4_REVOKED_DELEG_STID;
+		dp->dl_stid.sc_type = sc_type;
 		refcount_inc(&dp->dl_stid.sc_count);
 		list_add(&dp->dl_recall_lru, &clp->cl_revoked);
 		spin_unlock(&clp->cl_lock);
@@ -1708,7 +1709,7 @@  void nfsd4_revoke_states(struct net *net, struct super_block *sb)
 	unsigned int idhashval;
 	unsigned short sc_types;
 
-	sc_types = NFS4_OPEN_STID | NFS4_LOCK_STID;
+	sc_types = NFS4_OPEN_STID | NFS4_LOCK_STID | NFS4_DELEG_STID;
 
 	spin_lock(&nn->client_lock);
 	for (idhashval = 0; idhashval < CLIENT_HASH_MASK; idhashval++) {
@@ -1720,6 +1721,7 @@  void nfsd4_revoke_states(struct net *net, struct super_block *sb)
 								  sc_types);
 			if (stid) {
 				struct nfs4_ol_stateid *stp;
+				struct nfs4_delegation *dp;
 
 				spin_unlock(&nn->client_lock);
 				switch (stid->sc_type) {
@@ -1758,6 +1760,18 @@  void nfsd4_revoke_states(struct net *net, struct super_block *sb)
 					}
 					mutex_unlock(&stp->st_mutex);
 					break;
+				case NFS4_DELEG_STID:
+					dp = delegstateid(stid);
+					spin_lock(&state_lock);
+					if (!unhash_delegation_locked(dp))
+						dp = NULL;
+					spin_unlock(&state_lock);
+					if (dp) {
+						list_del_init(&dp->dl_recall_lru);
+						revoke_delegation(
+							dp, NFS4_ADMIN_REVOKED_DELEG_STID);
+					}
+					break;
 				}
 				nfs4_put_stid(stid);
 				spin_lock(&nn->client_lock);
@@ -4695,6 +4709,7 @@  static void nfsd_drop_revoked_stid(struct nfs4_stid *s)
 	struct nfs4_client *cl = s->sc_client;
 	LIST_HEAD(reaplist);
 	struct nfs4_ol_stateid *stp;
+	struct nfs4_delegation *dp;
 	bool unhashed;
 
 	switch (s->sc_type) {
@@ -4712,6 +4727,12 @@  static void nfsd_drop_revoked_stid(struct nfs4_stid *s)
 		if (unhashed)
 			nfs4_put_stid(s);
 		break;
+	case NFS4_ADMIN_REVOKED_DELEG_STID:
+		dp = delegstateid(s);
+		list_del_init(&dp->dl_recall_lru);
+		spin_unlock(&cl->cl_lock);
+		nfs4_put_stid(s);
+		break;
 	default:
 		spin_unlock(&cl->cl_lock);
 	}
@@ -5073,8 +5094,9 @@  static int nfsd4_cb_recall_done(struct nfsd4_callback *cb,
 	trace_nfsd_cb_recall_done(&dp->dl_stid.sc_stateid, task);
 
 	if (dp->dl_stid.sc_type == NFS4_CLOSED_DELEG_STID ||
-	    dp->dl_stid.sc_type == NFS4_REVOKED_DELEG_STID)
-	        return 1;
+	    dp->dl_stid.sc_type == NFS4_REVOKED_DELEG_STID ||
+	    dp->dl_stid.sc_type == NFS4_ADMIN_REVOKED_DELEG_STID)
+		return 1;
 
 	switch (task->tk_status) {
 	case 0:
@@ -6438,7 +6460,7 @@  nfs4_laundromat(struct nfsd_net *nn)
 		dp = list_first_entry(&reaplist, struct nfs4_delegation,
 					dl_recall_lru);
 		list_del_init(&dp->dl_recall_lru);
-		revoke_delegation(dp);
+		revoke_delegation(dp, NFS4_REVOKED_DELEG_STID);
 	}
 
 	spin_lock(&nn->client_lock);