From patchwork Sat Mar 1 18:31:47 2025
From: cel@kernel.org
To: Neil Brown, Jeff Layton, Olga Kornievskaia, Dai Ngo, Tom Talpey
Cc: linux-nfs@vger.kernel.org, Chuck Lever
Subject: [PATCH v2 1/5] NFSD: OFFLOAD_CANCEL should mark an async COPY as completed
Date: Sat, 1 Mar 2025 13:31:47 -0500
Message-ID: <20250301183151.11362-2-cel@kernel.org>
In-Reply-To: <20250301183151.11362-1-cel@kernel.org>
References: <20250301183151.11362-1-cel@kernel.org>

From: Chuck Lever

Update the status of an async COPY operation when it has been
stopped. OFFLOAD_STATUS needs to indicate that the COPY is no
longer running.

Signed-off-by: Chuck Lever
---
 fs/nfsd/nfs4proc.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
index f6e06c779d09..9a0e68aa246f 100644
--- a/fs/nfsd/nfs4proc.c
+++ b/fs/nfsd/nfs4proc.c
@@ -1379,8 +1379,11 @@ static void nfs4_put_copy(struct nfsd4_copy *copy)
 static void nfsd4_stop_copy(struct nfsd4_copy *copy)
 {
 	trace_nfsd_copy_async_cancel(copy);
-	if (!test_and_set_bit(NFSD4_COPY_F_STOPPED, &copy->cp_flags))
+	if (!test_and_set_bit(NFSD4_COPY_F_STOPPED, &copy->cp_flags)) {
 		kthread_stop(copy->copy_task);
+		copy->nfserr = nfs_ok;
+		set_bit(NFSD4_COPY_F_COMPLETED, &copy->cp_flags);
+	}
 	nfs4_put_copy(copy);
 }
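
To illustrate the intent of this change, here is a minimal user-space sketch
(illustrative only, not the kernel code): once the cancel path records a final
status and sets a completed flag, a later OFFLOAD_STATUS-style query can report
the copy as finished rather than still running. Flag and field names beyond
those quoted in the patch are made up for the example.

/* Hypothetical user-space model of the flag handling; names are
 * illustrative, not the actual NFSD symbols. */
#include <stdbool.h>
#include <stdio.h>

enum { COPY_F_STOPPED = 1 << 0, COPY_F_COMPLETED = 1 << 1 };

struct copy_state {
        unsigned int flags;
        int nfserr;             /* 0 == nfs_ok */
};

/* Models nfsd4_stop_copy(): stopping the copy also records a final status. */
static void stop_copy(struct copy_state *copy)
{
        if (!(copy->flags & COPY_F_STOPPED)) {
                copy->flags |= COPY_F_STOPPED;
                copy->nfserr = 0;               /* nfs_ok */
                copy->flags |= COPY_F_COMPLETED;/* OFFLOAD_STATUS can now say "done" */
        }
}

/* Models what an OFFLOAD_STATUS reply needs to know. */
static bool copy_is_running(const struct copy_state *copy)
{
        return !(copy->flags & COPY_F_COMPLETED);
}

int main(void)
{
        struct copy_state copy = { 0 };

        stop_copy(&copy);
        printf("running after cancel: %s\n", copy_is_running(&copy) ? "yes" : "no");
        return 0;
}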
From patchwork Sat Mar 1 18:31:48 2025
From: cel@kernel.org
To: Neil Brown, Jeff Layton, Olga Kornievskaia, Dai Ngo, Tom Talpey
Cc: linux-nfs@vger.kernel.org, Chuck Lever
Subject: [PATCH v2 2/5] NFSD: Shorten CB_OFFLOAD response to NFS4ERR_DELAY
Date: Sat, 1 Mar 2025 13:31:48 -0500
Message-ID: <20250301183151.11362-3-cel@kernel.org>
In-Reply-To: <20250301183151.11362-1-cel@kernel.org>
References: <20250301183151.11362-1-cel@kernel.org>

From: Chuck Lever

Try not to prolong the wait for completion of a COPY or COPY_NOTIFY
operation.

Signed-off-by: Chuck Lever
---
 fs/nfsd/nfs4proc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
index 9a0e68aa246f..3431b695882d 100644
--- a/fs/nfsd/nfs4proc.c
+++ b/fs/nfsd/nfs4proc.c
@@ -1712,7 +1712,7 @@ static int nfsd4_cb_offload_done(struct nfsd4_callback *cb,
 	switch (task->tk_status) {
 	case -NFS4ERR_DELAY:
 		if (cbo->co_retries--) {
-			rpc_delay(task, 1 * HZ);
+			rpc_delay(task, HZ / 5);
 			return 0;
 		}
 	}
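
A quick back-of-the-envelope model of the retry budget this change affects;
the initial value of co_retries is not shown in the patch, so it is assumed
here purely for the arithmetic.

/* Minimal user-space model of the retry policy in nfsd4_cb_offload_done():
 * each NFS4ERR_DELAY from the client costs one retry and one fixed delay. */
#include <stdio.h>

int main(void)
{
        const int budget = 5;                   /* assumed initial co_retries */
        const double old_delay_s = 1.0;         /* rpc_delay(task, 1 * HZ)    */
        const double new_delay_s = 0.2;         /* rpc_delay(task, HZ / 5)    */

        printf("worst-case wait before giving up: %.1fs -> %.1fs\n",
               budget * old_delay_s, budget * new_delay_s);
        return 0;
}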
From patchwork Sat Mar 1 18:31:49 2025
From: cel@kernel.org
To: Neil Brown, Jeff Layton, Olga Kornievskaia, Dai Ngo, Tom Talpey
Cc: linux-nfs@vger.kernel.org, Chuck Lever
Subject: [PATCH v2 3/5] NFSD: Implement CB_SEQUENCE referring call lists
Date: Sat, 1 Mar 2025 13:31:49 -0500
Message-ID: <20250301183151.11362-4-cel@kernel.org>
In-Reply-To: <20250301183151.11362-1-cel@kernel.org>
References: <20250301183151.11362-1-cel@kernel.org>

From: Chuck Lever

We have yet to implement a mechanism in NFSD for resolving races
between a server's reply and a related callback operation. For
example, a CB_OFFLOAD callback can race with the matching COPY
response. The client will not recognize the copy state ID in the
CB_OFFLOAD callback until the COPY response arrives.

Trond adds:

> It is also needed for the same kind of race with delegation
> recalls, layout recalls, CB_NOTIFY_DEVICEID and would also be
> helpful (although not as strongly required) for CB_NOTIFY_LOCK.

RFC 8881 Section 20.9.3 describes referring call lists this way:

> The csa_referring_call_lists array is the list of COMPOUND
> requests, identified by session ID, slot ID, and sequence ID.
> These are requests that the client previously sent to the server.
> These previous requests created state that some operation(s) in
> the same CB_COMPOUND as the csa_referring_call_lists are
> identifying. A session ID is included because leased state is tied
> to a client ID, and a client ID can have multiple sessions. See
> Section 2.10.6.3.

Introduce the XDR infrastructure for populating the
csa_referring_call_lists argument of CB_SEQUENCE. Subsequent patches
will put the referring call list to use.

Note that cb_sequence_enc_sz estimates that at most one rcl is
included in each CB_SEQUENCE, but the new infrastructure can manage
any number of referring calls.
Signed-off-by: Chuck Lever
---
 fs/nfsd/nfs4callback.c | 132 +++++++++++++++++++++++++++++++++++++++--
 fs/nfsd/state.h        |  22 +++++++
 fs/nfsd/xdr4cb.h       |   5 +-
 3 files changed, 153 insertions(+), 6 deletions(-)

diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
index 484077200c5d..f1fffff69330 100644
--- a/fs/nfsd/nfs4callback.c
+++ b/fs/nfsd/nfs4callback.c
@@ -419,6 +419,29 @@ static u32 highest_slotid(struct nfsd4_session *ses)
 	return idx;
 }
 
+static void
+encode_referring_call4(struct xdr_stream *xdr,
+		       const struct nfsd4_referring_call *rc)
+{
+	encode_uint32(xdr, rc->rc_sequenceid);
+	encode_uint32(xdr, rc->rc_slotid);
+}
+
+static void
+encode_referring_call_list4(struct xdr_stream *xdr,
+			    const struct nfsd4_referring_call_list *rcl)
+{
+	struct nfsd4_referring_call *rc;
+	__be32 *p;
+
+	p = xdr_reserve_space(xdr, NFS4_MAX_SESSIONID_LEN);
+	xdr_encode_opaque_fixed(p, rcl->rcl_sessionid.data,
+				NFS4_MAX_SESSIONID_LEN);
+	encode_uint32(xdr, rcl->__nr_referring_calls);
+	list_for_each_entry(rc, &rcl->rcl_referring_calls, __list)
+		encode_referring_call4(xdr, rc);
+}
+
 /*
  * CB_SEQUENCE4args
  *
@@ -436,6 +459,7 @@ static void encode_cb_sequence4args(struct xdr_stream *xdr,
 				    struct nfs4_cb_compound_hdr *hdr)
 {
 	struct nfsd4_session *session = cb->cb_clp->cl_cb_session;
+	struct nfsd4_referring_call_list *rcl;
 	__be32 *p;
 
 	if (hdr->minorversion == 0)
@@ -444,12 +468,16 @@ static void encode_cb_sequence4args(struct xdr_stream *xdr,
 	encode_nfs_cb_opnum4(xdr, OP_CB_SEQUENCE);
 	encode_sessionid4(xdr, session);
 
-	p = xdr_reserve_space(xdr, 4 + 4 + 4 + 4 + 4);
+	p = xdr_reserve_space(xdr, XDR_UNIT * 4);
 	*p++ = cpu_to_be32(session->se_cb_seq_nr[cb->cb_held_slot]);	/* csa_sequenceid */
 	*p++ = cpu_to_be32(cb->cb_held_slot);		/* csa_slotid */
 	*p++ = cpu_to_be32(highest_slotid(session));	/* csa_highest_slotid */
 	*p++ = xdr_zero;				/* csa_cachethis */
-	xdr_encode_empty_array(p);			/* csa_referring_call_lists */
+
+	/* csa_referring_call_lists */
+	encode_uint32(xdr, cb->cb_nr_referring_call_list);
+	list_for_each_entry(rcl, &cb->cb_referring_call_list, __list)
+		encode_referring_call_list4(xdr, rcl);
 
 	hdr->nops++;
 }
@@ -1306,10 +1334,102 @@ static void nfsd41_destroy_cb(struct nfsd4_callback *cb)
 	nfsd41_cb_inflight_end(clp);
 }
 
-/*
- * TODO: cb_sequence should support referring call lists, cachethis,
- * and mark callback channel down on communication errors.
+/**
+ * nfsd41_cb_referring_call - add a referring call to a callback operation
+ * @cb: context of callback to add the rc to
+ * @sessionid: referring call's session ID
+ * @slotid: referring call's session slot index
+ * @seqno: referring call's slot sequence number
+ *
+ * Caller serializes access to @cb.
+ *
+ * NB: If memory allocation fails, the referring call is not added.
  */
+void nfsd41_cb_referring_call(struct nfsd4_callback *cb,
+			      struct nfs4_sessionid *sessionid,
+			      u32 slotid, u32 seqno)
+{
+	struct nfsd4_referring_call_list *rcl;
+	struct nfsd4_referring_call *rc;
+	bool found;
+
+	might_sleep();
+
+	found = false;
+	list_for_each_entry(rcl, &cb->cb_referring_call_list, __list) {
+		if (!memcmp(rcl->rcl_sessionid.data, sessionid->data,
+			    NFS4_MAX_SESSIONID_LEN)) {
+			found = true;
+			break;
+		}
+	}
+	if (!found) {
+		rcl = kmalloc(sizeof(*rcl), GFP_KERNEL);
+		if (!rcl)
+			return;
+		memcpy(rcl->rcl_sessionid.data, sessionid->data,
+		       NFS4_MAX_SESSIONID_LEN);
+		rcl->__nr_referring_calls = 0;
+		INIT_LIST_HEAD(&rcl->rcl_referring_calls);
+		list_add(&rcl->__list, &cb->cb_referring_call_list);
+		cb->cb_nr_referring_call_list++;
+	}
+
+	found = false;
+	list_for_each_entry(rc, &rcl->rcl_referring_calls, __list) {
+		if (rc->rc_sequenceid == seqno && rc->rc_slotid == slotid) {
+			found = true;
+			break;
+		}
+	}
+	if (!found) {
+		rc = kmalloc(sizeof(*rc), GFP_KERNEL);
+		if (!rc)
+			goto out;
+		rc->rc_sequenceid = seqno;
+		rc->rc_slotid = slotid;
+		rcl->__nr_referring_calls++;
+		list_add(&rc->__list, &rcl->rcl_referring_calls);
+	}
+
+out:
+	if (!rcl->__nr_referring_calls) {
+		cb->cb_nr_referring_call_list--;
+		kfree(rcl);
+	}
+}
+
+/**
+ * nfsd41_cb_destroy_referring_call_list - release referring call info
+ * @cb: context of a callback that has completed
+ *
+ * Callers who allocate referring calls using nfsd41_cb_referring_call() must
+ * release those resources by calling nfsd41_cb_destroy_referring_call_list.
+ *
+ * Caller serializes access to @cb.
+ */
+void nfsd41_cb_destroy_referring_call_list(struct nfsd4_callback *cb)
+{
+	struct nfsd4_referring_call_list *rcl;
+	struct nfsd4_referring_call *rc;
+
+	while (!list_empty(&cb->cb_referring_call_list)) {
+		rcl = list_first_entry(&cb->cb_referring_call_list,
+				       struct nfsd4_referring_call_list,
+				       __list);
+
+		while (!list_empty(&rcl->rcl_referring_calls)) {
+			rc = list_first_entry(&rcl->rcl_referring_calls,
+					      struct nfsd4_referring_call,
+					      __list);
+			list_del(&rc->__list);
+			kfree(rc);
+		}
+		list_del(&rcl->__list);
+		kfree(rcl);
+	}
+}
+
 static void nfsd4_cb_prepare(struct rpc_task *task, void *calldata)
 {
 	struct nfsd4_callback *cb = calldata;
@@ -1625,6 +1745,8 @@ void nfsd4_init_cb(struct nfsd4_callback *cb, struct nfs4_client *clp,
 	cb->cb_status = 0;
 	cb->cb_need_restart = false;
 	cb->cb_held_slot = -1;
+	cb->cb_nr_referring_call_list = 0;
+	INIT_LIST_HEAD(&cb->cb_referring_call_list);
 }
 
 /**
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index 74d2d7b42676..b4af840fc4f9 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -64,6 +64,21 @@ typedef struct {
 	refcount_t		cs_count;
 } copy_stateid_t;
 
+struct nfsd4_referring_call {
+	struct list_head	__list;
+
+	u32			rc_sequenceid;
+	u32			rc_slotid;
+};
+
+struct nfsd4_referring_call_list {
+	struct list_head	__list;
+
+	struct nfs4_sessionid	rcl_sessionid;
+	int			__nr_referring_calls;
+	struct list_head	rcl_referring_calls;
+};
+
 struct nfsd4_callback {
 	struct nfs4_client *cb_clp;
 	struct rpc_message cb_msg;
@@ -73,6 +88,9 @@ struct nfsd4_callback {
 	int cb_status;
 	int cb_held_slot;
 	bool cb_need_restart;
+
+	int cb_nr_referring_call_list;
+	struct list_head cb_referring_call_list;
 };
 
 struct nfsd4_callback_ops {
@@ -777,6 +795,10 @@ extern __be32 nfs4_check_open_reclaim(struct nfs4_client *);
 extern void nfsd4_probe_callback(struct nfs4_client *clp);
 extern void nfsd4_probe_callback_sync(struct nfs4_client *clp);
 extern void nfsd4_change_callback(struct nfs4_client *clp, struct nfs4_cb_conn *);
+extern void nfsd41_cb_referring_call(struct nfsd4_callback *cb,
+				     struct nfs4_sessionid *sessionid,
+				     u32 slotid, u32 seqno);
+extern void nfsd41_cb_destroy_referring_call_list(struct nfsd4_callback *cb);
 extern void nfsd4_init_cb(struct nfsd4_callback *cb, struct nfs4_client *clp,
 		const struct nfsd4_callback_ops *ops, enum nfsd4_cb_op op);
 extern bool nfsd4_run_cb(struct nfsd4_callback *cb);
diff --git a/fs/nfsd/xdr4cb.h b/fs/nfsd/xdr4cb.h
index f1a315cd31b7..f4e29c0c701c 100644
--- a/fs/nfsd/xdr4cb.h
+++ b/fs/nfsd/xdr4cb.h
@@ -6,8 +6,11 @@
 #define cb_compound_enc_hdr_sz		4
 #define cb_compound_dec_hdr_sz		(3 + (NFS4_MAXTAGLEN >> 2))
 #define sessionid_sz			(NFS4_MAX_SESSIONID_LEN >> 2)
+#define enc_referring_call4_sz		(1 + 1)
+#define enc_referring_call_list4_sz	(sessionid_sz + 1 + \
+					enc_referring_call4_sz)
 #define cb_sequence_enc_sz		(sessionid_sz + 4 + \
-					1 /* no referring calls list yet */)
+					enc_referring_call_list4_sz)
 #define cb_sequence_dec_sz		(op_dec_sz + sessionid_sz + 4)
 
 #define op_enc_sz			1
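
For readers unfamiliar with the data structure introduced above, here is a
simplified user-space model (not the kernel list_head code) of the two-level
bookkeeping: one entry per session ID, each holding the (slot, sequence) pairs
that will be encoded into csa_referring_call_lists. Type and field names are
illustrative; like the kernel code, an allocation failure silently skips the
referring call. Cleanup is omitted for brevity.

/* User-space sketch of referring-call bookkeeping; names are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SESSIONID_LEN 16

struct referring_call {
        unsigned int slotid, seqno;
        struct referring_call *next;
};

struct referring_call_list {
        unsigned char sessionid[SESSIONID_LEN];
        struct referring_call *calls;
        struct referring_call_list *next;
};

/* Add one referring call, creating the per-session list on first use. */
static void add_referring_call(struct referring_call_list **head,
                               const unsigned char *sessionid,
                               unsigned int slotid, unsigned int seqno)
{
        struct referring_call_list *rcl;
        struct referring_call *rc;

        for (rcl = *head; rcl; rcl = rcl->next)
                if (!memcmp(rcl->sessionid, sessionid, SESSIONID_LEN))
                        break;
        if (!rcl) {
                rcl = calloc(1, sizeof(*rcl));
                if (!rcl)
                        return;         /* skip on allocation failure */
                memcpy(rcl->sessionid, sessionid, SESSIONID_LEN);
                rcl->next = *head;
                *head = rcl;
        }

        rc = calloc(1, sizeof(*rc));
        if (!rc)
                return;
        rc->slotid = slotid;
        rc->seqno = seqno;
        rc->next = rcl->calls;
        rcl->calls = rc;
}

int main(void)
{
        struct referring_call_list *head = NULL;
        unsigned char sess[SESSIONID_LEN] = { 0xab };

        add_referring_call(&head, sess, 3, 42); /* the COPY's slot and seqno */
        for (struct referring_call_list *rcl = head; rcl; rcl = rcl->next)
                for (struct referring_call *rc = rcl->calls; rc; rc = rc->next)
                        printf("slot %u seq %u\n", rc->slotid, rc->seqno);
        return 0;
}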
From patchwork Sat Mar 1 18:31:50 2025
From: cel@kernel.org
To: Neil Brown, Jeff Layton, Olga Kornievskaia, Dai Ngo, Tom Talpey
Cc: linux-nfs@vger.kernel.org, Chuck Lever
Subject: [PATCH v2 4/5] NFSD: Record each NFSv4 call's session slot index
Date: Sat, 1 Mar 2025 13:31:50 -0500
Message-ID: <20250301183151.11362-5-cel@kernel.org>
In-Reply-To: <20250301183151.11362-1-cel@kernel.org>
References: <20250301183151.11362-1-cel@kernel.org>

From: Chuck Lever

The slot index number of the current COMPOUND has, until now, not
been needed outside of nfsd4_sequence(). But to record the tuple that
represents a referring call, the slot number will be needed when
processing subsequent operations in the COMPOUND.

Refactor the code that allocates a new struct nfsd4_slot to ensure
that the new sl_index field is always correctly initialized.
Signed-off-by: Chuck Lever
---
 fs/nfsd/nfs4state.c | 38 +++++++++++++++++++++-----------------
 fs/nfsd/state.h     |  1 +
 2 files changed, 22 insertions(+), 17 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 153eeea2c7c9..d25f2a65c2bc 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1989,26 +1989,30 @@ reduce_session_slots(struct nfsd4_session *ses, int dec)
 	return ret;
 }
 
-/*
- * We don't actually need to cache the rpc and session headers, so we
- * can allocate a little less for each slot:
- */
-static inline u32 slot_bytes(struct nfsd4_channel_attrs *ca)
+static struct nfsd4_slot *nfsd4_alloc_slot(struct nfsd4_channel_attrs *fattrs,
+					   int index, gfp_t gfp)
 {
-	u32 size;
+	struct nfsd4_slot *slot;
+	size_t size;
 
-	if (ca->maxresp_cached < NFSD_MIN_HDR_SEQ_SZ)
-		size = 0;
-	else
-		size = ca->maxresp_cached - NFSD_MIN_HDR_SEQ_SZ;
-	return size + sizeof(struct nfsd4_slot);
+	/*
+	 * The RPC and NFS session headers are never saved in
+	 * the slot reply cache buffer.
+	 */
+	size = fattrs->maxresp_cached < NFSD_MIN_HDR_SEQ_SZ ?
+		0 : fattrs->maxresp_cached - NFSD_MIN_HDR_SEQ_SZ;
+
+	slot = kzalloc(struct_size(slot, sl_data, size), gfp);
+	if (!slot)
+		return NULL;
+	slot->sl_index = index;
+	return slot;
 }
 
 static struct nfsd4_session *alloc_session(struct nfsd4_channel_attrs *fattrs,
					   struct nfsd4_channel_attrs *battrs)
 {
 	int numslots = fattrs->maxreqs;
-	int slotsize = slot_bytes(fattrs);
 	struct nfsd4_session *new;
 	struct nfsd4_slot *slot;
 	int i;
@@ -2017,14 +2021,14 @@ static struct nfsd4_session *alloc_session(struct nfsd4_channel_attrs *fattrs,
 	if (!new)
 		return NULL;
 	xa_init(&new->se_slots);
-	/* allocate each struct nfsd4_slot and data cache in one piece */
-	slot = kzalloc(slotsize, GFP_KERNEL);
+
+	slot = nfsd4_alloc_slot(fattrs, 0, GFP_KERNEL);
 	if (!slot || xa_is_err(xa_store(&new->se_slots, 0, slot, GFP_KERNEL)))
 		goto out_free;
 
 	for (i = 1; i < numslots; i++) {
 		const gfp_t gfp = GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;
-		slot = kzalloc(slotsize, gfp);
+		slot = nfsd4_alloc_slot(fattrs, i, gfp);
 		if (!slot)
 			break;
 		if (xa_is_err(xa_store(&new->se_slots, i, slot, gfp))) {
@@ -4438,8 +4442,8 @@ nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
			 * spinlock, and only succeeds if there is
			 * plenty of memory.
			 */
-			slot = kzalloc(slot_bytes(&session->se_fchannel),
-				       GFP_NOWAIT);
+			slot = nfsd4_alloc_slot(&session->se_fchannel, s,
+						GFP_NOWAIT);
			prev_slot = xa_load(&session->se_slots, s);
			if (xa_is_value(prev_slot) && slot) {
				slot->sl_seqid = xa_to_value(prev_slot);
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index b4af840fc4f9..a971c8503c37 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -279,6 +279,7 @@ struct nfsd4_slot {
 	u32	sl_seqid;
 	__be32	sl_status;
 	struct svc_cred sl_cred;
+	u32	sl_index;
 	u32	sl_datalen;
 	u16	sl_opcnt;
 	u16	sl_generation;
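
The allocation pattern adopted here, a slot with a flexible data array sized
from the channel attributes and its index fixed at allocation time, can be
modelled in user space as follows. Constant and field names are illustrative
stand-ins, not the kernel symbols.

/* User-space sketch of the nfsd4_alloc_slot() pattern; names are illustrative. */
#include <stdio.h>
#include <stdlib.h>

#define MIN_HDR_SEQ_SZ 128      /* stand-in for NFSD_MIN_HDR_SEQ_SZ */

struct slot {
        unsigned int index;     /* like sl_index: recorded once, at allocation */
        size_t datalen;
        unsigned char data[];   /* reply cache buffer, like sl_data */
};

static struct slot *alloc_slot(size_t maxresp_cached, unsigned int index)
{
        /* Headers are never cached, so the buffer can be a little smaller. */
        size_t size = maxresp_cached < MIN_HDR_SEQ_SZ ?
                        0 : maxresp_cached - MIN_HDR_SEQ_SZ;
        struct slot *slot = calloc(1, sizeof(*slot) + size);

        if (!slot)
                return NULL;
        slot->index = index;
        slot->datalen = size;
        return slot;
}

int main(void)
{
        struct slot *s = alloc_slot(4096, 7);

        if (s) {
                printf("slot %u, cache bytes %zu\n", s->index, s->datalen);
                free(s);
        }
        return 0;
}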
From patchwork Sat Mar 1 18:31:51 2025
From: cel@kernel.org
To: Neil Brown, Jeff Layton, Olga Kornievskaia, Dai Ngo, Tom Talpey
Cc: linux-nfs@vger.kernel.org, Chuck Lever, Trond Myklebust
Subject: [PATCH v2 5/5] NFSD: Use a referring call list for CB_OFFLOAD
Date: Sat, 1 Mar 2025 13:31:51 -0500
Message-ID: <20250301183151.11362-6-cel@kernel.org>
In-Reply-To: <20250301183151.11362-1-cel@kernel.org>
References: <20250301183151.11362-1-cel@kernel.org>

From: Chuck Lever

Help the client resolve the race between the reply to an asynchronous
COPY and the associated CB_OFFLOAD callback by planting the session,
slot, and sequence number of the COPY in the CB_SEQUENCE contained in
the CB_OFFLOAD COMPOUND.
Suggested-by: Trond Myklebust
Signed-off-by: Chuck Lever
---
 fs/nfsd/nfs4proc.c | 9 +++++++++
 fs/nfsd/xdr4.h     | 4 ++++
 2 files changed, 13 insertions(+)

diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
index 3431b695882d..48c1e3600d75 100644
--- a/fs/nfsd/nfs4proc.c
+++ b/fs/nfsd/nfs4proc.c
@@ -1716,6 +1716,7 @@ static int nfsd4_cb_offload_done(struct nfsd4_callback *cb,
 			return 0;
 		}
 	}
+	nfsd41_cb_destroy_referring_call_list(cb);
 	return 1;
 }
 
@@ -1848,6 +1849,9 @@ static void nfsd4_send_cb_offload(struct nfsd4_copy *copy)
 
 	nfsd4_init_cb(&cbo->co_cb, copy->cp_clp, &nfsd4_cb_offload_ops,
 		      NFSPROC4_CLNT_CB_OFFLOAD);
+	nfsd41_cb_referring_call(&cbo->co_cb, &cbo->co_referring_sessionid,
+				 cbo->co_referring_slotid,
+				 cbo->co_referring_seqno);
 	trace_nfsd_cb_offload(copy->cp_clp, &cbo->co_res.cb_stateid,
 			      &cbo->co_fh, copy->cp_count, copy->nfserr);
 	nfsd4_run_cb(&cbo->co_cb);
@@ -1964,6 +1968,11 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 		memcpy(&result->cb_stateid, &copy->cp_stateid.cs_stid,
 			sizeof(result->cb_stateid));
 		dup_copy_fields(copy, async_copy);
+		memcpy(async_copy->cp_cb_offload.co_referring_sessionid.data,
+		       cstate->session->se_sessionid.data,
+		       NFS4_MAX_SESSIONID_LEN);
+		async_copy->cp_cb_offload.co_referring_slotid = cstate->slot->sl_index;
+		async_copy->cp_cb_offload.co_referring_seqno = cstate->slot->sl_seqid;
 		async_copy->copy_task = kthread_create(nfsd4_do_async_copy,
 				async_copy, "%s", "copy thread");
 		if (IS_ERR(async_copy->copy_task))
diff --git a/fs/nfsd/xdr4.h b/fs/nfsd/xdr4.h
index c26ba86dbdfd..aa2a356da784 100644
--- a/fs/nfsd/xdr4.h
+++ b/fs/nfsd/xdr4.h
@@ -676,6 +676,10 @@ struct nfsd4_cb_offload {
 	__be32			co_nfserr;
 	unsigned int		co_retries;
 	struct knfsd_fh		co_fh;
+
+	struct nfs4_sessionid	co_referring_sessionid;
+	u32			co_referring_slotid;
+	u32			co_referring_seqno;
 };
 
 struct nfsd4_copy {
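
To close out the series, here is a hypothetical model of the client-side
decision that the referring call list enables; it is not actual NFS client
code, and the table and function names are invented for the example. If the
referring (session, slot, seqno) triple names a request whose reply the client
has not yet processed, the client can defer the CB_OFFLOAD instead of failing
on an unknown copy stateid.

/* Hypothetical model of the client-side check; names are illustrative. */
#include <stdbool.h>
#include <stdio.h>

struct referring_triple {
        unsigned int slotid;
        unsigned int seqno;
};

/* Pretend slot table: highest sequence number whose reply has been processed. */
static unsigned int reply_seen_up_to[8] = { [3] = 41 };

static bool must_defer_callback(const struct referring_triple *ref)
{
        /* Reply for the referring call not processed yet -> answer NFS4ERR_DELAY. */
        return reply_seen_up_to[ref->slotid] < ref->seqno;
}

int main(void)
{
        struct referring_triple ref = { .slotid = 3, .seqno = 42 };

        printf("defer CB_OFFLOAD: %s\n", must_defer_callback(&ref) ? "yes" : "no");
        return 0;
}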