From patchwork Mon Dec 11 15:24:08 2023
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 13487404
Subject: [PATCH v1 1/8] svcrdma: De-duplicate completion ID initialization helpers
From: Chuck Lever
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: tom@talpey.com
Date: Mon, 11 Dec 2023 10:24:08 -0500
Message-ID: <170230824865.90242.7760213084666437745.stgit@bazille.1015granger.net>
In-Reply-To: <170230788373.90242.9421368360904462120.stgit@bazille.1015granger.net>
References: <170230788373.90242.9421368360904462120.stgit@bazille.1015granger.net>

From: Chuck Lever

Signed-off-by: Chuck Lever
---
 include/linux/sunrpc/svc_rdma.h         |   24 ++++++++++++++++++++++++
 net/sunrpc/xprtrdma/svc_rdma_recvfrom.c |    7 -------
 net/sunrpc/xprtrdma/svc_rdma_rw.c       |    9 +--------
 net/sunrpc/xprtrdma/svc_rdma_sendto.c   |    7 -------
 4 files changed, 25 insertions(+), 22 deletions(-)

diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
index 051fefde8d51..46f2ce9f810b 100644
--- a/include/linux/sunrpc/svc_rdma.h
+++ b/include/linux/sunrpc/svc_rdma.h
@@ -134,6 +134,30 @@ enum {

 #define RPCSVC_MAXPAYLOAD_RDMA	RPCSVC_MAXPAYLOAD

+/**
+ * svc_rdma_recv_cid_init - Initialize a Receive Queue completion ID
+ * @rdma: controlling transport
+ * @cid: completion ID to initialize
+ */
+static inline void svc_rdma_recv_cid_init(struct svcxprt_rdma *rdma,
+					  struct rpc_rdma_cid *cid)
+{
+	cid->ci_queue_id = rdma->sc_rq_cq->res.id;
+	cid->ci_completion_id = atomic_inc_return(&rdma->sc_completion_ids);
+}
+
+/**
+ * svc_rdma_send_cid_init - Initialize a Send Queue completion ID
+ * @rdma: controlling transport
+ * @cid: completion ID to initialize
+ */
+static inline void svc_rdma_send_cid_init(struct svcxprt_rdma *rdma,
+					  struct rpc_rdma_cid *cid)
+{
+	cid->ci_queue_id = rdma->sc_sq_cq->res.id;
+	cid->ci_completion_id = atomic_inc_return(&rdma->sc_completion_ids);
+}
+
 /*
  * A chunk context tracks all I/O for moving one Read or Write
  * chunk.
  * This is a set of rdma_rw's that handle data movement

diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
index 392a91dc8a99..ac6351e292c5 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
@@ -115,13 +115,6 @@ svc_rdma_next_recv_ctxt(struct list_head *list)
 			rc_list);
 }

-static void svc_rdma_recv_cid_init(struct svcxprt_rdma *rdma,
-				   struct rpc_rdma_cid *cid)
-{
-	cid->ci_queue_id = rdma->sc_rq_cq->res.id;
-	cid->ci_completion_id = atomic_inc_return(&rdma->sc_completion_ids);
-}
-
 static struct svc_rdma_recv_ctxt *
 svc_rdma_recv_ctxt_alloc(struct svcxprt_rdma *rdma)
 {
diff --git a/net/sunrpc/xprtrdma/svc_rdma_rw.c b/net/sunrpc/xprtrdma/svc_rdma_rw.c
index 4d2db06ccfd2..eab71f3867fa 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_rw.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_rw.c
@@ -146,13 +146,6 @@ static int svc_rdma_rw_ctx_init(struct svcxprt_rdma *rdma,
 	return ret;
 }

-static void svc_rdma_cc_cid_init(struct svcxprt_rdma *rdma,
-				 struct rpc_rdma_cid *cid)
-{
-	cid->ci_queue_id = rdma->sc_sq_cq->res.id;
-	cid->ci_completion_id = atomic_inc_return(&rdma->sc_completion_ids);
-}
-
 /**
  * svc_rdma_cc_init - Initialize an svc_rdma_chunk_ctxt
  * @rdma: controlling transport instance
@@ -161,7 +154,7 @@ static void svc_rdma_cc_cid_init(struct svcxprt_rdma *rdma,
 void svc_rdma_cc_init(struct svcxprt_rdma *rdma,
 		      struct svc_rdma_chunk_ctxt *cc)
 {
-	svc_rdma_cc_cid_init(rdma, &cc->cc_cid);
+	svc_rdma_send_cid_init(rdma, &cc->cc_cid);

 	INIT_LIST_HEAD(&cc->cc_rwctxts);
 	cc->cc_sqecount = 0;
diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index 9571ed4a74d4..c9585e469ca8 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -113,13 +113,6 @@ static void svc_rdma_wc_send(struct ib_cq *cq, struct ib_wc *wc);

-static void svc_rdma_send_cid_init(struct svcxprt_rdma *rdma,
-				   struct rpc_rdma_cid *cid)
-{
-	cid->ci_queue_id = rdma->sc_sq_cq->res.id;
-	cid->ci_completion_id = atomic_inc_return(&rdma->sc_completion_ids);
-}
-
 static struct svc_rdma_send_ctxt *
 svc_rdma_send_ctxt_alloc(struct svcxprt_rdma *rdma)
 {
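
For readers following the series: the consolidated helpers stamp each
context with a completion ID at allocation time, so a later CQ
completion can be matched to the WR that produced it. A minimal usage
sketch (the rc_cid field and the tracepoint call illustrate how
svcrdma consumes these IDs; they are not additional code from this
patch):

	/* Allocation path: record which CQ this context completes on,
	 * plus a per-transport serial number.
	 */
	svc_rdma_recv_cid_init(rdma, &ctxt->rc_cid);

	/* Completion path: handlers can then report a stable ID. */
	trace_svcrdma_wc_receive(wc, &ctxt->rc_cid);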
From patchwork Mon Dec 11 15:24:15 2023
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 13487405
Subject: [PATCH v1 2/8] svcrdma: Optimize svc_rdma_cc_init()
From: Chuck Lever
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: tom@talpey.com
Date: Mon, 11 Dec 2023 10:24:15 -0500
Message-ID: <170230825516.90242.7046834370878992757.stgit@bazille.1015granger.net>
In-Reply-To: <170230788373.90242.9421368360904462120.stgit@bazille.1015granger.net>
References: <170230788373.90242.9421368360904462120.stgit@bazille.1015granger.net>

From: Chuck Lever

The atomic_inc_return() in svc_rdma_send_cid_init() is expensive.
Some svc_rdma_chunk_ctxt's now reside in long-lived container
structures. They don't need a fresh completion ID for every I/O
operation.

Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/svc_rdma_recvfrom.c |    2 +-
 net/sunrpc/xprtrdma/svc_rdma_rw.c       |    9 +++++----
 net/sunrpc/xprtrdma/svc_rdma_sendto.c   |    2 +-
 3 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
index ac6351e292c5..38f01652dc6d 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
@@ -123,7 +123,7 @@ svc_rdma_recv_ctxt_alloc(struct svcxprt_rdma *rdma)
 	dma_addr_t addr;
 	void *buffer;

-	ctxt = kmalloc_node(sizeof(*ctxt), GFP_KERNEL, node);
+	ctxt = kzalloc_node(sizeof(*ctxt), GFP_KERNEL, node);
 	if (!ctxt)
 		goto fail0;
 	buffer = kmalloc_node(rdma->sc_max_req_size, GFP_KERNEL, node);
diff --git a/net/sunrpc/xprtrdma/svc_rdma_rw.c b/net/sunrpc/xprtrdma/svc_rdma_rw.c
index eab71f3867fa..ff54bb268b7d 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_rw.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_rw.c
@@ -154,7 +154,10 @@ static int svc_rdma_rw_ctx_init(struct svcxprt_rdma *rdma,
 void svc_rdma_cc_init(struct svcxprt_rdma *rdma,
 		      struct svc_rdma_chunk_ctxt *cc)
 {
-	svc_rdma_send_cid_init(rdma, &cc->cc_cid);
+	struct rpc_rdma_cid *cid = &cc->cc_cid;
+
+	if (unlikely(!cid->ci_completion_id))
+		svc_rdma_send_cid_init(rdma, cid);

 	INIT_LIST_HEAD(&cc->cc_rwctxts);
 	cc->cc_sqecount = 0;
@@ -221,15 +224,13 @@ svc_rdma_write_info_alloc(struct svcxprt_rdma *rdma,
 {
 	struct svc_rdma_write_info *info;

-	info = kmalloc_node(sizeof(*info), GFP_KERNEL,
+	info = kzalloc_node(sizeof(*info), GFP_KERNEL,
 			    ibdev_to_node(rdma->sc_cm_id->device));
 	if (!info)
 		return info;

 	info->wi_rdma = rdma;
 	info->wi_chunk = chunk;
-	info->wi_seg_off = 0;
-	info->wi_seg_no = 0;
 	svc_rdma_cc_init(rdma, &info->wi_cc);
 	info->wi_cc.cc_cqe.done = svc_rdma_write_done;
 	return info;
diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index c9585e469ca8..1a49b7f02041 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -122,7 +122,7 @@ svc_rdma_send_ctxt_alloc(struct svcxprt_rdma *rdma)
 	void *buffer;
 	int i;

-	ctxt = kmalloc_node(struct_size(ctxt, sc_sges, rdma->sc_max_send_sges),
+	ctxt = kzalloc_node(struct_size(ctxt, sc_sges, rdma->sc_max_send_sges),
 			    GFP_KERNEL, node);
 	if (!ctxt)
 		goto fail0;
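
The kmalloc-to-kzalloc conversions are what make this optimization
safe: contexts now start zero-filled, so a ci_completion_id of zero
reliably means "never stamped" (atomic_inc_return() hands out 1 as
its first value, so zero is never a valid ID). A condensed view of
the pattern, as a sketch rather than further code from the patch:

	/* Long-lived chunk contexts keep their first completion ID;
	 * only the initial call pays for the atomic_inc_return().
	 */
	if (unlikely(!cid->ci_completion_id))
		svc_rdma_send_cid_init(rdma, cid);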
From patchwork Mon Dec 11 15:24:21 2023
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 13487406
Subject: [PATCH v1 3/8] svcrdma: Remove pointer addresses shown in dprintk()
From: Chuck Lever
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: tom@talpey.com
Date: Mon, 11 Dec 2023 10:24:21 -0500
Message-ID: <170230826159.90242.7084253882158192266.stgit@bazille.1015granger.net>
In-Reply-To: <170230788373.90242.9421368360904462120.stgit@bazille.1015granger.net>
References: <170230788373.90242.9421368360904462120.stgit@bazille.1015granger.net>

From: Chuck Lever

There are a couple of dprintk() call sites in svc_rdma_accept()
that show pointer addresses. These days, displayed pointer addresses
are hashed and thus have little or no diagnostic value, especially
for site administrators.

Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/svc_rdma_transport.c |    4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 3826da1c15f3..451814eb12b9 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -457,8 +457,6 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 	qp_attr.qp_type = IB_QPT_RC;
 	qp_attr.send_cq = newxprt->sc_sq_cq;
 	qp_attr.recv_cq = newxprt->sc_rq_cq;
-	dprintk("svcrdma: newxprt->sc_cm_id=%p, newxprt->sc_pd=%p\n",
-		newxprt->sc_cm_id, newxprt->sc_pd);
 	dprintk("    cap.max_send_wr = %d, cap.max_recv_wr = %d\n",
 		qp_attr.cap.max_send_wr, qp_attr.cap.max_recv_wr);
 	dprintk("    cap.max_send_sge = %d, cap.max_recv_sge = %d\n",
@@ -512,7 +510,7 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 	}

 #if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
-	dprintk("svcrdma: new connection %p accepted:\n", newxprt);
+	dprintk("svcrdma: new connection accepted on device %s:\n", dev->name);
 	sap = (struct sockaddr *)&newxprt->sc_cm_id->route.addr.src_addr;
 	dprintk("    local address   : %pIS:%u\n", sap, rpc_get_port(sap));
 	sap = (struct sockaddr *)&newxprt->sc_cm_id->route.addr.dst_addr;
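
Background for this patch: since v4.15, a plain %p conversion in the
kernel's printf prints a per-boot hashed value rather than a raw
address. The removed line would therefore emit something like the
following (values invented for illustration), which cannot be
correlated with any other kernel output:

	svcrdma: newxprt->sc_cm_id=000000006d02a496, newxprt->sc_pd=00000000cb8f266e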
From patchwork Mon Dec 11 15:24:28 2023
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 13487407
Subject: [PATCH v1 4/8] svcrdma: Remove queue-shortening warnings
From: Chuck Lever
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: tom@talpey.com
Date: Mon, 11 Dec 2023 10:24:28 -0500
Message-ID: <170230826804.90242.16823273103656314417.stgit@bazille.1015granger.net>
In-Reply-To: <170230788373.90242.9421368360904462120.stgit@bazille.1015granger.net>
References: <170230788373.90242.9421368360904462120.stgit@bazille.1015granger.net>

From: Chuck Lever

These won't have much diagnostic value for site administrators.
Since they can't be disabled, they become noise. What's more, the
subsequent rdma_create_qp() call adjusts the Send Queue size
(possibly downward) without warning, making the sizes reported by
these pr_warn() calls inaccurate.

Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/svc_rdma_transport.c |    7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 451814eb12b9..040d2ef6400c 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -412,8 +412,6 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 	rq_depth = newxprt->sc_max_requests + newxprt->sc_max_bc_requests +
 		   newxprt->sc_recv_batch;
 	if (rq_depth > dev->attrs.max_qp_wr) {
-		pr_warn("svcrdma: reducing receive depth to %d\n",
-			dev->attrs.max_qp_wr);
 		rq_depth = dev->attrs.max_qp_wr;
 		newxprt->sc_recv_batch = 1;
 		newxprt->sc_max_requests = rq_depth - 2;
@@ -423,11 +421,8 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 	ctxts = rdma_rw_mr_factor(dev, newxprt->sc_port_num, RPCSVC_MAXPAGES);
 	ctxts *= newxprt->sc_max_requests;
 	newxprt->sc_sq_depth = rq_depth + ctxts;
-	if (newxprt->sc_sq_depth > dev->attrs.max_qp_wr) {
-		pr_warn("svcrdma: reducing send depth to %d\n",
-			dev->attrs.max_qp_wr);
+	if (newxprt->sc_sq_depth > dev->attrs.max_qp_wr)
 		newxprt->sc_sq_depth = dev->attrs.max_qp_wr;
-	}
 	atomic_set(&newxprt->sc_sq_avail, newxprt->sc_sq_depth);

 	newxprt->sc_pd = ib_alloc_pd(dev, 0);
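
A worked example of the clamp those warnings reported on (all numbers
hypothetical): suppose sc_max_requests = 64, sc_max_bc_requests = 2,
and sc_recv_batch = 16, so rq_depth = 82 on a device whose
attrs.max_qp_wr is only 64. The quiet renegotiation then does:

	rq_depth = dev->attrs.max_qp_wr;		/* 82 becomes 64 */
	newxprt->sc_recv_batch = 1;
	newxprt->sc_max_requests = rq_depth - 2;	/* 62 */
	newxprt->sc_max_bc_requests = 2;

And since rdma_create_qp() may shrink the queues again afterwards, a
pr_warn() at this point could never report a trustworthy final size.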
From patchwork Mon Dec 11 15:24:34 2023
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 13487408
Subject: [PATCH v1 5/8] svcrdma: Clean up comment in svc_rdma_accept()
From: Chuck Lever
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: tom@talpey.com
Date: Mon, 11 Dec 2023 10:24:34 -0500
Message-ID: <170230827445.90242.4868169879235583897.stgit@bazille.1015granger.net>
In-Reply-To: <170230788373.90242.9421368360904462120.stgit@bazille.1015granger.net>
References: <170230788373.90242.9421368360904462120.stgit@bazille.1015granger.net>

From: Chuck Lever

The comment that starts "Qualify ..." applies to only some of the
following code paragraph. Re-arrange the lines so the comment makes
more sense.

Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/svc_rdma_transport.c |   17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 040d2ef6400c..8127c711fa3b 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -397,18 +397,22 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 	dev = newxprt->sc_cm_id->device;
 	newxprt->sc_port_num = newxprt->sc_cm_id->port_num;

-	/* Qualify the transport resource defaults with the
-	 * capabilities of this particular device */
+	newxprt->sc_max_req_size = svcrdma_max_req_size;
+	newxprt->sc_max_requests = svcrdma_max_requests;
+	newxprt->sc_max_bc_requests = svcrdma_max_bc_requests;
+	newxprt->sc_recv_batch = RPCRDMA_MAX_RECV_BATCH;
+	newxprt->sc_fc_credits = cpu_to_be32(newxprt->sc_max_requests);
+
+	/* Qualify the transport's resource defaults with the
+	 * capabilities of this particular device.
+	 */
+
 	/* Transport header, head iovec, tail iovec */
 	newxprt->sc_max_send_sges = 3;
 	/* Add one SGE per page list entry */
 	newxprt->sc_max_send_sges += (svcrdma_max_req_size / PAGE_SIZE) + 1;
 	if (newxprt->sc_max_send_sges > dev->attrs.max_send_sge)
 		newxprt->sc_max_send_sges = dev->attrs.max_send_sge;
-	newxprt->sc_max_req_size = svcrdma_max_req_size;
-	newxprt->sc_max_requests = svcrdma_max_requests;
-	newxprt->sc_max_bc_requests = svcrdma_max_bc_requests;
-	newxprt->sc_recv_batch = RPCRDMA_MAX_RECV_BATCH;
 	rq_depth = newxprt->sc_max_requests + newxprt->sc_max_bc_requests +
 		   newxprt->sc_recv_batch;
 	if (rq_depth > dev->attrs.max_qp_wr) {
@@ -417,7 +421,6 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 		newxprt->sc_max_requests = rq_depth - 2;
 		newxprt->sc_max_bc_requests = 2;
 	}
-	newxprt->sc_fc_credits = cpu_to_be32(newxprt->sc_max_requests);
 	ctxts = rdma_rw_mr_factor(dev, newxprt->sc_port_num, RPCSVC_MAXPAGES);
 	ctxts *= newxprt->sc_max_requests;
 	newxprt->sc_sq_depth = rq_depth + ctxts;
From patchwork Mon Dec 11 15:24:40 2023
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 13487409
Subject: [PATCH v1 6/8] svcrdma: Reserve an extra WQE for ib_drain_rq()
From: Chuck Lever
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: tom@talpey.com
Date: Mon, 11 Dec 2023 10:24:40 -0500
Message-ID: <170230828089.90242.3890762157877508465.stgit@bazille.1015granger.net>
In-Reply-To: <170230788373.90242.9421368360904462120.stgit@bazille.1015granger.net>
References: <170230788373.90242.9421368360904462120.stgit@bazille.1015granger.net>

From: Chuck Lever

Do as other ULPs already do: ensure there is an extra Receive WQE
reserved for the tear-down drain WR. I haven't heard reports of
problems but it can't hurt.

Note that rq_depth is used to compute the Send Queue depth as well,
so this fix should affect both the SQ and RQ.

Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/svc_rdma_transport.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 8127c711fa3b..0541aa54674c 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -414,7 +414,7 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 	if (newxprt->sc_max_send_sges > dev->attrs.max_send_sge)
 		newxprt->sc_max_send_sges = dev->attrs.max_send_sge;
 	rq_depth = newxprt->sc_max_requests + newxprt->sc_max_bc_requests +
-		   newxprt->sc_recv_batch;
+		   newxprt->sc_recv_batch + 1 /* drain */;
 	if (rq_depth > dev->attrs.max_qp_wr) {
 		rq_depth = dev->attrs.max_qp_wr;
 		newxprt->sc_recv_batch = 1;
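
For reference, a sketch of the teardown this reservation protects
(svcrdma drains its QP when tearing down a transport; the call shown
is the generic verbs helper):

	/* ib_drain_qp() posts one final marker WR on the SQ and one on
	 * the RQ and waits for their completions. The "+ 1 (drain)"
	 * above guarantees the Receive Queue has a spare WQE slot for
	 * that marker even when the queue is otherwise full.
	 */
	ib_drain_qp(rdma->sc_qp);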
From patchwork Mon Dec 11 15:24:47 2023
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 13487410
Subject: [PATCH v1 7/8] svcrdma: Use all allocated Send Queue entries
From: Chuck Lever
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: tom@talpey.com
Date: Mon, 11 Dec 2023 10:24:47 -0500
Message-ID: <170230828734.90242.8027061101409867750.stgit@bazille.1015granger.net>
In-Reply-To: <170230788373.90242.9421368360904462120.stgit@bazille.1015granger.net>
References: <170230788373.90242.9421368360904462120.stgit@bazille.1015granger.net>

From: Chuck Lever

For upper layer protocols that request rw_ctxs, ib_create_qp()
adjusts ib_qp_init_attr::max_send_wr to accommodate the WQEs those
rw_ctxs will consume. See rdma_rw_init_qp() for details.

To actually use those additional WQEs, svc_rdma_accept() needs to
retrieve the corrected SQ depth after calling rdma_create_qp() and
set newxprt->sc_sq_depth and newxprt->sc_sq_avail so that
svc_rdma_send() and svc_rdma_post_chunk_ctxt() can utilize those
WQEs. The NVMe target driver, for example, already does this
properly.

Fixes: 26fb2254dd33 ("svcrdma: Estimate Send Queue depth properly")
Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/svc_rdma_transport.c |   39 +++++++++++++++++++-----------
 1 file changed, 25 insertions(+), 14 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 0541aa54674c..790841864153 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -369,12 +369,12 @@ static struct svc_xprt *svc_rdma_create(struct svc_serv *serv,
  */
 static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 {
+	unsigned int ctxts, rq_depth, sq_depth;
 	struct svcxprt_rdma *listen_rdma;
 	struct svcxprt_rdma *newxprt = NULL;
 	struct rdma_conn_param conn_param;
 	struct rpcrdma_connect_private pmsg;
 	struct ib_qp_init_attr qp_attr;
-	unsigned int ctxts, rq_depth;
 	struct ib_device *dev;
 	int ret = 0;
 	RPC_IFDEBUG(struct sockaddr *sap);
@@ -421,24 +421,32 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 		newxprt->sc_max_requests = rq_depth - 2;
 		newxprt->sc_max_bc_requests = 2;
 	}
+
 	ctxts = rdma_rw_mr_factor(dev, newxprt->sc_port_num, RPCSVC_MAXPAGES);
 	ctxts *= newxprt->sc_max_requests;
-	newxprt->sc_sq_depth = rq_depth + ctxts;
-	if (newxprt->sc_sq_depth > dev->attrs.max_qp_wr)
-		newxprt->sc_sq_depth = dev->attrs.max_qp_wr;
-	atomic_set(&newxprt->sc_sq_avail, newxprt->sc_sq_depth);
+
+	sq_depth = newxprt->sc_max_requests + newxprt->sc_max_bc_requests + 1;
+	if (sq_depth > dev->attrs.max_qp_wr)
+		sq_depth = dev->attrs.max_qp_wr;

 	newxprt->sc_pd = ib_alloc_pd(dev, 0);
 	if (IS_ERR(newxprt->sc_pd)) {
 		trace_svcrdma_pd_err(newxprt, PTR_ERR(newxprt->sc_pd));
 		goto errout;
 	}
-	newxprt->sc_sq_cq = ib_alloc_cq_any(dev, newxprt, newxprt->sc_sq_depth,
+
+	/* The Completion Queue depth is the maximum number of signaled
+	 * WRs expected to be in flight. Every Send WR is signaled, and
+	 * each rw_ctx has a chain of WRs, but only one WR in each chain
+	 * is signaled.
+	 */
+	newxprt->sc_sq_cq = ib_alloc_cq_any(dev, newxprt, sq_depth + ctxts,
 					    IB_POLL_WORKQUEUE);
 	if (IS_ERR(newxprt->sc_sq_cq))
 		goto errout;
-	newxprt->sc_rq_cq =
-		ib_alloc_cq_any(dev, newxprt, rq_depth, IB_POLL_WORKQUEUE);
+	/* Every Receive WR is signaled.
+	 */
+	newxprt->sc_rq_cq = ib_alloc_cq_any(dev, newxprt, rq_depth,
+					    IB_POLL_WORKQUEUE);
 	if (IS_ERR(newxprt->sc_rq_cq))
 		goto errout;

@@ -447,7 +455,7 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 	qp_attr.qp_context = &newxprt->sc_xprt;
 	qp_attr.port_num = newxprt->sc_port_num;
 	qp_attr.cap.max_rdma_ctxs = ctxts;
-	qp_attr.cap.max_send_wr = newxprt->sc_sq_depth - ctxts;
+	qp_attr.cap.max_send_wr = sq_depth;
 	qp_attr.cap.max_recv_wr = rq_depth;
 	qp_attr.cap.max_send_sge = newxprt->sc_max_send_sges;
 	qp_attr.cap.max_recv_sge = 1;
@@ -455,17 +463,20 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 	qp_attr.qp_type = IB_QPT_RC;
 	qp_attr.send_cq = newxprt->sc_sq_cq;
 	qp_attr.recv_cq = newxprt->sc_rq_cq;
-	dprintk("    cap.max_send_wr = %d, cap.max_recv_wr = %d\n",
-		qp_attr.cap.max_send_wr, qp_attr.cap.max_recv_wr);
-	dprintk("    cap.max_send_sge = %d, cap.max_recv_sge = %d\n",
-		qp_attr.cap.max_send_sge, qp_attr.cap.max_recv_sge);
-
 	ret = rdma_create_qp(newxprt->sc_cm_id, newxprt->sc_pd, &qp_attr);
 	if (ret) {
 		trace_svcrdma_qp_err(newxprt, ret);
 		goto errout;
 	}
+	dprintk("svcrdma: cap.max_send_wr = %d, cap.max_recv_wr = %d\n",
+		qp_attr.cap.max_send_wr, qp_attr.cap.max_recv_wr);
+	dprintk("    cap.max_send_sge = %d, cap.max_recv_sge = %d\n",
+		qp_attr.cap.max_send_sge, qp_attr.cap.max_recv_sge);
+	dprintk("    send CQ depth = %d, recv CQ depth = %d\n",
+		sq_depth, rq_depth);
 	newxprt->sc_qp = newxprt->sc_cm_id->qp;
+	newxprt->sc_sq_depth = qp_attr.cap.max_send_wr;
+	atomic_set(&newxprt->sc_sq_avail, qp_attr.cap.max_send_wr);

 	if (!(dev->attrs.device_cap_flags & IB_DEVICE_MEM_MGT_EXTENSIONS))
 		newxprt->sc_snd_w_inv = false;
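
The heart of the fix, condensed (a sketch of the pattern rather than
new code): trust the capacities the provider hands back, not the ones
that were requested.

	qp_attr.cap.max_rdma_ctxs = ctxts;
	qp_attr.cap.max_send_wr = sq_depth;
	ret = rdma_create_qp(newxprt->sc_cm_id, newxprt->sc_pd, &qp_attr);
	/* rdma_rw_init_qp() has grown cap.max_send_wr to cover the
	 * WQEs the requested rw_ctxs will consume; size the SQ
	 * accounting from the adjusted value:
	 */
	newxprt->sc_sq_depth = qp_attr.cap.max_send_wr;
	atomic_set(&newxprt->sc_sq_avail, qp_attr.cap.max_send_wr);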
From patchwork Mon Dec 11 15:24:53 2023
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 13487411
Subject: [PATCH v1 8/8] svcrdma: Increase the per-transport rw_ctx count
From: Chuck Lever
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: tom@talpey.com
Date: Mon, 11 Dec 2023 10:24:53 -0500
Message-ID: <170230829373.90242.11114271955743181616.stgit@bazille.1015granger.net>
In-Reply-To: <170230788373.90242.9421368360904462120.stgit@bazille.1015granger.net>
References: <170230788373.90242.9421368360904462120.stgit@bazille.1015granger.net>

From: Chuck Lever

rdma_rw_mr_factor() returns the smallest number of MRs needed to
move a particular number of pages. svcrdma currently asks for the
number of MRs needed to move RPCSVC_MAXPAGES (a little over one
megabyte), as that is the number of pages in the largest r/wsize
the server supports.

This call assumes that the client's NIC can bundle a full one
megabyte payload in a single rdma_segment. In fact, most NICs cannot
handle a full megabyte with a single rkey / rdma_segment. Clients
will typically split even a single Read chunk into many segments.

The server needs one MR to read each rdma_segment in a Read chunk,
and thus each one needs an rw_ctx.

svcrdma has been vastly underestimating the number of rw_ctxs needed
to handle 64 RPC requests with large Read chunks using small
rdma_segments.

Unfortunately there doesn't seem to be a good way to estimate this
number without knowing the client NIC's capabilities. Even then, the
client RPC/RDMA implementation is still free to split a chunk into
smaller segments (for example, it might be using physical
registration, which needs an rdma_segment per page).

The best we can do for now is choose a number that will guarantee
forward progress in the worst case (one page per segment).

At some later point, we could add some mechanisms to make this much
less of a problem:
- Add a core API to add more rw_ctxs to an already-established QP
- svcrdma could treat rw_ctx exhaustion as a temporary error and
  try again
- Limit the number of Reads in flight

Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/svc_rdma_transport.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 790841864153..0ceb2817ca4d 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -422,8 +422,12 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 		newxprt->sc_max_bc_requests = 2;
 	}

-	ctxts = rdma_rw_mr_factor(dev, newxprt->sc_port_num, RPCSVC_MAXPAGES);
-	ctxts *= newxprt->sc_max_requests;
+	/* Arbitrarily estimate the number of rw_ctxs needed for
+	 * this transport. This is enough rw_ctxs to make forward
+	 * progress even if the client is using one rkey per page
+	 * in each Read chunk.
+	 */
+	ctxts = 3 * RPCSVC_MAXPAGES;

 	sq_depth = newxprt->sc_max_requests + newxprt->sc_max_bc_requests + 1;
 	if (sq_depth > dev->attrs.max_qp_wr)
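
For a sense of scale (assuming the common definition of
RPCSVC_MAXPAGES as the one-megabyte maximum payload in 4 KB pages
plus a couple of extra pages, i.e. about 258): the new estimate
reserves roughly 774 rw_ctxs per transport, enough for three
worst-case Read chunks in flight even at one rdma_segment per page.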