From patchwork Mon Nov 13 13:43:05 2023
Subject: [PATCH v1 1/7] svcrdma: Eliminate allocation of recv_ctxt objects in
 backchannel
From: Chuck Lever
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: tom@talpey.com
Date: Mon, 13 Nov 2023 08:43:05 -0500
Message-ID: <169988298547.6417.6947965014611566854.stgit@bazille.1015granger.net>
In-Reply-To: <169988267843.6417.17927133323277523958.stgit@bazille.1015granger.net>

From: Chuck Lever

The svc_rdma_recv_ctxt free list uses a lockless list to avoid the
need for a spin lock in the fast path. However, llist_del_first(),
which is used by svc_rdma_recv_ctxt_get(), must be serialized when
more than one consumer can remove entries from the list
concurrently.

I mistakenly thought that svc_rdma_refresh_recvs() was the only
caller of svc_rdma_recv_ctxt_get(), and thus that explicit
serialization was unnecessary. But there is another caller,
svc_rdma_bc_sendto(), and the two are not serialized against each
other. I haven't seen ill effects that I could directly ascribe to
the missing serialization; this is an observation based on code
audit.

When DMA-mapping before sending a Reply, the passed-in struct
svc_rdma_recv_ctxt is used only for its Write and Reply PCLs, and
those are always empty in the backchannel case. So, instead of
passing a full svc_rdma_recv_ctxt object to
svc_rdma_map_reply_msg(), pass in just the Write and Reply PCLs.

This change makes it unnecessary for the backchannel to acquire a
dummy svc_rdma_recv_ctxt object when sending an RPC Call, so the
need to serialize access to the svc_rdma_recv_ctxt free list is
avoided entirely.
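For background, llist_add() is safe from any number of unserialized
producers, but concurrent llist_del_first() callers must exclude one
another. A minimal sketch of that rule (illustrative names, not the
svcrdma code):

	/* Producers: llist_add() needs no lock, even with many callers. */
	static void put_entry(struct llist_head *list,
			      struct llist_node *node)
	{
		llist_add(node, list);
	}

	/* Consumers: concurrent llist_del_first() callers must serialize. */
	static struct llist_node *get_entry(struct llist_head *list,
					    spinlock_t *consumer_lock)
	{
		struct llist_node *node;

		spin_lock(consumer_lock);
		node = llist_del_first(list);
		spin_unlock(consumer_lock);
		return node;
	}

With this patch applied, svc_rdma_refresh_recvs() is the only
remaining consumer of the free list, so no such consumer lock is
needed.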
Signed-off-by: Chuck Lever
---
 include/linux/sunrpc/svc_rdma.h            |  3 ++-
 net/sunrpc/xprtrdma/svc_rdma_backchannel.c | 11 ++++------
 net/sunrpc/xprtrdma/svc_rdma_sendto.c      | 31 +++++++++++++++-------------
 3 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
index a5ee0af2a310..4ac32895a058 100644
--- a/include/linux/sunrpc/svc_rdma.h
+++ b/include/linux/sunrpc/svc_rdma.h
@@ -200,7 +200,8 @@ extern int svc_rdma_send(struct svcxprt_rdma *rdma,
 			 struct svc_rdma_send_ctxt *ctxt);
 extern int svc_rdma_map_reply_msg(struct svcxprt_rdma *rdma,
 				  struct svc_rdma_send_ctxt *sctxt,
-				  const struct svc_rdma_recv_ctxt *rctxt,
+				  const struct svc_rdma_pcl *write_pcl,
+				  const struct svc_rdma_pcl *reply_pcl,
 				  const struct xdr_buf *xdr);
 extern void svc_rdma_send_error_msg(struct svcxprt_rdma *rdma,
 				    struct svc_rdma_send_ctxt *sctxt,
diff --git a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
index 7420a2c990c7..c9be6778643b 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
@@ -76,15 +76,12 @@ static int svc_rdma_bc_sendto(struct svcxprt_rdma *rdma,
 			      struct rpc_rqst *rqst,
 			      struct svc_rdma_send_ctxt *sctxt)
 {
-	struct svc_rdma_recv_ctxt *rctxt;
+	struct svc_rdma_pcl empty_pcl;
 	int ret;
 
-	rctxt = svc_rdma_recv_ctxt_get(rdma);
-	if (!rctxt)
-		return -EIO;
-
-	ret = svc_rdma_map_reply_msg(rdma, sctxt, rctxt, &rqst->rq_snd_buf);
-	svc_rdma_recv_ctxt_put(rdma, rctxt);
+	pcl_init(&empty_pcl);
+	ret = svc_rdma_map_reply_msg(rdma, sctxt, &empty_pcl, &empty_pcl,
+				     &rqst->rq_snd_buf);
 	if (ret < 0)
 		return -EIO;
 
diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index c6644cca52c5..45735f74eb86 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -653,7 +653,7 @@ static int svc_rdma_xb_count_sges(const struct xdr_buf *xdr,
  * svc_rdma_pull_up_needed - Determine whether to use pull-up
  * @rdma: controlling transport
  * @sctxt: send_ctxt for the Send WR
- * @rctxt: Write and Reply chunks provided by client
+ * @write_pcl: Write chunk list provided by client
  * @xdr: xdr_buf containing RPC message to transmit
  *
  * Returns:
@@ -662,7 +662,7 @@ static int svc_rdma_xb_count_sges(const struct xdr_buf *xdr,
  */
 static bool svc_rdma_pull_up_needed(const struct svcxprt_rdma *rdma,
 				    const struct svc_rdma_send_ctxt *sctxt,
-				    const struct svc_rdma_recv_ctxt *rctxt,
+				    const struct svc_rdma_pcl *write_pcl,
 				    const struct xdr_buf *xdr)
 {
 	/* Resources needed for the transport header */
@@ -672,7 +672,7 @@ static bool svc_rdma_pull_up_needed(const struct svcxprt_rdma *rdma,
 	};
 	int ret;
 
-	ret = pcl_process_nonpayloads(&rctxt->rc_write_pcl, xdr,
+	ret = pcl_process_nonpayloads(write_pcl, xdr,
 				      svc_rdma_xb_count_sges, &args);
 	if (ret < 0)
 		return false;
@@ -728,7 +728,7 @@ static int svc_rdma_xb_linearize(const struct xdr_buf *xdr,
  * svc_rdma_pull_up_reply_msg - Copy Reply into a single buffer
  * @rdma: controlling transport
  * @sctxt: send_ctxt for the Send WR; xprt hdr is already prepared
- * @rctxt: Write and Reply chunks provided by client
+ * @write_pcl: Write chunk list provided by client
  * @xdr: prepared xdr_buf containing RPC message
 *
 * The device is not capable of sending the reply directly.
@@ -743,7 +743,7 @@ static int svc_rdma_xb_linearize(const struct xdr_buf *xdr,
  */
 static int svc_rdma_pull_up_reply_msg(const struct svcxprt_rdma *rdma,
 				      struct svc_rdma_send_ctxt *sctxt,
-				      const struct svc_rdma_recv_ctxt *rctxt,
+				      const struct svc_rdma_pcl *write_pcl,
 				      const struct xdr_buf *xdr)
 {
 	struct svc_rdma_pullup_data args = {
@@ -751,7 +751,7 @@ static int svc_rdma_pull_up_reply_msg(const struct svcxprt_rdma *rdma,
 	};
 	int ret;
 
-	ret = pcl_process_nonpayloads(&rctxt->rc_write_pcl, xdr,
+	ret = pcl_process_nonpayloads(write_pcl, xdr,
 				      svc_rdma_xb_linearize, &args);
 	if (ret < 0)
 		return ret;
@@ -764,7 +764,8 @@ static int svc_rdma_pull_up_reply_msg(const struct svcxprt_rdma *rdma,
 /* svc_rdma_map_reply_msg - DMA map the buffer holding RPC message
  * @rdma: controlling transport
  * @sctxt: send_ctxt for the Send WR
- * @rctxt: Write and Reply chunks provided by client
+ * @write_pcl: Write chunk list provided by client
+ * @reply_pcl: Reply chunk provided by client
  * @xdr: prepared xdr_buf containing RPC message
  *
  * Returns:
@@ -776,7 +777,8 @@ static int svc_rdma_pull_up_reply_msg(const struct svcxprt_rdma *rdma,
  */
 int svc_rdma_map_reply_msg(struct svcxprt_rdma *rdma,
 			   struct svc_rdma_send_ctxt *sctxt,
-			   const struct svc_rdma_recv_ctxt *rctxt,
+			   const struct svc_rdma_pcl *write_pcl,
+			   const struct svc_rdma_pcl *reply_pcl,
 			   const struct xdr_buf *xdr)
 {
 	struct svc_rdma_map_data args = {
@@ -789,18 +791,18 @@ int svc_rdma_map_reply_msg(struct svcxprt_rdma *rdma,
 	sctxt->sc_sges[0].length = sctxt->sc_hdrbuf.len;
 
 	/* If there is a Reply chunk, nothing follows the transport
-	 * header, and we're done here.
+	 * header, so there is nothing to map.
 	 */
-	if (!pcl_is_empty(&rctxt->rc_reply_pcl))
+	if (!pcl_is_empty(reply_pcl))
 		return 0;
 
 	/* For pull-up, svc_rdma_send() will sync the transport header.
 	 * No additional DMA mapping is necessary.
 	 */
-	if (svc_rdma_pull_up_needed(rdma, sctxt, rctxt, xdr))
-		return svc_rdma_pull_up_reply_msg(rdma, sctxt, rctxt, xdr);
+	if (svc_rdma_pull_up_needed(rdma, sctxt, write_pcl, xdr))
+		return svc_rdma_pull_up_reply_msg(rdma, sctxt, write_pcl, xdr);
 
-	return pcl_process_nonpayloads(&rctxt->rc_write_pcl, xdr,
+	return pcl_process_nonpayloads(write_pcl, xdr,
 				       svc_rdma_xb_dma_map, &args);
 }
 
@@ -848,7 +850,8 @@ static int svc_rdma_send_reply_msg(struct svcxprt_rdma *rdma,
 {
 	int ret;
 
-	ret = svc_rdma_map_reply_msg(rdma, sctxt, rctxt, &rqstp->rq_res);
+	ret = svc_rdma_map_reply_msg(rdma, sctxt, &rctxt->rc_write_pcl,
+				     &rctxt->rc_reply_pcl, &rqstp->rq_res);
 	if (ret < 0)
 		return ret;

From patchwork Mon Nov 13 13:43:12 2023
Subject: [PATCH v1 2/7] svcrdma: Clean up use of rdma->sc_pd->device in
 Receive paths
From: Chuck Lever
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: tom@talpey.com
Date: Mon, 13 Nov 2023 08:43:12 -0500
Message-ID: <169988299199.6417.2235443204829287810.stgit@bazille.1015granger.net>
In-Reply-To: <169988267843.6417.17927133323277523958.stgit@bazille.1015granger.net>

From: Chuck Lever

I can't think of a reason why svcrdma is using the PD's device.
Most other consumers of the IB DMA API use the ib_device pointer
from the connection's rdma_cm_id. I don't believe there's any
functional difference between the two, but it is a little confusing
to see some uses of rdma_cm_id->device and some of ib_pd->device.
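The convention this patch adopts, in sketch form (an illustrative
fragment, not the complete function):

	struct ib_device *device = rdma->sc_cm_id->device;

	addr = ib_dma_map_single(device, buffer, rdma->sc_max_req_size,
				 DMA_FROM_DEVICE);
	if (ib_dma_mapping_error(device, addr))
		goto fail2;

Note that svc_rdma_accept() allocates sc_pd with ib_alloc_pd() on
sc_cm_id->device, so the two pointers name the same ib_device; this
change is about consistency of expression, not behavior.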
Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/svc_rdma_recvfrom.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
index 3b05f90a3e50..69cc21654365 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
@@ -125,20 +125,21 @@ static void svc_rdma_recv_cid_init(struct svcxprt_rdma *rdma,
 static struct svc_rdma_recv_ctxt *
 svc_rdma_recv_ctxt_alloc(struct svcxprt_rdma *rdma)
 {
-	int node = ibdev_to_node(rdma->sc_cm_id->device);
+	struct ib_device *device = rdma->sc_cm_id->device;
 	struct svc_rdma_recv_ctxt *ctxt;
 	dma_addr_t addr;
 	void *buffer;
 
-	ctxt = kmalloc_node(sizeof(*ctxt), GFP_KERNEL, node);
+	ctxt = kmalloc_node(sizeof(*ctxt), GFP_KERNEL, ibdev_to_node(device));
 	if (!ctxt)
 		goto fail0;
-	buffer = kmalloc_node(rdma->sc_max_req_size, GFP_KERNEL, node);
+	buffer = kmalloc_node(rdma->sc_max_req_size, GFP_KERNEL,
+			      ibdev_to_node(device));
 	if (!buffer)
 		goto fail1;
-	addr = ib_dma_map_single(rdma->sc_pd->device, buffer,
-				 rdma->sc_max_req_size, DMA_FROM_DEVICE);
-	if (ib_dma_mapping_error(rdma->sc_pd->device, addr))
+	addr = ib_dma_map_single(device, buffer, rdma->sc_max_req_size,
+				 DMA_FROM_DEVICE);
+	if (ib_dma_mapping_error(device, addr))
 		goto fail2;
 
 	svc_rdma_recv_cid_init(rdma, &ctxt->rc_cid);
@@ -169,7 +170,7 @@ svc_rdma_recv_ctxt_alloc(struct svcxprt_rdma *rdma)
 static void svc_rdma_recv_ctxt_destroy(struct svcxprt_rdma *rdma,
 				       struct svc_rdma_recv_ctxt *ctxt)
 {
-	ib_dma_unmap_single(rdma->sc_pd->device, ctxt->rc_recv_sge.addr,
+	ib_dma_unmap_single(rdma->sc_cm_id->device, ctxt->rc_recv_sge.addr,
 			    ctxt->rc_recv_sge.length, DMA_FROM_DEVICE);
 	kfree(ctxt->rc_recv_buf);
 	kfree(ctxt);
@@ -814,7 +815,7 @@ int svc_rdma_recvfrom(struct svc_rqst *rqstp)
 		return 0;
 
 	percpu_counter_inc(&svcrdma_stat_recv);
-	ib_dma_sync_single_for_cpu(rdma_xprt->sc_pd->device,
+	ib_dma_sync_single_for_cpu(rdma_xprt->sc_cm_id->device,
 				   ctxt->rc_recv_sge.addr, ctxt->rc_byte_len,
 				   DMA_FROM_DEVICE);
 	svc_rdma_build_arg_xdr(rqstp, ctxt);

From patchwork Mon Nov 13 13:43:18 2023
Subject: [PATCH v1 3/7] svcrdma: Pre-allocate svc_rdma_recv_ctxt objects
From: Chuck Lever
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: tom@talpey.com
Date: Mon, 13 Nov 2023 08:43:18 -0500
Message-ID: <169988299839.6417.8336988845874237584.stgit@bazille.1015granger.net>
In-Reply-To: <169988267843.6417.17927133323277523958.stgit@bazille.1015granger.net>
From: Chuck Lever

The original reason for allocating svc_rdma_recv_ctxt objects during
Receive completion was to ensure that the objects were allocated on
the NUMA node closest to the underlying IB device. Since commit
c5d68d25bd6b ("svcrdma: Clean up allocation of svc_rdma_recv_ctxt"),
however, the device's favored node is passed explicitly to the
memory allocator.

To enable switching Receive completion to soft IRQ context, move
memory allocation out of completion handling, since it can be costly
and it can sleep. A limited number of objects is now allocated at
"accept" time; see the sizing example below.
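To make the sizing concrete (the values here are illustrative, not
defaults): with sc_max_requests = 64 credits and sc_recv_batch = 8,
the transport pre-allocates (64 * 2) + 8 = 136 recv_ctxts -- one per
posted Receive and one per RPC being processed for each credit, plus
one Receive batch's worth of slack.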
Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/svc_rdma_recvfrom.c | 32 ++++++++++++++++++++-----------
 1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
index 69cc21654365..6191ce20f89e 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
@@ -205,18 +205,11 @@ struct svc_rdma_recv_ctxt *svc_rdma_recv_ctxt_get(struct svcxprt_rdma *rdma)
 
 	node = llist_del_first(&rdma->sc_recv_ctxts);
 	if (!node)
-		goto out_empty;
-	ctxt = llist_entry(node, struct svc_rdma_recv_ctxt, rc_node);
+		return NULL;
 
-out:
+	ctxt = llist_entry(node, struct svc_rdma_recv_ctxt, rc_node);
 	ctxt->rc_page_count = 0;
 	return ctxt;
-
-out_empty:
-	ctxt = svc_rdma_recv_ctxt_alloc(rdma);
-	if (!ctxt)
-		return NULL;
-	goto out;
 }
 
 /**
@@ -278,7 +271,7 @@ static bool svc_rdma_refresh_recvs(struct svcxprt_rdma *rdma,
 		rdma->sc_pending_recvs++;
 	}
 	if (!recv_chain)
-		return false;
+		return true;
 
 	ret = ib_post_recv(rdma->sc_qp, recv_chain, &bad_wr);
 	if (ret)
@@ -302,10 +295,27 @@
  * svc_rdma_post_recvs - Post initial set of Recv WRs
  * @rdma: fresh svcxprt_rdma
  *
- * Returns true if successful, otherwise false.
+ * Return values:
+ *   %true: Receive Queue initialization successful
+ *   %false: memory allocation or DMA error
  */
 bool svc_rdma_post_recvs(struct svcxprt_rdma *rdma)
 {
+	unsigned int total;
+
+	/* For each credit, allocate enough recv_ctxts for one
+	 * posted Receive and one RPC in process.
+	 */
+	total = (rdma->sc_max_requests * 2) + rdma->sc_recv_batch;
+	while (total--) {
+		struct svc_rdma_recv_ctxt *ctxt;
+
+		ctxt = svc_rdma_recv_ctxt_alloc(rdma);
+		if (!ctxt)
+			return false;
+		llist_add(&ctxt->rc_node, &rdma->sc_recv_ctxts);
+	}
+
 	return svc_rdma_refresh_recvs(rdma, rdma->sc_max_requests);
 }

From patchwork Mon Nov 13 13:43:24 2023
Subject: [PATCH v1 4/7] svcrdma: Switch Receive CQ to soft IRQ
From: Chuck Lever
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: tom@talpey.com
Date: Mon, 13 Nov 2023 08:43:24 -0500
Message-ID: <169988300485.6417.15944939065289556526.stgit@bazille.1015granger.net>
In-Reply-To: <169988267843.6417.17927133323277523958.stgit@bazille.1015granger.net>

From: Chuck Lever

The original rationale for handling Receive completions in process
context was to eliminate the use of a bottom-half-disabled spin
lock. That was intended to simplify the assumptions made in the
Receive code paths and to reduce lock contention.

However, handling a completion in soft IRQ context adds considerably
less average latency than the workqueue approach: the completion no
longer has to be scheduled onto a workqueue, and that saving
outweighs the cost of taking spin locks that disable bottom halves.

Now that Receive contexts are pre-allocated and the RPC service
thread scheduler is constant time, moving Receive completion
processing to soft IRQ is safe and simple.
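The locking consequence is the reason for the spin_lock_bh()
conversions below: once the completion handler can run in soft IRQ
context, process-context code that shares data with it must disable
bottom halves while holding the shared lock, or the handler could
interrupt a lock holder on the same CPU and deadlock. A minimal
sketch of the pattern (hypothetical names, not the svcrdma code):

	struct my_item {
		struct list_head list;
	};

	static LIST_HEAD(dto_queue);
	static DEFINE_SPINLOCK(dto_lock);

	/* Process context (svc thread): block soft IRQs while
	 * holding the lock.
	 */
	static struct my_item *dto_dequeue(void)
	{
		struct my_item *item;

		spin_lock_bh(&dto_lock);
		item = list_first_entry_or_null(&dto_queue,
						struct my_item, list);
		if (item)
			list_del(&item->list);
		spin_unlock_bh(&dto_lock);
		return item;
	}

	/* Soft IRQ context (CQ handler): a plain spin_lock suffices,
	 * since soft IRQs do not nest on the same CPU.
	 */
	static void dto_enqueue(struct my_item *item)
	{
		spin_lock(&dto_lock);
		list_add_tail(&item->list, &dto_queue);
		spin_unlock(&dto_lock);
	}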
Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/svc_rdma_recvfrom.c  | 4 ++--
 net/sunrpc/xprtrdma/svc_rdma_transport.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
index 6191ce20f89e..4ee219924433 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
@@ -810,14 +810,14 @@ int svc_rdma_recvfrom(struct svc_rqst *rqstp)
 	rqstp->rq_xprt_ctxt = NULL;
 
 	ctxt = NULL;
-	spin_lock(&rdma_xprt->sc_rq_dto_lock);
+	spin_lock_bh(&rdma_xprt->sc_rq_dto_lock);
 	ctxt = svc_rdma_next_recv_ctxt(&rdma_xprt->sc_rq_dto_q);
 	if (ctxt)
 		list_del(&ctxt->rc_list);
 	else
 		/* No new incoming requests, terminate the loop */
 		clear_bit(XPT_DATA, &xprt->xpt_flags);
-	spin_unlock(&rdma_xprt->sc_rq_dto_lock);
+	spin_unlock_bh(&rdma_xprt->sc_rq_dto_lock);
 
 	/* Unblock the transport for the next receive */
 	svc_xprt_received(xprt);
diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 2abd895046ee..7bd50efeeb4e 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -433,8 +433,8 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 				       IB_POLL_WORKQUEUE);
 	if (IS_ERR(newxprt->sc_sq_cq))
 		goto errout;
-	newxprt->sc_rq_cq =
-		ib_alloc_cq_any(dev, newxprt, rq_depth, IB_POLL_WORKQUEUE);
+	newxprt->sc_rq_cq = ib_alloc_cq_any(dev, newxprt, rq_depth,
+					    IB_POLL_SOFTIRQ);
 	if (IS_ERR(newxprt->sc_rq_cq))
 		goto errout;

From patchwork Mon Nov 13 13:43:31 2023
Subject: [PATCH v1 5/7] svcrdma: Clean up use of rdma->sc_pd->device
From: Chuck Lever
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: tom@talpey.com
Date: Mon, 13 Nov 2023 08:43:31 -0500
Message-ID: <169988301128.6417.2640827711073808511.stgit@bazille.1015granger.net>
In-Reply-To: <169988267843.6417.17927133323277523958.stgit@bazille.1015granger.net>
From: Chuck Lever

I can't think of a reason why svcrdma is using the PD's device.
Most other consumers of the IB DMA API use the ib_device pointer
from the connection's rdma_cm_id. I don't think there's any
functional difference between the two, but it is a little confusing
to see some uses of rdma_cm_id->device and some of ib_pd->device.
This makes the same clean-up in the Send paths that patch 2 made in
the Receive paths.

Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/svc_rdma_sendto.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index 45735f74eb86..e27345af6289 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -123,22 +123,23 @@ static void svc_rdma_send_cid_init(struct svcxprt_rdma *rdma,
 static struct svc_rdma_send_ctxt *
 svc_rdma_send_ctxt_alloc(struct svcxprt_rdma *rdma)
 {
-	int node = ibdev_to_node(rdma->sc_cm_id->device);
+	struct ib_device *device = rdma->sc_cm_id->device;
 	struct svc_rdma_send_ctxt *ctxt;
 	dma_addr_t addr;
 	void *buffer;
 	int i;
 
 	ctxt = kmalloc_node(struct_size(ctxt, sc_sges, rdma->sc_max_send_sges),
-			    GFP_KERNEL, node);
+			    GFP_KERNEL, ibdev_to_node(device));
 	if (!ctxt)
 		goto fail0;
-	buffer = kmalloc_node(rdma->sc_max_req_size, GFP_KERNEL, node);
+	buffer = kmalloc_node(rdma->sc_max_req_size, GFP_KERNEL,
+			      ibdev_to_node(device));
 	if (!buffer)
 		goto fail1;
-	addr = ib_dma_map_single(rdma->sc_pd->device, buffer,
-				 rdma->sc_max_req_size, DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(rdma->sc_pd->device, addr))
+	addr = ib_dma_map_single(device, buffer, rdma->sc_max_req_size,
+				 DMA_TO_DEVICE);
+	if (ib_dma_mapping_error(device, addr))
 		goto fail2;
 
 	svc_rdma_send_cid_init(rdma, &ctxt->sc_cid);
@@ -172,15 +173,14 @@ svc_rdma_send_ctxt_alloc(struct svcxprt_rdma *rdma)
  */
 void svc_rdma_send_ctxts_destroy(struct svcxprt_rdma *rdma)
 {
+	struct ib_device *device = rdma->sc_cm_id->device;
 	struct svc_rdma_send_ctxt *ctxt;
 	struct llist_node *node;
 
 	while ((node = llist_del_first(&rdma->sc_send_ctxts)) != NULL) {
 		ctxt = llist_entry(node, struct svc_rdma_send_ctxt, sc_node);
-		ib_dma_unmap_single(rdma->sc_pd->device,
-				    ctxt->sc_sges[0].addr,
-				    rdma->sc_max_req_size,
-				    DMA_TO_DEVICE);
+		ib_dma_unmap_single(device, ctxt->sc_sges[0].addr,
+				    rdma->sc_max_req_size, DMA_TO_DEVICE);
 		kfree(ctxt->sc_xprt_buf);
 		kfree(ctxt);
 	}
@@ -318,7 +318,7 @@ int svc_rdma_send(struct svcxprt_rdma *rdma, struct svc_rdma_send_ctxt *ctxt)
 	might_sleep();
 
 	/* Sync the transport header buffer */
-	ib_dma_sync_single_for_device(rdma->sc_pd->device,
+	ib_dma_sync_single_for_device(rdma->sc_cm_id->device,
 				      wr->sg_list[0].addr, wr->sg_list[0].length,
 				      DMA_TO_DEVICE);

From patchwork Mon Nov 13 13:43:37 2023
Subject: [PATCH v1 6/7] svcrdma: Move the svcxprt_rdma::sc_pd field
From: Chuck Lever
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: tom@talpey.com
Date: Mon, 13 Nov 2023 08:43:37 -0500
Message-ID: <169988301765.6417.18396957653055937794.stgit@bazille.1015granger.net>
In-Reply-To: <169988267843.6417.17927133323277523958.stgit@bazille.1015granger.net>

From: Chuck Lever

The sc_pd field is now used only during transport construction and
teardown, so move it out of the hotter cache lines in struct
svcxprt_rdma.
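The idea, as a toy layout (illustrative only, not the real
structure): fields touched on every I/O should sit together, and
fields touched only at setup or teardown should be pushed out of
those cache lines.

	struct toy_xprt {
		/* hot: accessed on every Send and Receive */
		spinlock_t		send_lock;
		struct llist_head	send_ctxts;

		/* cold: accessed only at accept/teardown */
		struct ib_pd		*pd;
	};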
Signed-off-by: Chuck Lever
---
 include/linux/sunrpc/svc_rdma.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
index 4ac32895a058..2a22629b6fd0 100644
--- a/include/linux/sunrpc/svc_rdma.h
+++ b/include/linux/sunrpc/svc_rdma.h
@@ -87,8 +87,6 @@ struct svcxprt_rdma {
 	int		     sc_max_req_size;	/* Size of each RQ WR buf */
 	u8		     sc_port_num;
 
-	struct ib_pd *sc_pd;
-
 	spinlock_t	     sc_send_lock;
 	struct llist_head    sc_send_ctxts;
 	spinlock_t	     sc_rw_ctxt_lock;
@@ -101,6 +99,7 @@ struct svcxprt_rdma {
 	struct ib_qp         *sc_qp;
 	struct ib_cq         *sc_rq_cq;
 	struct ib_cq         *sc_sq_cq;
+	struct ib_pd         *sc_pd;
 
 	spinlock_t	     sc_lock;		/* transport lock */

From patchwork Mon Nov 13 13:43:44 2023
Subject: [PATCH v1 7/7] svcrdma: Move Send CQ to SOFTIRQ context
From: Chuck Lever
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: tom@talpey.com
Date: Mon, 13 Nov 2023 08:43:44 -0500
Message-ID: <169988302403.6417.12005628146945923629.stgit@bazille.1015granger.net>
In-Reply-To: <169988267843.6417.17927133323277523958.stgit@bazille.1015granger.net>
From: Chuck Lever

I've noticed that using the ib-comp-wq workqueue delays Send
Completions anywhere between 5us and 3 or more milliseconds. For
RDMA Write and Send completions, this is not a terribly significant
issue, since those completions only release resources; they do not
contribute to RPC round-trip time. However, RDMA Read completions
also arrive on the Send CQ, and delaying them postpones the start of
NFS WRITE processing, adding round-trip latency.

For small to moderate NFS WRITEs, soft IRQ completion means up to
5us better latency per NFS WRITE -- a significant portion of the
average RTT for small NFS WRITEs, which is 40-75us.
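Putting numbers on that claim: saving 5us on a 40us round trip is a
12.5% latency reduction, and still better than 6% at the 75us end of
the range; the multi-millisecond worst-case workqueue delays drop
out of the Read-completion path entirely.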
Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/svc_rdma_rw.c        | 4 ++--
 net/sunrpc/xprtrdma/svc_rdma_sendto.c    | 6 +++---
 net/sunrpc/xprtrdma/svc_rdma_transport.c | 2 +-
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_rw.c b/net/sunrpc/xprtrdma/svc_rdma_rw.c
index e460e25a1d6d..ada164c027bc 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_rw.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_rw.c
@@ -56,9 +56,9 @@ svc_rdma_get_rw_ctxt(struct svcxprt_rdma *rdma, unsigned int sges)
 	struct svc_rdma_rw_ctxt *ctxt;
 	struct llist_node *node;
 
-	spin_lock(&rdma->sc_rw_ctxt_lock);
+	spin_lock_bh(&rdma->sc_rw_ctxt_lock);
 	node = llist_del_first(&rdma->sc_rw_ctxts);
-	spin_unlock(&rdma->sc_rw_ctxt_lock);
+	spin_unlock_bh(&rdma->sc_rw_ctxt_lock);
 	if (node) {
 		ctxt = llist_entry(node, struct svc_rdma_rw_ctxt, rw_node);
 	} else {
diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index e27345af6289..49a9f409bc8e 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -198,12 +198,13 @@ struct svc_rdma_send_ctxt *svc_rdma_send_ctxt_get(struct svcxprt_rdma *rdma)
 	struct svc_rdma_send_ctxt *ctxt;
 	struct llist_node *node;
 
-	spin_lock(&rdma->sc_send_lock);
+	spin_lock_bh(&rdma->sc_send_lock);
 	node = llist_del_first(&rdma->sc_send_ctxts);
+	spin_unlock_bh(&rdma->sc_send_lock);
 	if (!node)
 		goto out_empty;
+
 	ctxt = llist_entry(node, struct svc_rdma_send_ctxt, sc_node);
-	spin_unlock(&rdma->sc_send_lock);
 
 out:
 	rpcrdma_set_xdrlen(&ctxt->sc_hdrbuf, 0);
@@ -216,7 +217,6 @@ struct svc_rdma_send_ctxt *svc_rdma_send_ctxt_get(struct svcxprt_rdma *rdma)
 	return ctxt;
 
 out_empty:
-	spin_unlock(&rdma->sc_send_lock);
 	ctxt = svc_rdma_send_ctxt_alloc(rdma);
 	if (!ctxt)
 		return NULL;
diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 7bd50efeeb4e..8de32927cd7d 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -430,7 +430,7 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 		goto errout;
 	}
 	newxprt->sc_sq_cq = ib_alloc_cq_any(dev, newxprt, newxprt->sc_sq_depth,
-					    IB_POLL_WORKQUEUE);
+					    IB_POLL_SOFTIRQ);
 	if (IS_ERR(newxprt->sc_sq_cq))
 		goto errout;
 	newxprt->sc_rq_cq = ib_alloc_cq_any(dev, newxprt, rq_depth,