From patchwork Mon Jun 23 22:40:15 2014
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 4405301
Subject: [PATCH v1 09/13] xprtrdma: Refactor rpcrdma_buffer_put()
From: Chuck Lever
To: linux-rdma@vger.kernel.org, linux-nfs@vger.kernel.org
Date: Mon, 23 Jun 2014 18:40:15 -0400
Message-ID: <20140623224015.1634.35375.stgit@manet.1015granger.net>
In-Reply-To: <20140623223201.1634.83888.stgit@manet.1015granger.net>
References: <20140623223201.1634.83888.stgit@manet.1015granger.net>
User-Agent: StGit/0.17.1-3-g7d0f

Split out the code that manages the rb_mws list.

A little extra error checking is introduced in the code path that
grabs MWs for the next RPC request. If rb_mws were ever to become
empty, the list_entry() would cause a NULL pointer dereference.
Instead, rpcrdma_buffer_get() now returns NULL, which causes
call_allocate() to delay and try again.
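As an aside for reviewers less familiar with this area, the sketch
below is a minimal, stand-alone user-space illustration of the get/put
pattern the new helpers follow; it is not kernel code. The struct mw,
struct pool, and struct req types, MAX_SEGS, and the helper names are
simplified stand-ins invented for this example, and it ignores the
rb_lock serialization and the reverse-order "spin" reuse policy of the
real rpcrdma_buffer_put_mws().

#include <stdio.h>

#define MAX_SEGS 4

struct mw {
	struct mw *next;		/* link on the pool's free list */
};

struct pool {
	struct mw *free_mws;		/* head of the free list */
};

struct req {
	struct mw *segments[MAX_SEGS];	/* one MW slot per segment */
};

/* Return one MW to the pool and clear the caller's slot. */
static void put_mw(struct pool *pool, struct mw **slot)
{
	if (*slot) {
		(*slot)->next = pool->free_mws;
		pool->free_mws = *slot;
		*slot = NULL;
	}
}

/* Return all of a request's MWs to the pool. */
static void put_mws(struct pool *pool, struct req *req)
{
	int i;

	for (i = 0; i < MAX_SEGS; i++)
		put_mw(pool, &req->segments[i]);
}

/* Grab one MW per segment; on shortage, undo and signal failure. */
static struct req *get_mws(struct pool *pool, struct req *req)
{
	int i;

	for (i = 0; i < MAX_SEGS; i++) {
		if (!pool->free_mws)
			goto out_empty;
		req->segments[i] = pool->free_mws;
		pool->free_mws = pool->free_mws->next;
	}
	return req;

out_empty:
	put_mws(pool, req);	/* release the partial allocation */
	return NULL;		/* caller is expected to retry later */
}

int main(void)
{
	struct mw mws[3];		/* deliberately fewer MWs than MAX_SEGS */
	struct pool pool = { NULL };
	struct req req = { { NULL } };
	int i;

	for (i = 0; i < 3; i++) {
		struct mw *m = &mws[i];
		put_mw(&pool, &m);
	}

	/* Only 3 MWs for 4 segments: get_mws() must fail cleanly. */
	printf("get_mws: %s\n", get_mws(&pool, &req) ? "ok" : "retry later");
	return 0;
}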
Signed-off-by: Chuck Lever
---

 net/sunrpc/xprtrdma/verbs.c     |  105 +++++++++++++++++++++++++++------------
 net/sunrpc/xprtrdma/xprt_rdma.h |    1 +
 2 files changed, 74 insertions(+), 32 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 3efc007..f24f0bf 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -1251,6 +1251,69 @@ rpcrdma_buffer_destroy(struct rpcrdma_buffer *buf)
 	kfree(buf->rb_pool);
 }
 
+static void
+rpcrdma_put_mw_locked(struct rpcrdma_mw *mw)
+{
+	list_add_tail(&mw->mw_list, &mw->mw_pool->rb_mws);
+}
+
+static void
+rpcrdma_buffer_put_mw(struct rpcrdma_mw **mw)
+{
+	rpcrdma_put_mw_locked(*mw);
+	*mw = NULL;
+}
+
+/* Cycle mw's back in reverse order, and "spin" them.
+ * This delays and scrambles reuse as much as possible.
+ */
+static void
+rpcrdma_buffer_put_mws(struct rpcrdma_req *req)
+{
+	struct rpcrdma_mr_seg *seg1 = req->rl_segments;
+	struct rpcrdma_mr_seg *seg = seg1;
+	int i;
+
+	for (i = 1, seg++; i < RPCRDMA_MAX_SEGS; seg++, i++)
+		rpcrdma_buffer_put_mw(&seg->mr_chunk.rl_mw);
+	rpcrdma_buffer_put_mw(&seg1->mr_chunk.rl_mw);
+}
+
+static void
+rpcrdma_send_buffer_put(struct rpcrdma_req *req, struct rpcrdma_buffer *buffers)
+{
+	buffers->rb_send_bufs[--buffers->rb_send_index] = req;
+	req->rl_niovs = 0;
+	if (req->rl_reply) {
+		buffers->rb_recv_bufs[--buffers->rb_recv_index] = req->rl_reply;
+		req->rl_reply->rr_func = NULL;
+		req->rl_reply = NULL;
+	}
+}
+
+static struct rpcrdma_req *
+rpcrdma_buffer_get_mws(struct rpcrdma_req *req, struct rpcrdma_buffer *buffers)
+{
+	struct rpcrdma_mw *r;
+	int i;
+
+	for (i = RPCRDMA_MAX_SEGS - 1; i >= 0; i--) {
+		if (list_empty(&buffers->rb_mws))
+			goto out_empty;
+
+		r = list_entry(buffers->rb_mws.next,
+			       struct rpcrdma_mw, mw_list);
+		list_del(&r->mw_list);
+		r->mw_pool = buffers;
+		req->rl_segments[i].mr_chunk.rl_mw = r;
+	}
+	return req;
+out_empty:
+	rpcrdma_send_buffer_put(req, buffers);
+	rpcrdma_buffer_put_mws(req);
+	return NULL;
+}
+
 /*
  * Get a set of request/reply buffers.
 *
@@ -1263,10 +1326,9 @@ rpcrdma_buffer_destroy(struct rpcrdma_buffer *buf)
 struct rpcrdma_req *
 rpcrdma_buffer_get(struct rpcrdma_buffer *buffers)
 {
+	struct rpcrdma_ia *ia = rdmab_to_ia(buffers);
 	struct rpcrdma_req *req;
 	unsigned long flags;
-	int i;
-	struct rpcrdma_mw *r;
 
 	spin_lock_irqsave(&buffers->rb_lock, flags);
 	if (buffers->rb_send_index == buffers->rb_max_requests) {
@@ -1286,14 +1348,13 @@ rpcrdma_buffer_get(struct rpcrdma_buffer *buffers)
 		buffers->rb_recv_bufs[buffers->rb_recv_index++] = NULL;
 	}
 	buffers->rb_send_bufs[buffers->rb_send_index++] = NULL;
-	if (!list_empty(&buffers->rb_mws)) {
-		i = RPCRDMA_MAX_SEGS - 1;
-		do {
-			r = list_entry(buffers->rb_mws.next,
-					struct rpcrdma_mw, mw_list);
-			list_del(&r->mw_list);
-			req->rl_segments[i].mr_chunk.rl_mw = r;
-		} while (--i >= 0);
+	switch (ia->ri_memreg_strategy) {
+	case RPCRDMA_FRMR:
+	case RPCRDMA_MTHCAFMR:
+		req = rpcrdma_buffer_get_mws(req, buffers);
+		break;
+	default:
+		break;
 	}
 	spin_unlock_irqrestore(&buffers->rb_lock, flags);
 	return req;
@@ -1308,34 +1369,14 @@ rpcrdma_buffer_put(struct rpcrdma_req *req)
 {
 	struct rpcrdma_buffer *buffers = req->rl_buffer;
 	struct rpcrdma_ia *ia = rdmab_to_ia(buffers);
-	int i;
 	unsigned long flags;
 
 	spin_lock_irqsave(&buffers->rb_lock, flags);
-	buffers->rb_send_bufs[--buffers->rb_send_index] = req;
-	req->rl_niovs = 0;
-	if (req->rl_reply) {
-		buffers->rb_recv_bufs[--buffers->rb_recv_index] = req->rl_reply;
-		req->rl_reply->rr_func = NULL;
-		req->rl_reply = NULL;
-	}
+	rpcrdma_send_buffer_put(req, buffers);
 	switch (ia->ri_memreg_strategy) {
 	case RPCRDMA_FRMR:
 	case RPCRDMA_MTHCAFMR:
-		/*
-		 * Cycle mw's back in reverse order, and "spin" them.
-		 * This delays and scrambles reuse as much as possible.
-		 */
-		i = 1;
-		do {
-			struct rpcrdma_mw **mw;
-			mw = &req->rl_segments[i].mr_chunk.rl_mw;
-			list_add_tail(&(*mw)->mw_list, &buffers->rb_mws);
-			*mw = NULL;
-		} while (++i < RPCRDMA_MAX_SEGS);
-		list_add_tail(&req->rl_segments[0].mr_chunk.rl_mw->mw_list,
-				&buffers->rb_mws);
-		req->rl_segments[0].mr_chunk.rl_mw = NULL;
+		rpcrdma_buffer_put_mws(req);
 		break;
 	default:
 		break;
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 6b5d243..b81e5b5 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -175,6 +175,7 @@ struct rpcrdma_mw {
 		struct rpcrdma_frmr	frmr;
 	} r;
 	struct list_head	mw_list;
+	struct rpcrdma_buffer	*mw_pool;
 };
 
 #define RPCRDMA_BIT_FASTREG	(0)