From: Chuck Lever
Subject: [PATCH v4 04/24] xprtrdma: Remove BOUNCEBUFFERS memory registration mode
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Date: Wed, 21 May 2014 20:54:50 -0400
Message-ID: <20140522005450.27190.36954.stgit@manet.1015granger.net>
In-Reply-To: <20140522004505.27190.58897.stgit@manet.1015granger.net>
References: <20140522004505.27190.58897.stgit@manet.1015granger.net>

Clean up: This memory registration mode is slow and was never meant
for use in production environments. Remove it to reduce implementation
complexity.
Signed-off-by: Chuck Lever
Tested-by: Steve Wise
---
 net/sunrpc/xprtrdma/rpc_rdma.c  |   11 -----------
 net/sunrpc/xprtrdma/transport.c |   13 -------------
 net/sunrpc/xprtrdma/verbs.c     |    5 +----
 3 files changed, 1 insertions(+), 28 deletions(-)

diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index c296468..02b2941 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -77,9 +77,6 @@ static const char transfertypes[][12] = {
  * Prepare the passed-in xdr_buf into representation as RPC/RDMA chunk
  * elements. Segments are then coalesced when registered, if possible
  * within the selected memreg mode.
- *
- * Note, this routine is never called if the connection's memory
- * registration strategy is 0 (bounce buffers).
  */
 
 static int
@@ -439,14 +436,6 @@ rpcrdma_marshal_req(struct rpc_rqst *rqst)
 		wtype = rpcrdma_noch;
 	BUG_ON(rtype != rpcrdma_noch && wtype != rpcrdma_noch);
 
-	if (r_xprt->rx_ia.ri_memreg_strategy == RPCRDMA_BOUNCEBUFFERS &&
-	    (rtype != rpcrdma_noch || wtype != rpcrdma_noch)) {
-		/* forced to "pure inline"? */
-		dprintk("RPC: %s: too much data (%d/%d) for inline\n",
-			__func__, rqst->rq_rcv_buf.len, rqst->rq_snd_buf.len);
-		return -1;
-	}
-
 	hdrlen = 28; /*sizeof *headerp;*/
 	padlen = 0;
 
diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index 1eb9c46..8c5035a 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -503,18 +503,6 @@ xprt_rdma_allocate(struct rpc_task *task, size_t size)
 	 * If the allocation or registration fails, the RPC framework
 	 * will (doggedly) retry.
 	 */
-	if (rpcx_to_rdmax(xprt)->rx_ia.ri_memreg_strategy ==
-			RPCRDMA_BOUNCEBUFFERS) {
-		/* forced to "pure inline" */
-		dprintk("RPC: %s: too much data (%zd) for inline "
-			"(r/w max %d/%d)\n", __func__, size,
-			rpcx_to_rdmad(xprt).inline_rsize,
-			rpcx_to_rdmad(xprt).inline_wsize);
-		size = req->rl_size;
-		rpc_exit(task, -EIO);		/* fail the operation */
-		rpcx_to_rdmax(xprt)->rx_stats.failed_marshal_count++;
-		goto out;
-	}
 	if (task->tk_flags & RPC_TASK_SWAPPER)
 		nreq = kmalloc(sizeof *req + size, GFP_ATOMIC);
 	else
@@ -543,7 +531,6 @@ xprt_rdma_allocate(struct rpc_task *task, size_t size)
 		req = nreq;
 	}
 	dprintk("RPC: %s: size %zd, request 0x%p\n", __func__, size, req);
-out:
 	req->rl_connect_cookie = 0;	/* our reserved value */
 	return req->rl_xdr_buf;
 
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 9cb88f3..4a4e4ea 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -557,7 +557,6 @@ rpcrdma_ia_open(struct rpcrdma_xprt *xprt, struct sockaddr *addr, int memreg)
 	 * adapter.
 	 */
 	switch (memreg) {
-	case RPCRDMA_BOUNCEBUFFERS:
 	case RPCRDMA_REGISTER:
 	case RPCRDMA_FRMR:
 		break;
@@ -778,9 +777,7 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
 
 	/* Client offers RDMA Read but does not initiate */
 	ep->rep_remote_cma.initiator_depth = 0;
-	if (ia->ri_memreg_strategy == RPCRDMA_BOUNCEBUFFERS)
-		ep->rep_remote_cma.responder_resources = 0;
-	else if (devattr.max_qp_rd_atom > 32)	/* arbitrary but <= 255 */
+	if (devattr.max_qp_rd_atom > 32)	/* arbitrary but <= 255 */
 		ep->rep_remote_cma.responder_resources = 32;
 	else
 		ep->rep_remote_cma.responder_resources = devattr.max_qp_rd_atom;