From patchwork Fri Jun 21 13:32:39 2013
X-Patchwork-Submitter: Simon Derr
X-Patchwork-Id: 2763041
From: Simon Derr <simon.derr@bull.net>
To: v9fs-developer@lists.sourceforge.net
Cc: simon.derr@bull.net
Date: Fri, 21 Jun 2013 15:32:39 +0200
Message-Id: <1371821563-5784-7-git-send-email-simon.derr@bull.net>
In-Reply-To: <1371821563-5784-1-git-send-email-simon.derr@bull.net>
References: <1371821563-5784-1-git-send-email-simon.derr@bull.net>
Subject: [V9fs-developer] [PATCH 06/10] 9P/RDMA: Use a semaphore to protect the RQ

The current code keeps track of the number of buffers posted in the RQ
to prevent it from overflowing. But it does so by simply dropping post
requests (and leaking memory in the process). When this happens, too few
buffers are actually posted, and the 9P server soon complains about
'RNR retry counter exceeded' errors.

Instead, use a semaphore, and block until the RQ is ready for another
buffer to be posted.
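[Illustrative note, not part of the patch: the flow-control pattern adopted
here can be modeled in a few lines of userspace C. A counting semaphore
initialized to the queue depth gates each post, and each completion releases
one slot. POSIX semaphores stand in for the kernel's struct semaphore, and
RQ_DEPTH, post_one_recv() and on_recv_completion() are invented stand-ins
for rdma->rq_depth, post_recv() and handle_recv().]

	#include <semaphore.h>
	#include <stdio.h>

	#define RQ_DEPTH 32	/* plays the role of rdma->rq_depth */

	static sem_t rq_sem;	/* counts free slots in the receive queue */

	/* Poster side: block until a slot is free, then post a buffer. */
	static int post_one_recv(void)
	{
		if (sem_wait(&rq_sem))	/* analogous to down_interruptible() */
			return -1;
		/* ... the real driver posts a receive buffer here ... */
		return 0;
	}

	/* Completion side: consuming a received buffer frees one slot. */
	static void on_recv_completion(void)
	{
		/* ... the real driver runs handle_recv() here ... */
		sem_post(&rq_sem);	/* analogous to up(&rdma->rq_sem) */
	}

	int main(void)
	{
		sem_init(&rq_sem, 0, RQ_DEPTH);	/* like sema_init(&rq_sem, rq_depth) */
		if (post_one_recv() == 0)
			on_recv_completion();
		printf("posted and completed one buffer\n");
		sem_destroy(&rq_sem);
		return 0;
	}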
Signed-off-by: Simon Derr <simon.derr@bull.net>
---
 net/9p/trans_rdma.c |   22 ++++++++++++----------
 1 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/net/9p/trans_rdma.c b/net/9p/trans_rdma.c
index 274a9c1..ad8dc33 100644
--- a/net/9p/trans_rdma.c
+++ b/net/9p/trans_rdma.c
@@ -73,7 +73,7 @@
  * @sq_depth: The depth of the Send Queue
  * @sq_sem: Semaphore for the SQ
  * @rq_depth: The depth of the Receive Queue.
- * @rq_count: Count of requests in the Receive Queue.
+ * @rq_sem: Semaphore for the RQ
  * @addr: The remote peer's address
  * @req_lock: Protects the active request list
  * @cm_done: Completion event for connection management tracking
@@ -98,7 +98,7 @@ struct p9_trans_rdma {
 	int sq_depth;
 	struct semaphore sq_sem;
 	int rq_depth;
-	atomic_t rq_count;
+	struct semaphore rq_sem;
 	struct sockaddr_in addr;
 	spinlock_t req_lock;
 
@@ -341,8 +341,8 @@ static void cq_comp_handler(struct ib_cq *cq, void *cq_context)
 
 	switch (c->wc_op) {
 	case IB_WC_RECV:
-		atomic_dec(&rdma->rq_count);
 		handle_recv(client, rdma, c, wc.status, wc.byte_len);
+		up(&rdma->rq_sem);
 		break;
 
 	case IB_WC_SEND:
@@ -441,12 +441,14 @@ static int rdma_request(struct p9_client *client, struct p9_req_t *req)
 	 * outstanding request, so we must keep a count to avoid
 	 * overflowing the RQ.
 	 */
-	if (atomic_inc_return(&rdma->rq_count) <= rdma->rq_depth) {
-		err = post_recv(client, rpl_context);
-		if (err)
-			goto err_free1;
-	} else
-		atomic_dec(&rdma->rq_count);
+	if (down_interruptible(&rdma->rq_sem))
+		goto error; /* FIXME : -EINTR instead */
+
+	err = post_recv(client, rpl_context);
+	if (err) {
+		p9_debug(P9_DEBUG_FCALL, "POST RECV failed\n");
+		goto err_free1;
+	}
 
 	/* remove posted receive buffer from request structure */
 	req->rc = NULL;
@@ -537,7 +539,7 @@ static struct p9_trans_rdma *alloc_rdma(struct p9_rdma_opts *opts)
 	spin_lock_init(&rdma->req_lock);
 	init_completion(&rdma->cm_done);
 	sema_init(&rdma->sq_sem, rdma->sq_depth);
-	atomic_set(&rdma->rq_count, 0);
+	sema_init(&rdma->rq_sem, rdma->rq_depth);
 
 	return rdma;
 }
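[Note on the design, as it appears in the patch: because rq_sem is
initialized to rq_depth, down_interruptible() only blocks once rq_depth
receive buffers are already outstanding, so posts are no longer dropped
when the queue fills. And since up() runs after handle_recv() in the
completion handler, a slot is released only once its buffer has been
consumed. The FIXME above marks remaining work: when down_interruptible()
is interrupted by a signal, rdma_request() should return -EINTR rather
than taking the generic error path.]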