From patchwork Sun Jun 18 15:21:41 2017
X-Patchwork-Submitter: Sagi Grimberg
X-Patchwork-Id: 9794859
From: Sagi Grimberg
To: linux-nvme@lists.infradead.org
Cc: Christoph Hellwig, Keith Busch, linux-block@vger.kernel.org
Subject: [PATCH rfc 07/30] nvme-rdma: make stop/free queue receive a ctrl and qid struct
Date: Sun, 18 Jun 2017 18:21:41 +0300
Message-Id: <1497799324-19598-8-git-send-email-sagi@grimberg.me>
In-Reply-To: <1497799324-19598-1-git-send-email-sagi@grimberg.me>
References: <1497799324-19598-1-git-send-email-sagi@grimberg.me>
X-Mailing-List: linux-block@vger.kernel.org

Make it symmetrical to alloc/start queue.
Signed-off-by: Sagi Grimberg
Reviewed-by: Christoph Hellwig
---
 drivers/nvme/host/rdma.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index c8016150dc21..86998de90f52 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -563,16 +563,20 @@ static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl,
 	return ret;
 }
 
-static void nvme_rdma_stop_queue(struct nvme_rdma_queue *queue)
+static void nvme_rdma_stop_queue(struct nvme_rdma_ctrl *ctrl, int qid)
 {
+	struct nvme_rdma_queue *queue = &ctrl->queues[qid];
+
 	if (test_bit(NVME_RDMA_Q_DELETING, &queue->flags))
 		return;
 	rdma_disconnect(queue->cm_id);
 	ib_drain_qp(queue->qp);
 }
 
-static void nvme_rdma_free_queue(struct nvme_rdma_queue *queue)
+static void nvme_rdma_free_queue(struct nvme_rdma_ctrl *ctrl, int qid)
 {
+	struct nvme_rdma_queue *queue = &ctrl->queues[qid];
+
 	if (test_and_set_bit(NVME_RDMA_Q_DELETING, &queue->flags))
 		return;
 	nvme_rdma_destroy_queue_ib(queue);
@@ -584,7 +588,7 @@ static void nvme_rdma_free_io_queues(struct nvme_rdma_ctrl *ctrl)
 	int i;
 
 	for (i = 1; i < ctrl->queue_count; i++)
-		nvme_rdma_free_queue(&ctrl->queues[i]);
+		nvme_rdma_free_queue(ctrl, i);
 }
 
 static void nvme_rdma_stop_io_queues(struct nvme_rdma_ctrl *ctrl)
@@ -592,7 +596,7 @@ static void nvme_rdma_stop_io_queues(struct nvme_rdma_ctrl *ctrl)
 	int i;
 
 	for (i = 1; i < ctrl->queue_count; i++)
-		nvme_rdma_stop_queue(&ctrl->queues[i]);
+		nvme_rdma_stop_queue(ctrl, i);
 }
 
 static void nvme_rdma_destroy_io_queues(struct nvme_rdma_ctrl *ctrl, bool remove)
@@ -637,7 +641,7 @@ static int nvme_rdma_start_io_queues(struct nvme_rdma_ctrl *ctrl)
 
 out_stop_queues:
 	for (i--; i >= 1; i--)
-		nvme_rdma_stop_queue(&ctrl->queues[i]);
+		nvme_rdma_stop_queue(ctrl, i);
 	return ret;
 }
 
@@ -680,7 +684,7 @@ static int nvme_rdma_alloc_io_queues(struct nvme_rdma_ctrl *ctrl)
 
 out_free_queues:
 	for (i--; i >= 1; i--)
-		nvme_rdma_free_queue(&ctrl->queues[i]);
+		nvme_rdma_free_queue(ctrl, i);
 	return ret;
 }
 
@@ -752,7 +756,7 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
 static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl,
 		bool remove)
 {
-	nvme_rdma_stop_queue(&ctrl->queues[0]);
+	nvme_rdma_stop_queue(ctrl, 0);
 	if (remove) {
 		blk_cleanup_queue(ctrl->ctrl.admin_connect_q);
 		blk_cleanup_queue(ctrl->ctrl.admin_q);
@@ -762,7 +766,7 @@ static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl, bool remo
 	nvme_rdma_free_qe(ctrl->queues[0].device->dev, &ctrl->async_event_sqe,
 		sizeof(struct nvme_command), DMA_TO_DEVICE);
-	nvme_rdma_free_queue(&ctrl->queues[0]);
+	nvme_rdma_free_queue(ctrl, 0);
 }
 
 static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl, bool new)
@@ -864,14 +868,14 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl, bool new
 out_free_tagset:
 	if (new) {
 		/* disconnect and drain the queue before freeing the tagset */
-		nvme_rdma_stop_queue(&ctrl->queues[0]);
+		nvme_rdma_stop_queue(ctrl, 0);
 		blk_mq_free_tag_set(&ctrl->admin_tag_set);
 	}
 out_put_dev:
 	if (new)
 		nvme_rdma_dev_put(ctrl->device);
 out_free_queue:
-	nvme_rdma_free_queue(&ctrl->queues[0]);
+	nvme_rdma_free_queue(ctrl, 0);
 	return error;
 }