From patchwork Sun Jun 18 15:21:52 2017
X-Patchwork-Submitter: Sagi Grimberg
X-Patchwork-Id: 9794883
From: Sagi Grimberg
To: linux-nvme@lists.infradead.org
Cc: Christoph Hellwig, Keith Busch, linux-block@vger.kernel.org
Subject: [PATCH rfc 18/30] nvme-rdma: limit max_queues to rdma device number of completion vectors
Date: Sun, 18 Jun 2017 18:21:52 +0300
Message-Id: <1497799324-19598-19-git-send-email-sagi@grimberg.me>
In-Reply-To: <1497799324-19598-1-git-send-email-sagi@grimberg.me>
References: <1497799324-19598-1-git-send-email-sagi@grimberg.me>
X-Mailing-List: linux-block@vger.kernel.org

nvme_rdma_alloc_io_queues is heading to generic code, so we want to
decouple it from its dependency on the various transports.

Signed-off-by: Sagi Grimberg
---
 drivers/nvme/host/rdma.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 7f4b66cf67cc..ce63dd40e6b4 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -625,6 +625,14 @@ static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl,
 	ctrl->ctrl.max_hw_sectors =
 		(ctrl->max_fr_pages - 1) << (PAGE_SHIFT - 9);
 
+	/*
+	 * we map queues according to the device irq vectors for
+	 * optimal locality so we don't need more queues than
+	 * completion vectors.
+	 */
+	ctrl->ctrl.max_queues = min_t(u32, ctrl->ctrl.max_queues,
+			ctrl->device->dev->num_comp_vectors + 1);
+
 	ret = nvme_rdma_alloc_qe(ctrl->queues[0].device->dev,
 			&ctrl->async_event_sqe, sizeof(struct nvme_command),
 			DMA_TO_DEVICE);
@@ -632,7 +640,6 @@ static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl,
 		nvme_rdma_destroy_queue_ib(&ctrl->queues[0]);
 		goto out_destroy_cm_id;
 	}
-	}
 
 	clear_bit(NVME_RDMA_Q_DELETING, &queue->flags);
 
@@ -733,18 +740,9 @@ static int nvme_rdma_start_io_queues(struct nvme_rdma_ctrl *ctrl)
 static int nvme_rdma_alloc_io_queues(struct nvme_rdma_ctrl *ctrl)
 {
 	unsigned int nr_io_queues = ctrl->ctrl.max_queues - 1;
-	struct ib_device *ibdev = ctrl->device->dev;
 	int i, ret;
 
 	nr_io_queues = min(nr_io_queues, num_online_cpus());
-	/*
-	 * we map queues according to the device irq vectors for
-	 * optimal locality so we don't need more queues than
-	 * completion vectors.
-	 */
-	nr_io_queues = min_t(unsigned int, nr_io_queues,
-			ibdev->num_comp_vectors);
-
 	ret = nvme_set_queue_count(&ctrl->ctrl, &nr_io_queues);
 	if (ret)
 		return ret;