From patchwork Sun Jul  2 15:01:34 2017
X-Patchwork-Submitter: Sagi Grimberg <sagi@grimberg.me>
X-Patchwork-Id: 9821599
From: Sagi Grimberg <sagi@grimberg.me>
To: linux-nvme@lists.infradead.org, Christoph Hellwig, James Smart,
	Keith Busch, linux-rdma@vger.kernel.org
Subject: [PATCH rfc 3/3] nvmet-rdma: assign cq completion vector based on
	the port allowed cpus
Date: Sun, 2 Jul 2017 18:01:34 +0300
Message-Id: <1499007694-7231-4-git-send-email-sagi@grimberg.me>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1499007694-7231-1-git-send-email-sagi@grimberg.me>
References: <1499007694-7231-1-git-send-email-sagi@grimberg.me>
List-ID: <linux-rdma.vger.kernel.org>

We first take a cpu assignment from the port's configured cpulist
(spreading the queues uniformly across those cpus), then ask the rdma
core API for a completion vector whose irq affinity matches that cpu.
If the device does not expose a vector affinity mask, or no matching
vector is found, we fall back to the old behavior, as we don't have
sufficient information to make the "correct" vector assignment.
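
For reference, a minimal sketch of what the cpu-to-vector lookup relied
on below could look like. The real ib_find_cpu_vector() helper is
introduced earlier in this series and is not shown here; this sketch
assumes an rdma core accessor along the lines of
ib_get_vector_affinity(), so treat the exact interface as an
assumption, not the actual implementation:

/*
 * Assumed sketch only: return true and set *vec to the first
 * completion vector whose affinity mask covers @cpu, false if the
 * device exposes no affinity information or no vector matches.
 */
static bool ib_find_cpu_vector(struct ib_device *dev, int cpu, int *vec)
{
	int v;

	for (v = 0; v < dev->num_comp_vectors; v++) {
		const struct cpumask *mask = ib_get_vector_affinity(dev, v);

		/* No affinity mask exposed: we can't match anything */
		if (!mask)
			return false;

		if (cpumask_test_cpu(cpu, mask)) {
			*vec = v;
			return true;
		}
	}
	return false;
}

With something like that in place, nvmet_rdma_assign_vector() below
simply maps queue->idx onto the port cpulist and lets the device tell
us which vector (if any) is affine to that cpu.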
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/target/rdma.c | 40 ++++++++++++++++++++++++++++------------
 1 file changed, 28 insertions(+), 12 deletions(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 56a4cba690b5..a1725d3e174a 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -889,27 +889,43 @@ nvmet_rdma_find_get_device(struct rdma_cm_id *cm_id)
 	return NULL;
 }
 
-static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
+static int nvmet_rdma_assign_vector(struct nvmet_rdma_queue *queue)
 {
-	struct ib_qp_init_attr qp_attr;
-	struct nvmet_rdma_device *ndev = queue->dev;
-	int comp_vector, nr_cqe, ret, i;
+	struct ib_device *dev = queue->dev->device;
+	struct nvmet_port *port = queue->port;
+	int vec, cpu;
 
 	/*
-	 * Spread the io queues across completion vectors,
-	 * but still keep all admin queues on vector 0.
+	 * Spread the io queues across port cpus,
+	 * but still keep all admin queues on cpu 0.
 	 */
-	comp_vector = !queue->host_qid ? 0 :
-		queue->idx % ndev->device->num_comp_vectors;
+	cpu = !queue->host_qid ? 0 : port->cpus[queue->idx % port->nr_cpus];
+
+	if (ib_find_cpu_vector(dev, cpu, &vec))
+		return vec;
+
+	pr_debug("device %s could not provide vector to match cpu %d\n",
+		dev->name, cpu);
+	/*
+	 * No corresponding vector affinity found, fallback to
+	 * the old behavior where we spread vectors all over...
+	 */
+	return !queue->host_qid ? 0 : queue->idx % dev->num_comp_vectors;
+}
+
+static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
+{
+	struct ib_qp_init_attr qp_attr;
+	struct nvmet_rdma_device *ndev = queue->dev;
+	int nr_cqe, ret, i;
 
 	/*
 	 * Reserve CQ slots for RECV + RDMA_READ/RDMA_WRITE + RDMA_SEND.
 	 */
 	nr_cqe = queue->recv_queue_size + 2 * queue->send_queue_size;
 
-	queue->cq = ib_alloc_cq(ndev->device, queue,
-			nr_cqe + 1, comp_vector,
-			IB_POLL_WORKQUEUE);
+	queue->cq = ib_alloc_cq(ndev->device, queue, nr_cqe + 1,
+			nvmet_rdma_assign_vector(queue), IB_POLL_WORKQUEUE);
 	if (IS_ERR(queue->cq)) {
 		ret = PTR_ERR(queue->cq);
 		pr_err("failed to create CQ cqe= %d ret= %d\n",
@@ -1080,6 +1096,7 @@ nvmet_rdma_alloc_queue(struct nvmet_rdma_device *ndev,
 	INIT_WORK(&queue->release_work, nvmet_rdma_release_queue_work);
 	queue->dev = ndev;
 	queue->cm_id = cm_id;
+	queue->port = cm_id->context;
 
 	spin_lock_init(&queue->state_lock);
 	queue->state = NVMET_RDMA_Q_CONNECTING;
@@ -1198,7 +1215,6 @@ static int nvmet_rdma_queue_connect(struct rdma_cm_id *cm_id,
 		ret = -ENOMEM;
 		goto put_device;
 	}
-	queue->port = cm_id->context;
 
 	if (queue->host_qid == 0) {
 		/* Let inflight controller teardown complete */