From patchwork Wed Nov 8 09:57:42 2017
X-Patchwork-Submitter: Sagi Grimberg
X-Patchwork-Id: 10048171
From: Sagi Grimberg
To: linux-rdma@vger.kernel.org
Cc: linux-nvme@lists.infradead.org, Christoph Hellwig, Max Gurtuvoy
Subject: [PATCH v3 9/9] nvmet-rdma: assign cq completion vector based on the
 port allowed cpus
Date: Wed, 8 Nov 2017 11:57:42 +0200
Message-Id: <20171108095742.25365-10-sagi@grimberg.me>
In-Reply-To: <20171108095742.25365-1-sagi@grimberg.me>
References: <20171108095742.25365-1-sagi@grimberg.me>
X-Mailing-List: linux-rdma@vger.kernel.org

We take a cpu assignment from the port's configured cpulist (spread
uniformly across it) and pass it to the queue pair as an affinity hint.

Note that if the rdma device does not expose a vector affinity mask, or
the core could not find a match, we fall back to the old behavior, as we
don't have sufficient information to make the "correct" vector
assignment.
Signed-off-by: Sagi Grimberg
---
 drivers/nvme/target/rdma.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index d9cdfd2bd623..98d7f2ded511 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -892,7 +892,8 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
 {
 	struct ib_qp_init_attr qp_attr;
 	struct nvmet_rdma_device *ndev = queue->dev;
-	int ret, i;
+	struct nvmet_port *port = queue->port;
+	int ret, cpu, i;
 
 	memset(&qp_attr, 0, sizeof(qp_attr));
 	qp_attr.create_flags = IB_QP_CREATE_ASSIGN_CQS;
@@ -916,6 +917,14 @@ static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
 	else
 		qp_attr.cap.max_recv_sge = 2;
 
+	/*
+	 * Spread the io queues across port cpus,
+	 * but still keep all admin queues on cpu 0.
+	 */
+	cpu = !queue->host_qid ? 0 : port->cpus[queue->idx % port->nr_cpus];
+	qp_attr.affinity_hint = cpu;
+	qp_attr.create_flags |= IB_QP_CREATE_AFFINITY_HINT;
+
 	ret = rdma_create_qp(queue->cm_id, ndev->pd, &qp_attr);
 	if (ret) {
 		pr_err("failed to create_qp ret= %d\n", ret);
@@ -1052,6 +1061,7 @@ nvmet_rdma_alloc_queue(struct nvmet_rdma_device *ndev,
 	INIT_WORK(&queue->release_work, nvmet_rdma_release_queue_work);
 	queue->dev = ndev;
 	queue->cm_id = cm_id;
+	queue->port = cm_id->context;
 
 	spin_lock_init(&queue->state_lock);
 	queue->state = NVMET_RDMA_Q_CONNECTING;
@@ -1170,7 +1180,6 @@ static int nvmet_rdma_queue_connect(struct rdma_cm_id *cm_id,
 		ret = -ENOMEM;
 		goto put_device;
 	}
-	queue->port = cm_id->context;
 
 	if (queue->host_qid == 0) {
 		/* Let inflight controller teardown complete */