From patchwork Thu Feb 21 00:20:36 2019
From: Matthew Wilcox
To: Jason Gunthorpe
Cc: Matthew Wilcox, linux-rdma@vger.kernel.org
Subject: [PATCH 01/32] mlx4: Convert srq_table->tree to XArray
Date: Wed, 20 Feb 2019 16:20:36 -0800
Message-Id: <20190221002107.22625-2-willy@infradead.org>
In-Reply-To: <20190221002107.22625-1-willy@infradead.org>
References: <20190221002107.22625-1-willy@infradead.org>

Adjust the locking to not disable interrupts; this isn't needed.
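A recurring theme in this series is choosing between the XArray's plain and
irq-disabling lock variants. As background for the changelog above, a minimal
editorial sketch (not code from the patch; the srqs/my_srq names are
hypothetical): if no interrupt handler touches the array, xa_lock() is enough
and interrupts can stay enabled.

#include <linux/xarray.h>

struct my_srq;			/* opaque in this sketch */

static struct my_srq *my_srq_lookup(struct xarray *srqs, unsigned long srqn)
{
	struct my_srq *srq;

	/* xa_lock_irq() would only be needed if an interrupt handler
	 * also took this lock; here plain xa_lock() is enough. */
	xa_lock(srqs);
	srq = xa_load(srqs, srqn);
	xa_unlock(srqs);

	return srq;
}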
Signed-off-by: Matthew Wilcox
---
 drivers/infiniband/hw/mlx4/cq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/mlx4/cq.c b/drivers/infiniband/hw/mlx4/cq.c
index 43512347b4f0..ba66ef09bc7c 100644
--- a/drivers/infiniband/hw/mlx4/cq.c
+++ b/drivers/infiniband/hw/mlx4/cq.c
@@ -738,7 +738,7 @@ static int mlx4_ib_poll_one(struct mlx4_ib_cq *cq,
 			u32 srq_num;
 			g_mlpath_rqpn = be32_to_cpu(cqe->g_mlpath_rqpn);
 			srq_num = g_mlpath_rqpn & 0xffffff;
-			/* SRQ is also in the radix tree */
+			/* SRQ is also in the xarray */
 			msrq = mlx4_srq_lookup(to_mdev(cq->ibcq.device)->dev,
 					       srq_num);
 		}

From patchwork Thu Feb 21 00:20:37 2019
From: Matthew Wilcox
To: Jason Gunthorpe
Cc: Matthew Wilcox, linux-rdma@vger.kernel.org
Subject: [PATCH 02/32] mlx5: Convert mlx5_srq_table to XArray
Date: Wed, 20 Feb 2019 16:20:37 -0800
Message-Id: <20190221002107.22625-3-willy@infradead.org>
In-Reply-To: <20190221002107.22625-1-willy@infradead.org>
References: <20190221002107.22625-1-willy@infradead.org>
Remove the custom spinlock as the XArray handles its own locking.

Signed-off-by: Matthew Wilcox
Acked-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/srq.h     |  5 +----
 drivers/infiniband/hw/mlx5/srq_cmd.c | 27 +++++++++------------------
 2 files changed, 10 insertions(+), 22 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/srq.h b/drivers/infiniband/hw/mlx5/srq.h
index 75eb5839ae95..3d4482762712 100644
--- a/drivers/infiniband/hw/mlx5/srq.h
+++ b/drivers/infiniband/hw/mlx5/srq.h
@@ -53,10 +53,7 @@ struct mlx5_core_srq {
 
 struct mlx5_srq_table {
 	struct notifier_block nb;
-	/* protect radix tree
-	 */
-	spinlock_t lock;
-	struct radix_tree_root tree;
+	struct xarray array;
 };
 
 int mlx5_cmd_create_srq(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq,
diff --git a/drivers/infiniband/hw/mlx5/srq_cmd.c b/drivers/infiniband/hw/mlx5/srq_cmd.c
index 7aaaffbd4afa..94c0a9d84c5b 100644
--- a/drivers/infiniband/hw/mlx5/srq_cmd.c
+++ b/drivers/infiniband/hw/mlx5/srq_cmd.c
@@ -83,13 +83,11 @@ struct mlx5_core_srq *mlx5_cmd_get_srq(struct mlx5_ib_dev *dev, u32 srqn)
 	struct mlx5_srq_table *table = &dev->srq_table;
 	struct mlx5_core_srq *srq;
 
-	spin_lock(&table->lock);
-
-	srq = radix_tree_lookup(&table->tree, srqn);
+	xa_lock(&table->array);
+	srq = xa_load(&table->array, srqn);
 	if (srq)
 		atomic_inc(&srq->refcount);
-
-	spin_unlock(&table->lock);
+	xa_unlock(&table->array);
 
 	return srq;
 }
@@ -597,9 +595,7 @@ int mlx5_cmd_create_srq(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq,
 	atomic_set(&srq->refcount, 1);
 	init_completion(&srq->free);
 
-	spin_lock_irq(&table->lock);
-	err = radix_tree_insert(&table->tree, srq->srqn, srq);
-	spin_unlock_irq(&table->lock);
+	err = xa_err(xa_store_irq(&table->array, srq->srqn, srq, GFP_KERNEL));
 	if (err)
 		goto err_destroy_srq_split;
 
@@ -617,9 +613,7 @@ int mlx5_cmd_destroy_srq(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq)
 	struct mlx5_core_srq *tmp;
 	int err;
 
-	spin_lock_irq(&table->lock);
-	tmp = radix_tree_delete(&table->tree, srq->srqn);
-	spin_unlock_irq(&table->lock);
+	tmp = xa_erase_irq(&table->array, srq->srqn);
 	if (!tmp || tmp != srq)
 		return -EINVAL;
 
@@ -681,13 +675,11 @@ static int srq_event_notifier(struct notifier_block *nb,
 	eqe = data;
 	srqn = be32_to_cpu(eqe->data.qp_srq.qp_srq_n) & 0xffffff;
 
-	spin_lock(&table->lock);
-
-	srq = radix_tree_lookup(&table->tree, srqn);
+	xa_lock(&table->array);
+	srq = xa_load(&table->array, srqn);
 	if (srq)
 		atomic_inc(&srq->refcount);
-
-	spin_unlock(&table->lock);
+	xa_unlock(&table->array);
 
 	if (!srq)
 		return NOTIFY_OK;
@@ -705,8 +697,7 @@ int mlx5_init_srq_table(struct mlx5_ib_dev *dev)
 	struct mlx5_srq_table *table = &dev->srq_table;
 
 	memset(table, 0, sizeof(*table));
-	spin_lock_init(&table->lock);
-	INIT_RADIX_TREE(&table->tree, GFP_ATOMIC);
+	xa_init_flags(&table->array, XA_FLAGS_LOCK_IRQ);
 
 	table->nb.notifier_call = srq_event_notifier;
 	mlx5_notifier_register(dev->mdev, &table->nb);
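For readers following the conversion, a self-contained sketch of the
before/after idiom used in this patch (illustration only; struct obj_table,
struct obj and the function names are stand-ins, not the driver's API). The
XArray's internal spinlock replaces the driver's own spinlock_t plus
radix_tree_root pair.

#include <linux/xarray.h>
#include <linux/refcount.h>

struct obj {
	refcount_t ref;
};

struct obj_table {
	struct xarray objs;	/* replaces spinlock_t + struct radix_tree_root */
};

static void obj_table_init(struct obj_table *t)
{
	/* Mirror the patch: store/erase take the lock in irq-disabling
	 * mode, so mark the array accordingly. */
	xa_init_flags(&t->objs, XA_FLAGS_LOCK_IRQ);
}

static int obj_insert(struct obj_table *t, unsigned long id, struct obj *obj)
{
	/* xa_store_irq() takes xa_lock_irq() internally; xa_err() turns
	 * an error entry into a negative errno. */
	return xa_err(xa_store_irq(&t->objs, id, obj, GFP_KERNEL));
}

static struct obj *obj_get(struct obj_table *t, unsigned long id)
{
	struct obj *obj;

	xa_lock(&t->objs);
	obj = xa_load(&t->objs, id);
	if (obj)
		refcount_inc(&obj->ref);
	xa_unlock(&t->objs);

	return obj;
}

static struct obj *obj_remove(struct obj_table *t, unsigned long id)
{
	return xa_erase_irq(&t->objs, id);
}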
From patchwork Thu Feb 21 00:20:38 2019
From: Matthew Wilcox
To: Jason Gunthorpe
Cc: Matthew Wilcox, linux-rdma@vger.kernel.org
Subject: [PATCH 03/32] mlx5: Convert mkey_table to XArray
Date: Wed, 20 Feb 2019 16:20:38 -0800
Message-Id: <20190221002107.22625-4-willy@infradead.org>
In-Reply-To: <20190221002107.22625-1-willy@infradead.org>
References: <20190221002107.22625-1-willy@infradead.org>

The lock protecting the data structure does not need to be an rwlock. The
only read access to the lock is in an error path, and if that's limiting
your scalability, you have bigger performance problems.

Eliminate mlx5_mkey_table in favour of using the xarray directly. Continue
to use GFP_ATOMIC for allocating XArray nodes as we may be called in
interrupt context.
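A short sketch of the GFP_ATOMIC point made above (editorial illustration;
publish_key and the mkeys parameter are hypothetical names, not the driver's
API). When the store happens with interrupts disabled, the internal node
allocation must not sleep:

#include <linux/xarray.h>

static int publish_key(struct xarray *mkeys, unsigned long index, void *mkey)
{
	unsigned long flags;
	int err;

	/* The caller may be in interrupt context: take the xa_lock with
	 * interrupts disabled and allocate nodes with GFP_ATOMIC. */
	xa_lock_irqsave(mkeys, flags);
	err = xa_err(__xa_store(mkeys, index, mkey, GFP_ATOMIC));
	xa_unlock_irqrestore(mkeys, flags);

	return err;
}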
Signed-off-by: Matthew Wilcox
---
 drivers/infiniband/hw/mlx5/cq.c |  4 ++--
 drivers/infiniband/hw/mlx5/mr.c | 10 +++++-----
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
index 90f1b0bae5b5..3326e07ab7ae 100644
--- a/drivers/infiniband/hw/mlx5/cq.c
+++ b/drivers/infiniband/hw/mlx5/cq.c
@@ -522,7 +522,7 @@ static int mlx5_poll_one(struct mlx5_ib_cq *cq,
 	case MLX5_CQE_SIG_ERR:
 		sig_err_cqe = (struct mlx5_sig_err_cqe *)cqe64;
 
-		read_lock(&dev->mdev->priv.mkey_table.lock);
+		xa_lock(&dev->mdev->priv.mkey_table);
 		mmkey = __mlx5_mr_lookup(dev->mdev,
 					 mlx5_base_mkey(be32_to_cpu(sig_err_cqe->mkey)));
 		mr = to_mibmr(mmkey);
@@ -537,7 +537,7 @@ static int mlx5_poll_one(struct mlx5_ib_cq *cq,
 			     mr->sig->err_item.expected,
 			     mr->sig->err_item.actual);
 
-		read_unlock(&dev->mdev->priv.mkey_table.lock);
+		xa_unlock(&dev->mdev->priv.mkey_table);
 		goto repoll;
 	}
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index fd6ea1f75085..65cf11080bbd 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -132,7 +132,7 @@ static void reg_mr_callback(int status, void *context)
 	struct mlx5_cache_ent *ent = &cache->ent[c];
 	u8 key;
 	unsigned long flags;
-	struct mlx5_mkey_table *table = &dev->mdev->priv.mkey_table;
+	struct xarray *mkeys = &dev->mdev->priv.mkey_table;
 	int err;
 
 	spin_lock_irqsave(&ent->lock, flags);
@@ -160,12 +160,12 @@ static void reg_mr_callback(int status, void *context)
 	ent->size++;
 	spin_unlock_irqrestore(&ent->lock, flags);
 
-	write_lock_irqsave(&table->lock, flags);
-	err = radix_tree_insert(&table->tree, mlx5_base_mkey(mr->mmkey.key),
-				&mr->mmkey);
+	xa_lock_irqsave(mkeys, flags);
+	err = xa_err(__xa_store(mkeys, mlx5_base_mkey(mr->mmkey.key),
+				&mr->mmkey, GFP_ATOMIC));
 	if (err)
0x%x\n", -err); - write_unlock_irqrestore(&table->lock, flags); + xa_unlock_irqrestore(mkeys, flags); if (!completion_done(&ent->compl)) complete(&ent->compl); From patchwork Thu Feb 21 00:20:39 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822935 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 479D513A4 for ; Thu, 21 Feb 2019 00:21:24 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 36E712FC81 for ; Thu, 21 Feb 2019 00:21:24 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2B8BF2FC82; Thu, 21 Feb 2019 00:21:24 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 95C142FC4D for ; Thu, 21 Feb 2019 00:21:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726688AbfBUAVL (ORCPT ); Wed, 20 Feb 2019 19:21:11 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52404 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726418AbfBUAVK (ORCPT ); Wed, 20 Feb 2019 19:21:10 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=NSTIlI1iewPXtFY3Q4OVYOyylPNIBMgq9sPD7a2LgLs=; b=uAs2/xJI9p+1jSvuojriNlqvE MgNK7tEiEyegdI/LYtK9JyCe5U2mVsIFRgeWKkHQQHmTboJAH/mz9wBG7dDWPqsy2pddLBkEBrJjs NnjrZCeo5yNC6HEPy/ULs1BJqKU7WLMnYrTSGcMlpqbujnorFTgtFnxCiIaVbcTP2v3rZ3TYQ8dUL kBScG7JFxu+RSKFTYunPVSeAMdpgwNuR3uIIKHGLU4NsczySe5/NTmnny5XQqmJeeVjWa3Pn3FTVf n5am4B3pbK4oLXukT+b1ag2mdd7N+0uhzQEPFt+Vqx6mQnHoWrTzsB8n6RjpeJV7DTuOoxcUksyXA N24LIo4Ng==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6w-0005uN-0m; Thu, 21 Feb 2019 00:21:10 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 04/32] RDMA/hns: Convert cq_table to XArray Date: Wed, 20 Feb 2019 16:20:39 -0800 Message-Id: <20190221002107.22625-5-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Change the locking to not disable interrupts as the lookup in interrupt context will not see a freed CQ, thanks to the synchronize_irq() call before freeing the CQ. 
Signed-off-by: Matthew Wilcox --- drivers/infiniband/hw/hns/hns_roce_cq.c | 33 +++++++-------------- drivers/infiniband/hw/hns/hns_roce_device.h | 3 +- 2 files changed, 12 insertions(+), 24 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c index 3a485f50fede..81f934930fec 100644 --- a/drivers/infiniband/hw/hns/hns_roce_cq.c +++ b/drivers/infiniband/hw/hns/hns_roce_cq.c @@ -127,13 +127,9 @@ static int hns_roce_cq_alloc(struct hns_roce_dev *hr_dev, int nent, goto err_out; } - /* The cq insert radix tree */ - spin_lock_irq(&cq_table->lock); - /* Radix_tree: The associated pointer and long integer key value like */ - ret = radix_tree_insert(&cq_table->tree, hr_cq->cqn, hr_cq); - spin_unlock_irq(&cq_table->lock); + ret = xa_err(xa_store(&cq_table->array, hr_cq->cqn, hr_cq, GFP_KERNEL)); if (ret) { - dev_err(dev, "CQ alloc.Failed to radix_tree_insert.\n"); + dev_err(dev, "CQ alloc failed xa_store.\n"); goto err_put; } @@ -141,7 +137,7 @@ static int hns_roce_cq_alloc(struct hns_roce_dev *hr_dev, int nent, mailbox = hns_roce_alloc_cmd_mailbox(hr_dev); if (IS_ERR(mailbox)) { ret = PTR_ERR(mailbox); - goto err_radix; + goto err_xa; } hr_dev->hw->write_cqc(hr_dev, hr_cq, mailbox->buf, mtts, dma_handle, @@ -152,7 +148,7 @@ static int hns_roce_cq_alloc(struct hns_roce_dev *hr_dev, int nent, hns_roce_free_cmd_mailbox(hr_dev, mailbox); if (ret) { dev_err(dev, "CQ alloc.Failed to cmd mailbox.\n"); - goto err_radix; + goto err_xa; } hr_cq->cons_index = 0; @@ -164,10 +160,8 @@ static int hns_roce_cq_alloc(struct hns_roce_dev *hr_dev, int nent, return 0; -err_radix: - spin_lock_irq(&cq_table->lock); - radix_tree_delete(&cq_table->tree, hr_cq->cqn); - spin_unlock_irq(&cq_table->lock); +err_xa: + xa_erase(&cq_table->array, hr_cq->cqn); err_put: hns_roce_table_put(hr_dev, &cq_table->table, hr_cq->cqn); @@ -197,6 +191,8 @@ void hns_roce_free_cq(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq) dev_err(dev, "HW2SW_CQ failed (%d) for CQN %06lx\n", ret, hr_cq->cqn); + xa_erase(&cq_table->array, hr_cq->cqn); + /* Waiting interrupt process procedure carried out */ synchronize_irq(hr_dev->eq_table.eq[hr_cq->vector].irq); @@ -205,10 +201,6 @@ void hns_roce_free_cq(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq) complete(&hr_cq->free); wait_for_completion(&hr_cq->free); - spin_lock_irq(&cq_table->lock); - radix_tree_delete(&cq_table->tree, hr_cq->cqn); - spin_unlock_irq(&cq_table->lock); - hns_roce_table_put(hr_dev, &cq_table->table, hr_cq->cqn); hns_roce_bitmap_free(&cq_table->bitmap, hr_cq->cqn, BITMAP_NO_RR); } @@ -490,8 +482,7 @@ void hns_roce_cq_completion(struct hns_roce_dev *hr_dev, u32 cqn) struct device *dev = hr_dev->dev; struct hns_roce_cq *cq; - cq = radix_tree_lookup(&hr_dev->cq_table.tree, - cqn & (hr_dev->caps.num_cqs - 1)); + cq = xa_load(&hr_dev->cq_table.array, cqn & (hr_dev->caps.num_cqs - 1)); if (!cq) { dev_warn(dev, "Completion event for bogus CQ 0x%08x\n", cqn); return; @@ -508,8 +499,7 @@ void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type) struct device *dev = hr_dev->dev; struct hns_roce_cq *cq; - cq = radix_tree_lookup(&cq_table->tree, - cqn & (hr_dev->caps.num_cqs - 1)); + cq = xa_load(&cq_table->array, cqn & (hr_dev->caps.num_cqs - 1)); if (cq) atomic_inc(&cq->refcount); @@ -529,8 +519,7 @@ int hns_roce_init_cq_table(struct hns_roce_dev *hr_dev) { struct hns_roce_cq_table *cq_table = &hr_dev->cq_table; - spin_lock_init(&cq_table->lock); - INIT_RADIX_TREE(&cq_table->tree, GFP_ATOMIC); + 
xa_init(&cq_table->array);
 
 	return hns_roce_bitmap_init(&cq_table->bitmap, hr_dev->caps.num_cqs,
 				    hr_dev->caps.num_cqs - 1,
diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index 509e467843f6..4f63fe8869b2 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -486,8 +486,7 @@ struct hns_roce_qp_table {
 
 struct hns_roce_cq_table {
 	struct hns_roce_bitmap bitmap;
-	spinlock_t lock;
-	struct radix_tree_root tree;
+	struct xarray array;
 	struct hns_roce_hem_table table;
 };

From patchwork Thu Feb 21 00:20:40 2019
From: Matthew Wilcox
To: Jason Gunthorpe
Cc: Matthew Wilcox, linux-rdma@vger.kernel.org
Subject: [PATCH 05/32] RDMA/hns: Convert qp_table_tree to XArray
Date: Wed, 20 Feb 2019 16:20:40 -0800
Message-Id: <20190221002107.22625-6-willy@infradead.org>
In-Reply-To: <20190221002107.22625-1-willy@infradead.org>
References: <20190221002107.22625-1-willy@infradead.org>
X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Also fully initialise the qp before storing it in the XArray. Signed-off-by: Matthew Wilcox --- drivers/infiniband/hw/hns/hns_roce_device.h | 6 +-- drivers/infiniband/hw/hns/hns_roce_qp.c | 51 ++++++--------------- 2 files changed, 17 insertions(+), 40 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h index 4f63fe8869b2..72a10a2953a5 100644 --- a/drivers/infiniband/hw/hns/hns_roce_device.h +++ b/drivers/infiniband/hw/hns/hns_roce_device.h @@ -478,7 +478,6 @@ struct hns_roce_uar_table { struct hns_roce_qp_table { struct hns_roce_bitmap bitmap; - spinlock_t lock; struct hns_roce_hem_table qp_table; struct hns_roce_hem_table irrl_table; struct hns_roce_hem_table trrl_table; @@ -904,7 +903,7 @@ struct hns_roce_dev { int irq[HNS_ROCE_MAX_IRQ_NUM]; u8 __iomem *reg_base; struct hns_roce_caps caps; - struct radix_tree_root qp_table_tree; + struct xarray qp_table_xa; unsigned char dev_addr[HNS_ROCE_MAX_PORTS][MAC_ADDR_OCTET_NUM]; u64 sys_image_guid; @@ -992,8 +991,7 @@ static inline void hns_roce_write64_k(__le32 val[2], void __iomem *dest) static inline struct hns_roce_qp *__hns_roce_qp_lookup(struct hns_roce_dev *hr_dev, u32 qpn) { - return radix_tree_lookup(&hr_dev->qp_table_tree, - qpn & (hr_dev->caps.num_qps - 1)); + return xa_load(&hr_dev->qp_table_xa, qpn & (hr_dev->caps.num_qps - 1)); } static inline void *hns_roce_buf_offset(struct hns_roce_buf *buf, int offset) diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c index 54031c5b53fa..dbcec003fb21 100644 --- a/drivers/infiniband/hw/hns/hns_roce_qp.c +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c @@ -44,17 +44,14 @@ void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type) { - struct hns_roce_qp_table *qp_table = &hr_dev->qp_table; struct device *dev = hr_dev->dev; struct hns_roce_qp *qp; - spin_lock(&qp_table->lock); - + xa_lock(&hr_dev->qp_table_xa); qp = __hns_roce_qp_lookup(hr_dev, qpn); if (qp) atomic_inc(&qp->refcount); - - spin_unlock(&qp_table->lock); + xa_unlock(&hr_dev->qp_table_xa); if (!qp) { dev_warn(dev, "Async event for bogus QP %08x\n", qpn); @@ -146,29 +143,21 @@ EXPORT_SYMBOL_GPL(to_hns_roce_state); static int hns_roce_gsi_qp_alloc(struct hns_roce_dev *hr_dev, unsigned long qpn, struct hns_roce_qp *hr_qp) { - struct hns_roce_qp_table *qp_table = &hr_dev->qp_table; + struct xarray *xa = &hr_dev->qp_table_xa; int ret; if (!qpn) return -EINVAL; hr_qp->qpn = qpn; - - spin_lock_irq(&qp_table->lock); - ret = radix_tree_insert(&hr_dev->qp_table_tree, - hr_qp->qpn & (hr_dev->caps.num_qps - 1), hr_qp); - spin_unlock_irq(&qp_table->lock); - if (ret) { - dev_err(hr_dev->dev, "QPC radix_tree_insert failed\n"); - goto err_put_irrl; - } - atomic_set(&hr_qp->refcount, 1); init_completion(&hr_qp->free); - return 0; - -err_put_irrl: + ret = xa_err(xa_store_irq(xa, hr_qp->qpn & (hr_dev->caps.num_qps - 1), + hr_qp, GFP_KERNEL)); + if (ret) { + dev_err(hr_dev->dev, "QPC xa_store failed\n"); + } return ret; } @@ -209,17 +198,9 @@ static int hns_roce_qp_alloc(struct hns_roce_dev *hr_dev, unsigned long qpn, } } - spin_lock_irq(&qp_table->lock); - ret = radix_tree_insert(&hr_dev->qp_table_tree, - hr_qp->qpn & (hr_dev->caps.num_qps - 1), hr_qp); - spin_unlock_irq(&qp_table->lock); - if (ret) { - dev_err(dev, "QPC radix_tree_insert failed\n"); + ret = hns_roce_gsi_qp_alloc(hr_dev, qpn, hr_qp); + if (ret) goto err_put_trrl; - } - - 
atomic_set(&hr_qp->refcount, 1); - init_completion(&hr_qp->free); return 0; @@ -239,13 +220,12 @@ static int hns_roce_qp_alloc(struct hns_roce_dev *hr_dev, unsigned long qpn, void hns_roce_qp_remove(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp) { - struct hns_roce_qp_table *qp_table = &hr_dev->qp_table; + struct xarray *xa = &hr_dev->qp_table_xa; unsigned long flags; - spin_lock_irqsave(&qp_table->lock, flags); - radix_tree_delete(&hr_dev->qp_table_tree, - hr_qp->qpn & (hr_dev->caps.num_qps - 1)); - spin_unlock_irqrestore(&qp_table->lock, flags); + xa_lock_irqsave(xa, flags); + __xa_erase(xa, hr_qp->qpn & (hr_dev->caps.num_qps - 1)); + xa_unlock_irqrestore(xa, flags); } EXPORT_SYMBOL_GPL(hns_roce_qp_remove); @@ -1133,8 +1113,7 @@ int hns_roce_init_qp_table(struct hns_roce_dev *hr_dev) int reserved_from_bot; int ret; - spin_lock_init(&qp_table->lock); - INIT_RADIX_TREE(&hr_dev->qp_table_tree, GFP_ATOMIC); + xa_init(&hr_dev->qp_table_xa); /* In hw v1, a port include two SQP, six ports total 12 */ if (hr_dev->caps.max_sq_sg <= 2) From patchwork Thu Feb 21 00:20:41 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822949 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4750213A4 for ; Thu, 21 Feb 2019 00:21:27 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2EEBC2FC7C for ; Thu, 21 Feb 2019 00:21:27 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2032F2FC84; Thu, 21 Feb 2019 00:21:27 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0E56D2FC7C for ; Thu, 21 Feb 2019 00:21:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726655AbfBUAVZ (ORCPT ); Wed, 20 Feb 2019 19:21:25 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52410 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726352AbfBUAVK (ORCPT ); Wed, 20 Feb 2019 19:21:10 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=Q/VQiNkTY/gtzJy/aQdp6uEh343UVCO28/j17Z8HsiI=; b=i7k7TYaMKi1qj7FGkHfDtzMF4 rGGvlbYTo2rV0aQuaSFsrTgzv9R5uunBos4q/nCGXLuDLFU4f3wjSKJUjYOUY1ASuj8Evl3/Lbm8q O1eSQ2VVp8F0YQc8zykbQeikQ1vi4HKbw9eYqM/+2YWF+1oz770Dldw/usHC/3LBBwQo/QnEfM+Bw bsmKp6bzBhHgOqJEMMO6CS6EjDJdYcOXsUuhjk7yvzCF/pscX5pZJmJmBtyZ9sjP0NISt2JmeEDn8 Fae2Vyjx+148zngEhZlPCX1oVbu7VnplfQpiZylAkyWzFju+V96c4fEHkkzo4t3mlJif+TfWN+CSm jvJA291PA==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6w-0005ub-Ar; Thu, 21 Feb 2019 
00:21:10 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 06/32] infiniband/core: Convert uverbs to XArray Date: Wed, 20 Feb 2019 16:20:41 -0800 Message-Id: <20190221002107.22625-7-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Use a direct xa_load lookup instead of hyperoptimising the lookup. No locking changes. Signed-off-by: Matthew Wilcox --- drivers/infiniband/core/rdma_core.h | 8 +- drivers/infiniband/core/uverbs_ioctl.c | 67 +++------ drivers/infiniband/core/uverbs_uapi.c | 191 ++++++++++++------------- 3 files changed, 109 insertions(+), 157 deletions(-) diff --git a/drivers/infiniband/core/rdma_core.h b/drivers/infiniband/core/rdma_core.h index 69f8db66925e..df3c6b5cbbdc 100644 --- a/drivers/infiniband/core/rdma_core.h +++ b/drivers/infiniband/core/rdma_core.h @@ -117,7 +117,7 @@ void release_ufile_idr_uobject(struct ib_uverbs_file *ufile); */ /* - * Depending on ID the slot pointer in the radix tree points at one of these + * Depending on ID the pointer in the xarray points at one of these * structs. */ @@ -148,8 +148,8 @@ struct uverbs_api_attr { }; struct uverbs_api { - /* radix tree contains struct uverbs_api_* pointers */ - struct radix_tree_root radix; + /* xarray contains struct uverbs_api_* pointers */ + struct xarray xa; enum rdma_driver_id driver_id; unsigned int num_write; @@ -172,7 +172,7 @@ uapi_get_object(struct uverbs_api *uapi, u16 object_id) if (object_id == UVERBS_IDR_ANY_OBJECT) return ERR_PTR(-ENOMSG); - res = radix_tree_lookup(&uapi->radix, uapi_key_obj(object_id)); + res = xa_load(&uapi->xa, uapi_key_obj(object_id)); if (!res) return ERR_PTR(-ENOENT); diff --git a/drivers/infiniband/core/uverbs_ioctl.c b/drivers/infiniband/core/uverbs_ioctl.c index 0ca04d224015..3c6994634e2b 100644 --- a/drivers/infiniband/core/uverbs_ioctl.c +++ b/drivers/infiniband/core/uverbs_ioctl.c @@ -47,10 +47,8 @@ struct bundle_priv { size_t internal_avail; size_t internal_used; - struct radix_tree_root *radix; + struct xarray *xa; const struct uverbs_api_ioctl_method *method_elm; - void __rcu **radix_slots; - unsigned long radix_slots_len; u32 method_key; struct ib_uverbs_attr __user *user_attrs; @@ -354,32 +352,10 @@ static int uverbs_process_attr(struct bundle_priv *pbundle, return 0; } -/* - * We search the radix tree with the method prefix and now we want to fast - * search the suffix bits to get a particular attribute pointer. It is not - * totally clear to me if this breaks the radix tree encasulation or not, but - * it uses the iter data to determine if the method iter points at the same - * chunk that will store the attribute, if so it just derefs it directly. By - * construction in most kernel configs the method and attrs will all fit in a - * single radix chunk, so in most cases this will have no search. Other cases - * this falls back to a full search. 
- */ -static void __rcu **uapi_get_attr_for_method(struct bundle_priv *pbundle, - u32 attr_key) +static const struct uverbs_api_attr *uapi_get_attr_for_method( + struct bundle_priv *pbundle, u32 attr_key) { - void __rcu **slot; - - if (likely(attr_key < pbundle->radix_slots_len)) { - void *entry; - - slot = pbundle->radix_slots + attr_key; - entry = rcu_dereference_raw(*slot); - if (likely(!radix_tree_is_internal_node(entry) && entry)) - return slot; - } - - return radix_tree_lookup_slot(pbundle->radix, - pbundle->method_key | attr_key); + return xa_load(pbundle->xa, pbundle->method_key | attr_key); } static int uverbs_set_attr(struct bundle_priv *pbundle, @@ -388,11 +364,10 @@ static int uverbs_set_attr(struct bundle_priv *pbundle, u32 attr_key = uapi_key_attr(uattr->attr_id); u32 attr_bkey = uapi_bkey_attr(attr_key); const struct uverbs_api_attr *attr; - void __rcu **slot; int ret; - slot = uapi_get_attr_for_method(pbundle, attr_key); - if (!slot) { + attr = uapi_get_attr_for_method(pbundle, attr_key); + if (!attr) { /* * Kernel does not support the attribute but user-space says it * is mandatory @@ -401,7 +376,6 @@ static int uverbs_set_attr(struct bundle_priv *pbundle, return -EPROTONOSUPPORT; return 0; } - attr = rcu_dereference_protected(*slot, true); /* Reject duplicate attributes from user-space */ if (test_bit(attr_bkey, pbundle->bundle.attr_present)) @@ -520,17 +494,13 @@ static int bundle_destroy(struct bundle_priv *pbundle, bool commit) i + 1)) < key_bitmap_len) { struct uverbs_attr *attr = &pbundle->bundle.attrs[i]; const struct uverbs_api_attr *attr_uapi; - void __rcu **slot; int current_ret; - slot = uapi_get_attr_for_method( - pbundle, - pbundle->method_key | uapi_bkey_to_key_attr(i)); - if (WARN_ON(!slot)) + attr_uapi = uapi_get_attr_for_method(pbundle, + pbundle->method_key | uapi_bkey_to_key_attr(i)); + if (WARN_ON(!attr_uapi)) continue; - attr_uapi = rcu_dereference_protected(*slot, true); - if (attr_uapi->spec.type == UVERBS_ATTR_TYPE_IDRS_ARRAY) { current_ret = uverbs_free_idrs_array( attr_uapi, &attr->objs_arr_attr, commit); @@ -555,23 +525,20 @@ static int ib_uverbs_cmd_verbs(struct ib_uverbs_file *ufile, { const struct uverbs_api_ioctl_method *method_elm; struct uverbs_api *uapi = ufile->device->uapi; - struct radix_tree_iter attrs_iter; struct bundle_priv *pbundle; struct bundle_priv onstack; - void __rcu **slot; + u32 method_id; int destroy_ret; int ret; if (unlikely(hdr->driver_id != uapi->driver_id)) return -EINVAL; - slot = radix_tree_iter_lookup( - &uapi->radix, &attrs_iter, - uapi_key_obj(hdr->object_id) | - uapi_key_ioctl_method(hdr->method_id)); - if (unlikely(!slot)) + method_id = uapi_key_obj(hdr->object_id) | + uapi_key_ioctl_method(hdr->method_id); + method_elm = xa_load(&uapi->xa, method_id); + if (unlikely(!method_elm)) return -EPROTONOSUPPORT; - method_elm = rcu_dereference_protected(*slot, true); if (!method_elm->use_stack) { pbundle = kmalloc(method_elm->bundle_size, GFP_KERNEL); @@ -590,11 +557,9 @@ static int ib_uverbs_cmd_verbs(struct ib_uverbs_file *ufile, /* Space for the pbundle->bundle.attrs flex array */ pbundle->method_elm = method_elm; - pbundle->method_key = attrs_iter.index; + pbundle->method_key = method_id; pbundle->bundle.ufile = ufile; - pbundle->radix = &uapi->radix; - pbundle->radix_slots = slot; - pbundle->radix_slots_len = radix_tree_chunk_size(&attrs_iter); + pbundle->xa = &uapi->xa; pbundle->user_attrs = user_attrs; pbundle->internal_used = ALIGN(pbundle->method_elm->key_bitmap_len * diff --git 
a/drivers/infiniband/core/uverbs_uapi.c b/drivers/infiniband/core/uverbs_uapi.c index 9ae08e4b78a3..cd567c062d43 100644 --- a/drivers/infiniband/core/uverbs_uapi.c +++ b/drivers/infiniband/core/uverbs_uapi.c @@ -15,41 +15,46 @@ static int ib_uverbs_notsupp(struct uverbs_attr_bundle *attrs) static void *uapi_add_elm(struct uverbs_api *uapi, u32 key, size_t alloc_size) { - void *elm; - int rc; + void *elm, *curr; if (key == UVERBS_API_KEY_ERR) return ERR_PTR(-EOVERFLOW); elm = kzalloc(alloc_size, GFP_KERNEL); - rc = radix_tree_insert(&uapi->radix, key, elm); - if (rc) { - kfree(elm); - return ERR_PTR(rc); - } - - return elm; + if (!elm) + return ERR_PTR(-ENOMEM); + curr = xa_cmpxchg(&uapi->xa, key, NULL, elm, GFP_KERNEL); + if (!curr) + return elm; + kfree(elm); + if (curr == XA_ERROR(-ENOMEM)) + return ERR_PTR(-ENOMEM); + return ERR_PTR(-EEXIST); } static void *uapi_add_get_elm(struct uverbs_api *uapi, u32 key, size_t alloc_size, bool *exists) { - void *elm; + void *elm, *curr; - elm = uapi_add_elm(uapi, key, alloc_size); - if (!IS_ERR(elm)) { + if (key == UVERBS_API_KEY_ERR) + return ERR_PTR(-EOVERFLOW); + + elm = kzalloc(alloc_size, GFP_KERNEL); + if (!elm) + return ERR_PTR(-ENOMEM); + curr = xa_cmpxchg(&uapi->xa, key, NULL, elm, GFP_KERNEL); + if (!curr) { *exists = false; return elm; } - if (elm != ERR_PTR(-EEXIST)) - return elm; + kfree(elm); + if (curr == XA_ERROR(-ENOMEM)) + return ERR_PTR(-ENOMEM); - elm = radix_tree_lookup(&uapi->radix, key); - if (WARN_ON(!elm)) - return ERR_PTR(-EINVAL); *exists = true; - return elm; + return curr; } static int uapi_create_write(struct uverbs_api *uapi, @@ -349,22 +354,19 @@ uapi_finalize_ioctl_method(struct uverbs_api *uapi, struct uverbs_api_ioctl_method *method_elm, u32 method_key) { - struct radix_tree_iter iter; + XA_STATE(xas, &uapi->xa, uapi_key_attrs_start(method_key)); + struct uverbs_api_attr *elm; unsigned int num_attrs = 0; unsigned int max_bkey = 0; bool single_uobj = false; - void __rcu **slot; method_elm->destroy_bkey = UVERBS_API_ATTR_BKEY_LEN; - radix_tree_for_each_slot (slot, &uapi->radix, &iter, - uapi_key_attrs_start(method_key)) { - struct uverbs_api_attr *elm = - rcu_dereference_protected(*slot, true); - u32 attr_key = iter.index & UVERBS_API_ATTR_KEY_MASK; + xas_for_each(&xas, elm, ULONG_MAX) { + u32 attr_key = xas.xa_index & UVERBS_API_ATTR_KEY_MASK; u32 attr_bkey = uapi_bkey_attr(attr_key); u8 type = elm->spec.type; - if (uapi_key_attr_to_ioctl_method(iter.index) != + if (uapi_key_attr_to_ioctl_method(xas.xa_index) != uapi_key_attr_to_ioctl_method(method_key)) break; @@ -413,29 +415,26 @@ static int uapi_finalize(struct uverbs_api *uapi) const struct uverbs_api_write_method **data; unsigned long max_write_ex = 0; unsigned long max_write = 0; - struct radix_tree_iter iter; - void __rcu **slot; + XA_STATE(xas, &uapi->xa, 0); + struct uverbs_api_ioctl_method *method_elm; int rc; int i; - radix_tree_for_each_slot (slot, &uapi->radix, &iter, 0) { - struct uverbs_api_ioctl_method *method_elm = - rcu_dereference_protected(*slot, true); - - if (uapi_key_is_ioctl_method(iter.index)) { + xas_for_each(&xas, method_elm, ULONG_MAX) { + if (uapi_key_is_ioctl_method(xas.xa_index)) { rc = uapi_finalize_ioctl_method(uapi, method_elm, - iter.index); + xas.xa_index); if (rc) return rc; } - if (uapi_key_is_write_method(iter.index)) - max_write = max(max_write, - iter.index & UVERBS_API_ATTR_KEY_MASK); - if (uapi_key_is_write_ex_method(iter.index)) + if (uapi_key_is_write_method(xas.xa_index)) + max_write = max(max_write, xas.xa_index & + 
UVERBS_API_ATTR_KEY_MASK); + if (uapi_key_is_write_ex_method(xas.xa_index)) max_write_ex = - max(max_write_ex, - iter.index & UVERBS_API_ATTR_KEY_MASK); + max(max_write_ex, xas.xa_index & + UVERBS_API_ATTR_KEY_MASK); } uapi->notsupp_method.handler = ib_uverbs_notsupp; @@ -448,15 +447,16 @@ static int uapi_finalize(struct uverbs_api *uapi) uapi->write_methods = data; uapi->write_ex_methods = data + uapi->num_write; - radix_tree_for_each_slot (slot, &uapi->radix, &iter, 0) { - if (uapi_key_is_write_method(iter.index)) - uapi->write_methods[iter.index & + xas_set(&xas, 0); + xas_for_each(&xas, method_elm, ULONG_MAX) { + if (uapi_key_is_write_method(xas.xa_index)) + uapi->write_methods[xas.xa_index & UVERBS_API_ATTR_KEY_MASK] = - rcu_dereference_protected(*slot, true); - if (uapi_key_is_write_ex_method(iter.index)) - uapi->write_ex_methods[iter.index & + (const void *)method_elm; + if (uapi_key_is_write_ex_method(xas.xa_index)) + uapi->write_ex_methods[xas.xa_index & UVERBS_API_ATTR_KEY_MASK] = - rcu_dereference_protected(*slot, true); + (const void *)method_elm; } return 0; @@ -464,14 +464,12 @@ static int uapi_finalize(struct uverbs_api *uapi) static void uapi_remove_range(struct uverbs_api *uapi, u32 start, u32 last) { - struct radix_tree_iter iter; - void __rcu **slot; - - radix_tree_for_each_slot (slot, &uapi->radix, &iter, start) { - if (iter.index > last) - return; - kfree(rcu_dereference_protected(*slot, true)); - radix_tree_iter_delete(&uapi->radix, &iter, slot); + XA_STATE(xas, &uapi->xa, start); + void *entry; + + xas_for_each(&xas, entry, last) { + kfree(entry); + xas_store(&xas, NULL); } } @@ -518,56 +516,50 @@ static void uapi_key_okay(u32 key) static void uapi_finalize_disable(struct uverbs_api *uapi) { - struct radix_tree_iter iter; + XA_STATE(xas, &uapi->xa, 0); u32 starting_key = 0; bool scan_again = false; - void __rcu **slot; + void *entry; again: - radix_tree_for_each_slot (slot, &uapi->radix, &iter, starting_key) { - uapi_key_okay(iter.index); + xas_for_each(&xas, entry, ULONG_MAX) { + uapi_key_okay(xas.xa_index); - if (uapi_key_is_object(iter.index)) { - struct uverbs_api_object *obj_elm = - rcu_dereference_protected(*slot, true); + if (uapi_key_is_object(xas.xa_index)) { + struct uverbs_api_object *obj_elm = entry; if (obj_elm->disabled) { /* Have to check all the attrs again */ scan_again = true; - starting_key = iter.index; - uapi_remove_object(uapi, iter.index); - goto again; + starting_key = xas.xa_index; + uapi_remove_object(uapi, xas.xa_index); } continue; } - if (uapi_key_is_ioctl_method(iter.index)) { - struct uverbs_api_ioctl_method *method_elm = - rcu_dereference_protected(*slot, true); + if (uapi_key_is_ioctl_method(xas.xa_index)) { + struct uverbs_api_ioctl_method *method_elm = entry; if (method_elm->disabled) { - starting_key = iter.index; - uapi_remove_method(uapi, iter.index); - goto again; + starting_key = xas.xa_index; + uapi_remove_method(uapi, xas.xa_index); } continue; } - if (uapi_key_is_write_method(iter.index) || - uapi_key_is_write_ex_method(iter.index)) { - struct uverbs_api_write_method *method_elm = - rcu_dereference_protected(*slot, true); + if (uapi_key_is_write_method(xas.xa_index) || + uapi_key_is_write_ex_method(xas.xa_index)) { + struct uverbs_api_write_method *method_elm = entry; if (method_elm->disabled) { kfree(method_elm); - radix_tree_iter_delete(&uapi->radix, &iter, slot); + xas_store(&xas, NULL); } continue; } - if (uapi_key_is_attr(iter.index)) { - struct uverbs_api_attr *attr_elm = - rcu_dereference_protected(*slot, true); + if 
(uapi_key_is_attr(xas.xa_index)) { + struct uverbs_api_attr *attr_elm = entry; const struct uverbs_api_object *tmp_obj; u32 obj_key; @@ -590,19 +582,19 @@ static void uapi_finalize_disable(struct uverbs_api *uapi) continue; } - starting_key = iter.index; - uapi_remove_method( - uapi, - iter.index & (UVERBS_API_OBJ_KEY_MASK | + starting_key = xas.xa_index; + uapi_remove_method(uapi, + xas.xa_index & (UVERBS_API_OBJ_KEY_MASK | UVERBS_API_METHOD_KEY_MASK)); - goto again; + continue; } - WARN_ON(false); + WARN_ON(true); } if (!scan_again) return; + xas_set(&xas, starting_key); scan_again = false; starting_key = 0; goto again; @@ -639,7 +631,7 @@ struct uverbs_api *uverbs_alloc_api(struct ib_device *ibdev) if (!uapi) return ERR_PTR(-ENOMEM); - INIT_RADIX_TREE(&uapi->radix, GFP_KERNEL); + xa_init(&uapi->xa); uapi->driver_id = ibdev->driver_id; rc = uapi_merge_def(uapi, ibdev, uverbs_core_api, false); @@ -673,16 +665,13 @@ struct uverbs_api *uverbs_alloc_api(struct ib_device *ibdev) void uverbs_disassociate_api_pre(struct ib_uverbs_device *uverbs_dev) { struct uverbs_api *uapi = uverbs_dev->uapi; - struct radix_tree_iter iter; - void __rcu **slot; + XA_STATE(xas, &uapi->xa, 0); + struct uverbs_api_ioctl_method *method_elm; rcu_assign_pointer(uverbs_dev->ib_dev, NULL); - radix_tree_for_each_slot (slot, &uapi->radix, &iter, 0) { - if (uapi_key_is_ioctl_method(iter.index)) { - struct uverbs_api_ioctl_method *method_elm = - rcu_dereference_protected(*slot, true); - + xas_for_each(&xas, method_elm, ULONG_MAX) { + if (uapi_key_is_ioctl_method(xas.xa_index)) { if (method_elm->driver_method) rcu_assign_pointer(method_elm->handler, NULL); } @@ -698,13 +687,12 @@ void uverbs_disassociate_api_pre(struct ib_uverbs_device *uverbs_dev) */ void uverbs_disassociate_api(struct uverbs_api *uapi) { - struct radix_tree_iter iter; - void __rcu **slot; + XA_STATE(xas, &uapi->xa, 0); + void *entry; - radix_tree_for_each_slot (slot, &uapi->radix, &iter, 0) { - if (uapi_key_is_object(iter.index)) { - struct uverbs_api_object *object_elm = - rcu_dereference_protected(*slot, true); + xas_for_each(&xas, entry, ULONG_MAX) { + if (uapi_key_is_object(xas.xa_index)) { + struct uverbs_api_object *object_elm = entry; /* * Some type_attrs are in the driver module. We don't @@ -712,9 +700,8 @@ void uverbs_disassociate_api(struct uverbs_api *uapi) * no use of this after disassociate. 
 			 */
 			object_elm->type_attrs = NULL;
-		} else if (uapi_key_is_attr(iter.index)) {
-			struct uverbs_api_attr *elm =
-				rcu_dereference_protected(*slot, true);
+		} else if (uapi_key_is_attr(xas.xa_index)) {
+			struct uverbs_api_attr *elm = entry;
 
 			if (elm->spec.type == UVERBS_ATTR_TYPE_ENUM_IN)
 				elm->spec.u2.enum_def.ids = NULL;

From patchwork Thu Feb 21 00:20:42 2019
From: Matthew Wilcox
To: Jason Gunthorpe
Cc: Matthew Wilcox, linux-rdma@vger.kernel.org
Subject: [PATCH 07/32] IB/mad: Convert ib_mad_clients to XArray
Date: Wed, 20 Feb 2019 16:20:42 -0800
Message-Id: <20190221002107.22625-8-willy@infradead.org>
In-Reply-To: <20190221002107.22625-1-willy@infradead.org>
References: <20190221002107.22625-1-willy@infradead.org>

Pull the allocation function out into its own function to reduce the
length of ib_register_mad_agent() a little and keep all the allocation
logic together.
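The diff below replaces idr_alloc_cyclic() with xa_alloc_cyclic(). As a rough
illustration of that API (a sketch with hypothetical names, not the MAD
layer's code), cyclic allocation hands out increasing IDs within a limit,
remembering the last one in a caller-provided cursor:

#include <linux/xarray.h>

/* IDs start at 1; index 0 stays unused, as with DEFINE_XARRAY_ALLOC1. */
static DEFINE_XARRAY_ALLOC1(clients);
static u32 clients_next;

static int client_register(void *client, u32 *out_id)
{
	/* Returns 0 on success, 1 if the ID space wrapped, or a negative
	 * errno; *out_id receives the allocated ID in [1, 2^24 - 1]. */
	return xa_alloc_cyclic(&clients, out_id, client,
			       XA_LIMIT(1, (1 << 24) - 1), &clients_next,
			       GFP_KERNEL);
}

static void client_unregister(u32 id)
{
	xa_erase(&clients, id);
}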
Signed-off-by: Matthew Wilcox --- drivers/infiniband/core/mad.c | 39 +++++++++++++---------------------- 1 file changed, 14 insertions(+), 25 deletions(-) diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c index 7870823bac47..df43bf49e42f 100644 --- a/drivers/infiniband/core/mad.c +++ b/drivers/infiniband/core/mad.c @@ -38,10 +38,10 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include -#include #include #include #include +#include #include #include "mad_priv.h" @@ -59,12 +59,9 @@ MODULE_PARM_DESC(send_queue_size, "Size of send queue in number of work requests module_param_named(recv_queue_size, mad_recvq_size, int, 0444); MODULE_PARM_DESC(recv_queue_size, "Size of receive queue in number of work requests"); -/* - * The mlx4 driver uses the top byte to distinguish which virtual function - * generated the MAD, so we must avoid using it. - */ -#define AGENT_ID_LIMIT (1 << 24) -static DEFINE_IDR(ib_mad_clients); +/* Client ID 0 is used for snoop-only clients */ +static DEFINE_XARRAY_ALLOC1(ib_mad_clients); +static u32 ib_mad_client_next; static struct list_head ib_mad_port_list; /* Port list lock */ @@ -389,18 +386,17 @@ struct ib_mad_agent *ib_register_mad_agent(struct ib_device *device, goto error4; } - idr_preload(GFP_KERNEL); - idr_lock(&ib_mad_clients); - ret2 = idr_alloc_cyclic(&ib_mad_clients, mad_agent_priv, 0, - AGENT_ID_LIMIT, GFP_ATOMIC); - idr_unlock(&ib_mad_clients); - idr_preload_end(); - + /* + * The mlx4 driver uses the top byte to distinguish which virtual + * function generated the MAD, so we must avoid using it. + */ + ret2 = xa_alloc_cyclic(&ib_mad_clients, &mad_agent_priv->agent.hi_tid, + mad_agent_priv, XA_LIMIT(0, (1 << 24) - 1), + &ib_mad_client_next, GFP_KERNEL); if (ret2 < 0) { ret = ERR_PTR(ret2); goto error5; } - mad_agent_priv->agent.hi_tid = ret2; /* * Make sure MAD registration (if supplied) @@ -448,9 +444,7 @@ struct ib_mad_agent *ib_register_mad_agent(struct ib_device *device, return &mad_agent_priv->agent; error6: spin_unlock_irq(&port_priv->reg_lock); - idr_lock(&ib_mad_clients); - idr_remove(&ib_mad_clients, mad_agent_priv->agent.hi_tid); - idr_unlock(&ib_mad_clients); + xa_erase(&ib_mad_clients, mad_agent_priv->agent.hi_tid); error5: ib_mad_agent_security_cleanup(&mad_agent_priv->agent); error4: @@ -614,9 +608,7 @@ static void unregister_mad_agent(struct ib_mad_agent_private *mad_agent_priv) spin_lock_irq(&port_priv->reg_lock); remove_mad_reg_req(mad_agent_priv); spin_unlock_irq(&port_priv->reg_lock); - idr_lock(&ib_mad_clients); - idr_remove(&ib_mad_clients, mad_agent_priv->agent.hi_tid); - idr_unlock(&ib_mad_clients); + xa_erase(&ib_mad_clients, mad_agent_priv->agent.hi_tid); flush_workqueue(port_priv->wq); ib_cancel_rmpp_recvs(mad_agent_priv); @@ -1756,7 +1748,7 @@ find_mad_agent(struct ib_mad_port_private *port_priv, */ hi_tid = be64_to_cpu(mad_hdr->tid) >> 32; rcu_read_lock(); - mad_agent = idr_find(&ib_mad_clients, hi_tid); + mad_agent = xa_load(&ib_mad_clients, hi_tid); if (mad_agent && !atomic_inc_not_zero(&mad_agent->refcount)) mad_agent = NULL; rcu_read_unlock(); @@ -3356,9 +3348,6 @@ int ib_mad_init(void) INIT_LIST_HEAD(&ib_mad_port_list); - /* Client ID 0 is used for snoop-only clients */ - idr_alloc(&ib_mad_clients, NULL, 0, 0, GFP_KERNEL); - if (ib_register_client(&mad_client)) { pr_err("Couldn't register ib_mad client\n"); return -EINVAL; From patchwork Thu Feb 21 00:20:43 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" 
From: Matthew Wilcox
To: Jason Gunthorpe
Cc: Matthew Wilcox, linux-rdma@vger.kernel.org
Subject: [PATCH 08/32] RDMA/cm: Convert local_id_table to XArray
Date: Wed, 20 Feb 2019 16:20:43 -0800
Message-Id: <20190221002107.22625-9-willy@infradead.org>
In-Reply-To: <20190221002107.22625-1-willy@infradead.org>
References: <20190221002107.22625-1-willy@infradead.org>

Also introduce cm_local_id() to reduce the amount of boilerplate when
converting a local ID to an XArray index.
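As a sketch of the pattern this patch introduces (hypothetical
table/make_id/lookup names; the real code is in cm_alloc_id(), cm_local_id()
and cm_free_id() in the diff below), the index stored in the XArray is the
externally visible ID XORed with a random operand chosen at init time:

#include <linux/xarray.h>

static DEFINE_XARRAY_FLAGS(table, XA_FLAGS_ALLOC | XA_FLAGS_LOCK_IRQ);
static u32 table_next;
static u32 random_operand;	/* filled with get_random_bytes() at init */

static u32 id_to_index(u32 id)
{
	return id ^ random_operand;	/* same role as cm_local_id() */
}

static int make_id(void *priv, u32 *out_id)
{
	u32 index;
	int err;

	err = xa_alloc_cyclic_irq(&table, &index, priv, xa_limit_32b,
				  &table_next, GFP_KERNEL);
	if (err < 0)
		return err;

	*out_id = index ^ random_operand;
	return 0;
}

static void *lookup(u32 id)
{
	return xa_load(&table, id_to_index(id));
}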
Signed-off-by: Matthew Wilcox --- drivers/infiniband/core/cm.c | 40 +++++++++++++++--------------------- 1 file changed, 17 insertions(+), 23 deletions(-) diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c index 37980c7564c0..b97592b44ce2 100644 --- a/drivers/infiniband/core/cm.c +++ b/drivers/infiniband/core/cm.c @@ -124,7 +124,8 @@ static struct ib_cm { struct rb_root remote_qp_table; struct rb_root remote_id_table; struct rb_root remote_sidr_table; - struct idr local_id_table; + struct xarray local_id_table; + u32 local_id_next; __be32 random_id_operand; struct list_head timewait_list; struct workqueue_struct *wq; @@ -598,35 +599,31 @@ static int cm_init_av_by_path(struct sa_path_rec *path, static int cm_alloc_id(struct cm_id_private *cm_id_priv) { - unsigned long flags; - int id; - - idr_preload(GFP_KERNEL); - spin_lock_irqsave(&cm.lock, flags); + int err; + u32 id; - id = idr_alloc_cyclic(&cm.local_id_table, cm_id_priv, 0, 0, GFP_NOWAIT); - - spin_unlock_irqrestore(&cm.lock, flags); - idr_preload_end(); + err = xa_alloc_cyclic_irq(&cm.local_id_table, &id, cm_id_priv, + xa_limit_32b, &cm.local_id_next, GFP_KERNEL); cm_id_priv->id.local_id = (__force __be32)id ^ cm.random_id_operand; - return id < 0 ? id : 0; + return err; +} + +static u32 cm_local_id(__be32 local_id) +{ + return (__force u32) (local_id ^ cm.random_id_operand); } static void cm_free_id(__be32 local_id) { - spin_lock_irq(&cm.lock); - idr_remove(&cm.local_id_table, - (__force int) (local_id ^ cm.random_id_operand)); - spin_unlock_irq(&cm.lock); + xa_erase_irq(&cm.local_id_table, cm_local_id(local_id)); } static struct cm_id_private * cm_get_id(__be32 local_id, __be32 remote_id) { struct cm_id_private *cm_id_priv; - cm_id_priv = idr_find(&cm.local_id_table, - (__force int) (local_id ^ cm.random_id_operand)); + cm_id_priv = xa_load(&cm.local_id_table, cm_local_id(local_id)); if (cm_id_priv) { if (cm_id_priv->id.remote_id == remote_id) atomic_inc(&cm_id_priv->refcount); @@ -2824,9 +2821,8 @@ static struct cm_id_private * cm_acquire_rejected_id(struct cm_rej_msg *rej_msg) spin_unlock_irq(&cm.lock); return NULL; } - cm_id_priv = idr_find(&cm.local_id_table, (__force int) - (timewait_info->work.local_id ^ - cm.random_id_operand)); + cm_id_priv = xa_load(&cm.local_id_table, + cm_local_id(timewait_info->work.local_id)); if (cm_id_priv) { if (cm_id_priv->id.remote_id == remote_id) atomic_inc(&cm_id_priv->refcount); @@ -4513,7 +4509,7 @@ static int __init ib_cm_init(void) cm.remote_id_table = RB_ROOT; cm.remote_qp_table = RB_ROOT; cm.remote_sidr_table = RB_ROOT; - idr_init(&cm.local_id_table); + xa_init_flags(&cm.local_id_table, XA_FLAGS_ALLOC | XA_FLAGS_LOCK_IRQ); get_random_bytes(&cm.random_id_operand, sizeof cm.random_id_operand); INIT_LIST_HEAD(&cm.timewait_list); @@ -4539,7 +4535,6 @@ static int __init ib_cm_init(void) error2: class_unregister(&cm_class); error1: - idr_destroy(&cm.local_id_table); return ret; } @@ -4561,7 +4556,6 @@ static void __exit ib_cm_cleanup(void) } class_unregister(&cm_class); - idr_destroy(&cm.local_id_table); } module_init(ib_cm_init); From patchwork Thu Feb 21 00:20:44 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822895 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C48571805 for ; Thu, 21 Feb 2019 
00:21:13 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B33452FC4D for ; Thu, 21 Feb 2019 00:21:13 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A7CBD2FC81; Thu, 21 Feb 2019 00:21:13 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 339482FC7C for ; Thu, 21 Feb 2019 00:21:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726632AbfBUAVM (ORCPT ); Wed, 20 Feb 2019 19:21:12 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52422 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726669AbfBUAVL (ORCPT ); Wed, 20 Feb 2019 19:21:11 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=ZOhtV3UYvcNvZVcVzXeEQV0B8heFhUA8cOhRUcGdPmw=; b=nu2pxodPpDV7r5gs9wydEQUVn Q0Q2HtlqXv/20ywYZDdEldIWyjA654PtWm0ucZmDsACtE+8HSCqZcA1kStUJf4Toh8beyTx7rHj7q AXGft1KVc6LrP0HEr4yo+YoBZE9SrqIlXokMCwCzNzphwJ08yase/shCsikBY+k9dv2edA8KmRWAH mrqtePsgameXCiRfFaUVRqlWUq8fgV9bUQIQHmOCGIU5wRccDCWF/pwYt9pPiW0K/Akdm0MH03wiv 2/Q9h1f/yZyHEv4VFyZXAcwHqU2v8jG+eD8ycWnx2g403KdytDHtbxZbbZWByqk+ngT5gfCqP3ZSJ S66vV3IgA==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6w-0005uy-Q7; Thu, 21 Feb 2019 00:21:10 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 09/32] mlx4: Convert pv_id_table to XArray Date: Wed, 20 Feb 2019 16:20:44 -0800 Message-Id: <20190221002107.22625-10-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox Acked-by: Leon Romanovsky --- drivers/infiniband/hw/mlx4/cm.c | 36 ++++++++++------------------ drivers/infiniband/hw/mlx4/mlx4_ib.h | 5 ++-- 2 files changed, 16 insertions(+), 25 deletions(-) diff --git a/drivers/infiniband/hw/mlx4/cm.c b/drivers/infiniband/hw/mlx4/cm.c index fedaf8260105..c0e53d8b7c95 100644 --- a/drivers/infiniband/hw/mlx4/cm.c +++ b/drivers/infiniband/hw/mlx4/cm.c @@ -168,20 +168,17 @@ static void id_map_ent_timeout(struct work_struct *work) { struct delayed_work *delay = to_delayed_work(work); struct id_map_entry *ent = container_of(delay, struct id_map_entry, timeout); - struct id_map_entry *db_ent, *found_ent; + struct id_map_entry *found_ent; struct mlx4_ib_dev *dev = ent->dev; struct mlx4_ib_sriov *sriov = &dev->sriov; struct rb_root *sl_id_map = &sriov->sl_id_map; - int pv_id = (int) ent->pv_cm_id; spin_lock(&sriov->id_map_lock); - db_ent = (struct id_map_entry 
*)idr_find(&sriov->pv_id_table, pv_id); - if (!db_ent) + if (!xa_erase(&sriov->pv_id_table, ent->pv_cm_id)) goto out; found_ent = id_map_find_by_sl_id(&dev->ib_dev, ent->slave_id, ent->sl_cm_id); if (found_ent && found_ent == ent) rb_erase(&found_ent->node, sl_id_map); - idr_remove(&sriov->pv_id_table, pv_id); out: list_del(&ent->list); @@ -196,13 +193,12 @@ static void id_map_find_del(struct ib_device *ibdev, int pv_cm_id) struct id_map_entry *ent, *found_ent; spin_lock(&sriov->id_map_lock); - ent = (struct id_map_entry *)idr_find(&sriov->pv_id_table, pv_cm_id); + ent = xa_erase(&sriov->pv_id_table, pv_cm_id); if (!ent) goto out; found_ent = id_map_find_by_sl_id(ibdev, ent->slave_id, ent->sl_cm_id); if (found_ent && found_ent == ent) rb_erase(&found_ent->node, sl_id_map); - idr_remove(&sriov->pv_id_table, pv_cm_id); out: spin_unlock(&sriov->id_map_lock); } @@ -256,25 +252,19 @@ id_map_alloc(struct ib_device *ibdev, int slave_id, u32 sl_cm_id) ent->dev = to_mdev(ibdev); INIT_DELAYED_WORK(&ent->timeout, id_map_ent_timeout); - idr_preload(GFP_KERNEL); - spin_lock(&to_mdev(ibdev)->sriov.id_map_lock); - - ret = idr_alloc_cyclic(&sriov->pv_id_table, ent, 0, 0, GFP_NOWAIT); + ret = xa_alloc_cyclic(&sriov->pv_id_table, &ent->pv_cm_id, ent, + xa_limit_32b, &sriov->pv_id_next, GFP_KERNEL); if (ret >= 0) { - ent->pv_cm_id = (u32)ret; + spin_lock(&sriov->id_map_lock); sl_id_map_add(ibdev, ent); list_add_tail(&ent->list, &sriov->cm_list); - } - - spin_unlock(&sriov->id_map_lock); - idr_preload_end(); - - if (ret >= 0) + spin_unlock(&sriov->id_map_lock); return ent; + } /*error flow*/ kfree(ent); - mlx4_ib_warn(ibdev, "No more space in the idr (err:0x%x)\n", ret); + mlx4_ib_warn(ibdev, "Allocation failed (err:0x%x)\n", ret); return ERR_PTR(-ENOMEM); } @@ -290,7 +280,7 @@ id_map_get(struct ib_device *ibdev, int *pv_cm_id, int slave_id, int sl_cm_id) if (ent) *pv_cm_id = (int) ent->pv_cm_id; } else - ent = (struct id_map_entry *)idr_find(&sriov->pv_id_table, *pv_cm_id); + ent = xa_load(&sriov->pv_id_table, *pv_cm_id); spin_unlock(&sriov->id_map_lock); return ent; @@ -407,7 +397,7 @@ void mlx4_ib_cm_paravirt_init(struct mlx4_ib_dev *dev) spin_lock_init(&dev->sriov.id_map_lock); INIT_LIST_HEAD(&dev->sriov.cm_list); dev->sriov.sl_id_map = RB_ROOT; - idr_init(&dev->sriov.pv_id_table); + xa_init_flags(&dev->sriov.pv_id_table, XA_FLAGS_ALLOC); } /* slave = -1 ==> all slaves */ @@ -444,7 +434,7 @@ void mlx4_ib_cm_paravirt_clean(struct mlx4_ib_dev *dev, int slave) struct id_map_entry, node); rb_erase(&ent->node, sl_id_map); - idr_remove(&sriov->pv_id_table, (int) ent->pv_cm_id); + xa_erase(&sriov->pv_id_table, ent->pv_cm_id); } list_splice_init(&dev->sriov.cm_list, &lh); } else { @@ -460,7 +450,7 @@ void mlx4_ib_cm_paravirt_clean(struct mlx4_ib_dev *dev, int slave) /* remove those nodes from databases */ list_for_each_entry_safe(map, tmp_map, &lh, list) { rb_erase(&map->node, sl_id_map); - idr_remove(&sriov->pv_id_table, (int) map->pv_cm_id); + xa_erase(&sriov->pv_id_table, map->pv_cm_id); } /* add remaining nodes from cm_list */ diff --git a/drivers/infiniband/hw/mlx4/mlx4_ib.h b/drivers/infiniband/hw/mlx4/mlx4_ib.h index e491f3eda6e7..4c774304cc63 100644 --- a/drivers/infiniband/hw/mlx4/mlx4_ib.h +++ b/drivers/infiniband/hw/mlx4/mlx4_ib.h @@ -492,10 +492,11 @@ struct mlx4_ib_sriov { struct mlx4_sriov_alias_guid alias_guid; /* CM paravirtualization fields */ - struct list_head cm_list; + struct xarray pv_id_table; + u32 pv_id_next; spinlock_t id_map_lock; struct rb_root sl_id_map; - struct idr pv_id_table; + 
struct list_head cm_list; }; struct gid_cache_context { From patchwork Thu Feb 21 00:20:45 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822929 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9D44813A4 for ; Thu, 21 Feb 2019 00:21:22 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8C5B62FC4D for ; Thu, 21 Feb 2019 00:21:22 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 80F432FC82; Thu, 21 Feb 2019 00:21:22 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E95BD2FC4D for ; Thu, 21 Feb 2019 00:21:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726669AbfBUAVM (ORCPT ); Wed, 20 Feb 2019 19:21:12 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52428 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726412AbfBUAVL (ORCPT ); Wed, 20 Feb 2019 19:21:11 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=g/M7n7HnwXlYn+ZNeYk5dKuNtLSWZYCLcI2UHCznirk=; b=JZ957IfII80NBrXiIGUyOGwA/ HuXWMRAfcAJVU287xj50Z5X9NgmAKyqwEvhiDSs6UWKKkjSYTnOUEP31qKm6w+4J3rOHw03MAHLGG ihH7we5lFuqCd1ouoSAAolhPJww5HDWl4LmsygxRu1wjIYz6SvLrMK7lkvDnwT1yjVB5RhwN++MfE +3qb3K2CD8yrB380Ul00en5yUtCrl4gwB1NgI5PYj5LEKX1pu+LZOQGzzaYCQlkONZIvKjyfpZtkd M+gQ8l4Jj5UCBTgVCZ68ZhpqtxXzz76GluGu4Que0S0/UmGkrKYwxiIxeEO+bjqgEeKdsPBRkD4Wh DvmOdEeEA==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6w-0005v7-VN; Thu, 21 Feb 2019 00:21:10 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 10/32] uverbs: Convert idr to XArray Date: Wed, 20 Feb 2019 16:20:45 -0800 Message-Id: <20190221002107.22625-11-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP The word 'idr' is scattered throughout the API, so I haven't changed it, but the 'idr' variable is now an XArray. 
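The lookup path in this conversion relies on xa_load() being safe under rcu_read_lock(), combined with kref_get_unless_zero() to reject objects that are already being torn down. Below is a minimal, self-contained sketch of that pattern with hypothetical names (example_uobj, example_objs); the real code manages struct ib_uobject inside struct ib_uverbs_file and additionally handles rdma cgroup charging and the commit/abort paths:

#include <linux/kref.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/xarray.h>

/* Hypothetical object; the real code manages struct ib_uobject. */
struct example_uobj {
	struct kref ref;
	struct rcu_head rcu;
	u32 id;
};

static DEFINE_XARRAY_ALLOC(example_objs);

/* Store the object itself at allocation time; no NULL placeholder needed. */
static int example_add(struct example_uobj *obj)
{
	return xa_alloc(&example_objs, &obj->id, obj, xa_limit_32b, GFP_KERNEL);
}

/*
 * Lookup under RCU: the entry may already be on its way out, so only
 * return it if we can still take a reference.
 */
static struct example_uobj *example_lookup(u32 id)
{
	struct example_uobj *obj;

	rcu_read_lock();
	obj = xa_load(&example_objs, id);
	if (obj && !kref_get_unless_zero(&obj->ref))
		obj = NULL;
	rcu_read_unlock();
	return obj;
}

static void example_release(struct kref *ref)
{
	struct example_uobj *obj = container_of(ref, struct example_uobj, ref);

	/* Freeing via RCU is what makes the lockless xa_load() above safe. */
	kfree_rcu(obj, rcu);
}

static void example_remove(struct example_uobj *obj)
{
	xa_erase(&example_objs, obj->id);
	kref_put(&obj->ref, example_release);
}

/* Teardown: drop the reference held for each remaining entry. */
static void example_drain(void)
{
	struct example_uobj *obj;
	unsigned long index;

	xa_for_each(&example_objs, index, obj)
		kref_put(&obj->ref, example_release);
	xa_destroy(&example_objs);
}

In this sketch the XArray holds a reference from the moment of allocation; the patch below follows the same idea, with its commit path re-storing the pointer and, as its in-diff comment notes, transferring the caller's kref to the XArray.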
Signed-off-by: Matthew Wilcox --- drivers/infiniband/core/rdma_core.c | 78 +++++++---------------------- drivers/infiniband/core/uverbs.h | 4 +- 2 files changed, 18 insertions(+), 64 deletions(-) diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c index 6c4747e61d2b..1084685ddce7 100644 --- a/drivers/infiniband/core/rdma_core.c +++ b/drivers/infiniband/core/rdma_core.c @@ -296,25 +296,8 @@ static struct ib_uobject *alloc_uobj(struct ib_uverbs_file *ufile, static int idr_add_uobj(struct ib_uobject *uobj) { - int ret; - - idr_preload(GFP_KERNEL); - spin_lock(&uobj->ufile->idr_lock); - - /* - * We start with allocating an idr pointing to NULL. This represents an - * object which isn't initialized yet. We'll replace it later on with - * the real object once we commit. - */ - ret = idr_alloc(&uobj->ufile->idr, NULL, 0, - min_t(unsigned long, U32_MAX - 1, INT_MAX), GFP_NOWAIT); - if (ret >= 0) - uobj->id = ret; - - spin_unlock(&uobj->ufile->idr_lock); - idr_preload_end(); - - return ret < 0 ? ret : 0; + return xa_alloc(&uobj->ufile->idr, &uobj->id, uobj, xa_limit_32b, + GFP_KERNEL); } /* Returns the ib_uobject or an error. The caller should check for IS_ERR. */ @@ -324,29 +307,14 @@ lookup_get_idr_uobject(const struct uverbs_api_object *obj, enum rdma_lookup_mode mode) { struct ib_uobject *uobj; - unsigned long idrno = id; - if (id < 0 || id > ULONG_MAX) + if ((u64)id > ULONG_MAX) return ERR_PTR(-EINVAL); rcu_read_lock(); - /* object won't be released as we're protected in rcu */ - uobj = idr_find(&ufile->idr, idrno); - if (!uobj) { - uobj = ERR_PTR(-ENOENT); - goto free; - } - - /* - * The idr_find is guaranteed to return a pointer to something that - * isn't freed yet, or NULL, as the free after idr_remove goes through - * kfree_rcu(). However the object may still have been released and - * kfree() could be called at any time. 
- */ - if (!kref_get_unless_zero(&uobj->ref)) + uobj = xa_load(&ufile->idr, id); + if (!uobj || !kref_get_unless_zero(&uobj->ref)) uobj = ERR_PTR(-ENOENT); - -free: rcu_read_unlock(); return uobj; } @@ -398,7 +366,7 @@ struct ib_uobject *rdma_lookup_get_uobject(const struct uverbs_api_object *obj, struct ib_uobject *uobj; int ret; - if (IS_ERR(obj) && PTR_ERR(obj) == -ENOMSG) { + if (obj == ERR_PTR(-ENOMSG)) { /* must be UVERBS_IDR_ANY_OBJECT, see uapi_get_object() */ uobj = lookup_get_idr_uobject(NULL, ufile, id, mode); if (IS_ERR(uobj)) @@ -457,14 +425,12 @@ alloc_begin_idr_uobject(const struct uverbs_api_object *obj, ret = ib_rdmacg_try_charge(&uobj->cg_obj, uobj->context->device, RDMACG_RESOURCE_HCA_OBJECT); if (ret) - goto idr_remove; + goto remove; return uobj; -idr_remove: - spin_lock(&ufile->idr_lock); - idr_remove(&ufile->idr, uobj->id); - spin_unlock(&ufile->idr_lock); +remove: + xa_erase(&ufile->idr, uobj->id); uobj_put: uverbs_uobject_put(uobj); return ERR_PTR(ret); @@ -522,9 +488,7 @@ static void alloc_abort_idr_uobject(struct ib_uobject *uobj) ib_rdmacg_uncharge(&uobj->cg_obj, uobj->context->device, RDMACG_RESOURCE_HCA_OBJECT); - spin_lock(&uobj->ufile->idr_lock); - idr_remove(&uobj->ufile->idr, uobj->id); - spin_unlock(&uobj->ufile->idr_lock); + xa_erase(&uobj->ufile->idr, uobj->id); } static int __must_check destroy_hw_idr_uobject(struct ib_uobject *uobj, @@ -554,9 +518,7 @@ static int __must_check destroy_hw_idr_uobject(struct ib_uobject *uobj, static void remove_handle_idr_uobject(struct ib_uobject *uobj) { - spin_lock(&uobj->ufile->idr_lock); - idr_remove(&uobj->ufile->idr, uobj->id); - spin_unlock(&uobj->ufile->idr_lock); + xa_erase(&uobj->ufile->idr, uobj->id); /* Matches the kref in alloc_commit_idr_uobject */ uverbs_uobject_put(uobj); } @@ -587,16 +549,11 @@ static int alloc_commit_idr_uobject(struct ib_uobject *uobj) { struct ib_uverbs_file *ufile = uobj->ufile; - spin_lock(&ufile->idr_lock); /* - * We already allocated this IDR with a NULL object, so - * this shouldn't fail. - * - * NOTE: Once we set the IDR we loose ownership of our kref on uobj. + * NOTE: Storing the uobj transfers our kref on uobj to the XArray. 
* It will be put by remove_commit_idr_uobject() */ - WARN_ON(idr_replace(&ufile->idr, uobj, uobj->id)); - spin_unlock(&ufile->idr_lock); + xa_store(&ufile->idr, uobj->id, uobj, 0); return 0; } @@ -728,14 +685,13 @@ void rdma_lookup_put_uobject(struct ib_uobject *uobj, void setup_ufile_idr_uobject(struct ib_uverbs_file *ufile) { - spin_lock_init(&ufile->idr_lock); - idr_init(&ufile->idr); + xa_init_flags(&ufile->idr, XA_FLAGS_ALLOC); } void release_ufile_idr_uobject(struct ib_uverbs_file *ufile) { struct ib_uobject *entry; - int id; + unsigned long id; /* * At this point uverbs_cleanup_ufile() is guaranteed to have run, and @@ -745,12 +701,12 @@ void release_ufile_idr_uobject(struct ib_uverbs_file *ufile) * * This is an optimized equivalent to remove_handle_idr_uobject */ - idr_for_each_entry(&ufile->idr, entry, id) { + xa_for_each(&ufile->idr, id, entry) { WARN_ON(entry->object); uverbs_uobject_put(entry); } - idr_destroy(&ufile->idr); + xa_destroy(&ufile->idr); } const struct uverbs_obj_type_class uverbs_idr_class = { diff --git a/drivers/infiniband/core/uverbs.h b/drivers/infiniband/core/uverbs.h index ea0bc6885517..580e220ef1b9 100644 --- a/drivers/infiniband/core/uverbs.h +++ b/drivers/infiniband/core/uverbs.h @@ -161,9 +161,7 @@ struct ib_uverbs_file { struct mutex umap_lock; struct list_head umaps; - struct idr idr; - /* spinlock protects write access to idr */ - spinlock_t idr_lock; + struct xarray idr; }; struct ib_uverbs_event { From patchwork Thu Feb 21 00:20:46 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822927 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D28C514E1 for ; Thu, 21 Feb 2019 00:21:21 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BFAB32FC4D for ; Thu, 21 Feb 2019 00:21:21 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B43F22FC81; Thu, 21 Feb 2019 00:21:21 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 4D5312FC7C for ; Thu, 21 Feb 2019 00:21:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726837AbfBUAVM (ORCPT ); Wed, 20 Feb 2019 19:21:12 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52432 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726713AbfBUAVL (ORCPT ); Wed, 20 Feb 2019 19:21:11 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=Sb1Lt1I3YLp3ZYD0XNtnrhiqM0Y/knhUq4Pcqgx6KHQ=; b=GThhQp94+SGmEOpwBnMxx9pMN 
dgcVy+K5wdx3/QBgXa5vrTegMMvoFoVbi0W/dMawhWtYUnVlFRKNf0uVKN5JX/3u1OxrdNHGpaQdj +0Tgndy/JazEywstafzgHSwbooedeA4Qv2wkUx/0sZPsSO7RhyILmFUl6esgZvQjuDpSBq/NyJ0kc nw7Oge5P5YaIEyVLp3hSgnQTEukVUA7I6KK6+i2YbIPP4z2uQeLmmXJdbSuMcSOIi8wRajT9Wsp9d WNasPraKycOOF6UyH3mc9SvCKgTp8+qeHS9uWlitW4yFQ3HPv6ePl0g0Frb0ENG3ZlGxmnKh1VJSm Sw8hzeIVA==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6x-0005vE-4A; Thu, 21 Feb 2019 00:21:11 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 11/32] ib core: Convert query_idr to XArray Date: Wed, 20 Feb 2019 16:20:46 -0800 Message-Id: <20190221002107.22625-12-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox --- drivers/infiniband/core/sa_query.c | 43 ++++++++++++------------------ 1 file changed, 17 insertions(+), 26 deletions(-) diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c index 97e6d7b69abf..4c86e54436f8 100644 --- a/drivers/infiniband/core/sa_query.c +++ b/drivers/infiniband/core/sa_query.c @@ -40,7 +40,7 @@ #include #include #include -#include +#include #include #include #include @@ -183,8 +183,7 @@ static struct ib_client sa_client = { .remove = ib_sa_remove_one }; -static DEFINE_SPINLOCK(idr_lock); -static DEFINE_IDR(query_idr); +static DEFINE_XARRAY_FLAGS(queries, XA_FLAGS_ALLOC | XA_FLAGS_LOCK_IRQ); static DEFINE_SPINLOCK(tid_lock); static u32 tid; @@ -1180,14 +1179,14 @@ void ib_sa_cancel_query(int id, struct ib_sa_query *query) struct ib_mad_agent *agent; struct ib_mad_send_buf *mad_buf; - spin_lock_irqsave(&idr_lock, flags); - if (idr_find(&query_idr, id) != query) { - spin_unlock_irqrestore(&idr_lock, flags); + xa_lock_irqsave(&queries, flags); + if (xa_load(&queries, id) != query) { + xa_unlock_irqrestore(&queries, flags); return; } agent = query->port->agent; mad_buf = query->mad_buf; - spin_unlock_irqrestore(&idr_lock, flags); + xa_unlock_irqrestore(&queries, flags); /* * If the query is still on the netlink request list, schedule @@ -1363,21 +1362,14 @@ static void init_mad(struct ib_sa_query *query, struct ib_mad_agent *agent) static int send_mad(struct ib_sa_query *query, unsigned long timeout_ms, gfp_t gfp_mask) { - bool preload = gfpflags_allow_blocking(gfp_mask); unsigned long flags; int ret, id; - if (preload) - idr_preload(gfp_mask); - spin_lock_irqsave(&idr_lock, flags); - - id = idr_alloc(&query_idr, query, 0, 0, GFP_NOWAIT); - - spin_unlock_irqrestore(&idr_lock, flags); - if (preload) - idr_preload_end(); - if (id < 0) - return id; + xa_lock_irqsave(&queries, flags); + ret = __xa_alloc(&queries, &id, query, xa_limit_32b, gfp_mask); + xa_unlock_irqrestore(&queries, flags); + if (ret < 0) + return ret; query->mad_buf->timeout_ms = timeout_ms; query->mad_buf->context[0] = query; @@ -1394,9 +1386,9 @@ static int send_mad(struct ib_sa_query *query, unsigned long timeout_ms, ret = ib_post_send_mad(query->mad_buf, NULL); if (ret) { - spin_lock_irqsave(&idr_lock, flags); - idr_remove(&query_idr, id); - spin_unlock_irqrestore(&idr_lock, flags); + xa_lock_irqsave(&queries, flags); + __xa_erase(&queries, id); + xa_unlock_irqrestore(&queries, flags); } /* @@ -2188,9 +2180,9 @@ static void send_handler(struct 
ib_mad_agent *agent, break; } - spin_lock_irqsave(&idr_lock, flags); - idr_remove(&query_idr, query->id); - spin_unlock_irqrestore(&idr_lock, flags); + xa_lock_irqsave(&queries, flags); + __xa_erase(&queries, query->id); + xa_unlock_irqrestore(&queries, flags); free_mad(query); if (query->client) @@ -2477,5 +2469,4 @@ void ib_sa_cleanup(void) destroy_workqueue(ib_nl_wq); mcast_cleanup(); ib_unregister_client(&sa_client); - idr_destroy(&query_idr); } From patchwork Thu Feb 21 00:20:47 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822939 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BB24B184E for ; Thu, 21 Feb 2019 00:21:24 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id AB8842FC4D for ; Thu, 21 Feb 2019 00:21:24 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A02DD2FC7C; Thu, 21 Feb 2019 00:21:24 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3EB952FC86 for ; Thu, 21 Feb 2019 00:21:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726304AbfBUAVX (ORCPT ); Wed, 20 Feb 2019 19:21:23 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52436 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726751AbfBUAVL (ORCPT ); Wed, 20 Feb 2019 19:21:11 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=+zLbvd8rLEgg5FXstvLvKQqMosv7NhDS82b5XXAN54o=; b=HgnZ1L678o0Rw+qneTyY19La7 quzrDbLRd9+eRIWmMWl0JAjbWTg12oXgJUD0aSjvzg9GyDXUtcAQbjwQShXCS03rrMWzT1UdmIhhT UHuEYisNg+UoFXpdOV6oAC3hOkNJCWt4FlgIXMyjupCbbLpMtznEzwDIVRvBsw11bbxjGs5FwtkrR h1fuK/mV+0X4uAinDMb0qBxqbmcaKL4WpWllhndUWHw6ajq8eiO5+6fFAzsE481uW/0piN7JYFfXa QfQiamBs8cPlFbBi1HcX5StNnnOWOX6Km2UbNDKES2dFM/jCP1Hb6TgPJssrT7njpqAe6Y5eT+0hG F09AIrgaQ==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6x-0005vL-9F; Thu, 21 Feb 2019 00:21:11 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 12/32] cxgb3: Convert cqidr to XArray Date: Wed, 20 Feb 2019 16:20:47 -0800 Message-Id: <20190221002107.22625-13-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP It would make sense to convert this to an allocating 
XArray and remove the kfifo that is currently used to allocate the CQID, but that work is better done by someone who has the hardware to test with. Signed-off-by: Matthew Wilcox Acked-by: Steve Wise --- drivers/infiniband/hw/cxgb3/iwch.c | 3 +-- drivers/infiniband/hw/cxgb3/iwch.h | 4 ++-- drivers/infiniband/hw/cxgb3/iwch_provider.c | 4 ++-- 3 files changed, 5 insertions(+), 6 deletions(-) diff --git a/drivers/infiniband/hw/cxgb3/iwch.c b/drivers/infiniband/hw/cxgb3/iwch.c index 591de319c178..ecd23f74ab50 100644 --- a/drivers/infiniband/hw/cxgb3/iwch.c +++ b/drivers/infiniband/hw/cxgb3/iwch.c @@ -105,7 +105,7 @@ static void iwch_db_drop_task(struct work_struct *work) static void rnic_init(struct iwch_dev *rnicp) { pr_debug("%s iwch_dev %p\n", __func__, rnicp); - idr_init(&rnicp->cqidr); + xa_init_flags(&rnicp->cqs, XA_FLAGS_LOCK_IRQ); idr_init(&rnicp->qpidr); idr_init(&rnicp->mmidr); spin_lock_init(&rnicp->lock); @@ -190,7 +190,6 @@ static void close_rnic_dev(struct t3cdev *tdev) list_del(&dev->entry); iwch_unregister_device(dev); cxio_rdev_close(&dev->rdev); - idr_destroy(&dev->cqidr); idr_destroy(&dev->qpidr); idr_destroy(&dev->mmidr); ib_dealloc_device(&dev->ibdev); diff --git a/drivers/infiniband/hw/cxgb3/iwch.h b/drivers/infiniband/hw/cxgb3/iwch.h index c69bc4f52049..d45de53392e7 100644 --- a/drivers/infiniband/hw/cxgb3/iwch.h +++ b/drivers/infiniband/hw/cxgb3/iwch.h @@ -106,7 +106,7 @@ struct iwch_dev { struct cxio_rdev rdev; u32 device_cap_flags; struct iwch_rnic_attributes attr; - struct idr cqidr; + struct xarray cqs; struct idr qpidr; struct idr mmidr; spinlock_t lock; @@ -136,7 +136,7 @@ static inline int t3a_device(const struct iwch_dev *rhp) static inline struct iwch_cq *get_chp(struct iwch_dev *rhp, u32 cqid) { - return idr_find(&rhp->cqidr, cqid); + return xa_load(&rhp->cqs, cqid); } static inline struct iwch_qp *get_qhp(struct iwch_dev *rhp, u32 qpid) diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c index b34b1a1bd94b..7f3fd4f7ca44 100644 --- a/drivers/infiniband/hw/cxgb3/iwch_provider.c +++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c @@ -98,7 +98,7 @@ static int iwch_destroy_cq(struct ib_cq *ib_cq) pr_debug("%s ib_cq %p\n", __func__, ib_cq); chp = to_iwch_cq(ib_cq); - remove_handle(chp->rhp, &chp->rhp->cqidr, chp->cq.cqid); + xa_erase_irq(&chp->rhp->cqs, chp->cq.cqid); atomic_dec(&chp->refcnt); wait_event(chp->wait, !atomic_read(&chp->refcnt)); @@ -167,7 +167,7 @@ static struct ib_cq *iwch_create_cq(struct ib_device *ibdev, spin_lock_init(&chp->comp_handler_lock); atomic_set(&chp->refcnt, 1); init_waitqueue_head(&chp->wait); - if (insert_handle(rhp, &rhp->cqidr, chp, chp->cq.cqid)) { + if (xa_store_irq(&rhp->cqs, chp->cq.cqid, chp, GFP_KERNEL)) { cxio_destroy_cq(&chp->rhp->rdev, &chp->cq); kfree(chp); return ERR_PTR(-ENOMEM); From patchwork Thu Feb 21 00:20:48 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822897 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0035813A4 for ; Thu, 21 Feb 2019 00:21:15 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E21972FC4D for ; Thu, 21 Feb 2019 00:21:14 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from 
userid 486) id D6A342FC81; Thu, 21 Feb 2019 00:21:14 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5031C2FC4D for ; Thu, 21 Feb 2019 00:21:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726878AbfBUAVN (ORCPT ); Wed, 20 Feb 2019 19:21:13 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52440 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726756AbfBUAVM (ORCPT ); Wed, 20 Feb 2019 19:21:12 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=oMLAoBZVpUDkvBnKQQxz7ZC4TfUWa/gXChfXVPJqhPQ=; b=I4B4Qol8zPcfxSGf69SEoy5oz ckjY1IkKWm5y5C3RH7+sUNawNCNJ6kmsn2zWhS6eLofyEKaGFZl/tDnTO3jgOMLelyJNouO7A1qtT MjSRVpWifbuCaddNaqdQXguTbUwAcYTryZqKL94rWJuE1r6WEyPMyy17E7BqcQC0b8jlyNosC7CL0 bWP5AjhqjvsU8OfPKcJSyXHxmG9dYLn+Qr5/+YwaKpvFsPdmwAxd3ff2ZS1uM1WmZB7Qt4jHYYhEi Mlr/Y1GMTK6w7ui2cYxjSR7dJxHcc3w0vO0e2GkXsucGpScdh5QgvpZMRcxc3iyhhiyUCvZxFdMOX Vt2sYHlNA==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6x-0005vQ-E3; Thu, 21 Feb 2019 00:21:11 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 13/32] cxgb3: Convert qpidr to XArray Date: Wed, 20 Feb 2019 16:20:48 -0800 Message-Id: <20190221002107.22625-14-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox Acked-by: Steve Wise --- drivers/infiniband/hw/cxgb3/iwch.c | 46 +++++++++------------ drivers/infiniband/hw/cxgb3/iwch.h | 4 +- drivers/infiniband/hw/cxgb3/iwch_ev.c | 18 ++++---- drivers/infiniband/hw/cxgb3/iwch_provider.c | 4 +- 4 files changed, 32 insertions(+), 40 deletions(-) diff --git a/drivers/infiniband/hw/cxgb3/iwch.c b/drivers/infiniband/hw/cxgb3/iwch.c index ecd23f74ab50..1f479ad4fde6 100644 --- a/drivers/infiniband/hw/cxgb3/iwch.c +++ b/drivers/infiniband/hw/cxgb3/iwch.c @@ -62,37 +62,30 @@ struct cxgb3_client t3c_client = { static LIST_HEAD(dev_list); static DEFINE_MUTEX(dev_mutex); -static int disable_qp_db(int id, void *p, void *data) -{ - struct iwch_qp *qhp = p; - - cxio_disable_wq_db(&qhp->wq); - return 0; -} - -static int enable_qp_db(int id, void *p, void *data) -{ - struct iwch_qp *qhp = p; - - if (data) - ring_doorbell(qhp->rhp->rdev.ctrl_qp.doorbell, qhp->wq.qpid); - cxio_enable_wq_db(&qhp->wq); - return 0; -} - static void disable_dbs(struct iwch_dev *rnicp) { - spin_lock_irq(&rnicp->lock); - idr_for_each(&rnicp->qpidr, disable_qp_db, NULL); - spin_unlock_irq(&rnicp->lock); + unsigned long index; + struct iwch_qp *qhp; + + 
xa_lock_irq(&rnicp->qps); + xa_for_each(&rnicp->qps, index, qhp) + cxio_disable_wq_db(&qhp->wq); + xa_unlock_irq(&rnicp->qps); } static void enable_dbs(struct iwch_dev *rnicp, int ring_db) { - spin_lock_irq(&rnicp->lock); - idr_for_each(&rnicp->qpidr, enable_qp_db, - (void *)(unsigned long)ring_db); - spin_unlock_irq(&rnicp->lock); + unsigned long index; + struct iwch_qp *qhp; + + xa_lock_irq(&rnicp->qps); + xa_for_each(&rnicp->qps, index, qhp) { + if (ring_db) + ring_doorbell(qhp->rhp->rdev.ctrl_qp.doorbell, + qhp->wq.qpid); + cxio_enable_wq_db(&qhp->wq); + } + xa_unlock_irq(&rnicp->qps); } static void iwch_db_drop_task(struct work_struct *work) @@ -106,7 +99,7 @@ static void rnic_init(struct iwch_dev *rnicp) { pr_debug("%s iwch_dev %p\n", __func__, rnicp); xa_init_flags(&rnicp->cqs, XA_FLAGS_LOCK_IRQ); - idr_init(&rnicp->qpidr); + xa_init_flags(&rnicp->qps, XA_FLAGS_LOCK_IRQ); idr_init(&rnicp->mmidr); spin_lock_init(&rnicp->lock); INIT_DELAYED_WORK(&rnicp->db_drop_task, iwch_db_drop_task); @@ -190,7 +183,6 @@ static void close_rnic_dev(struct t3cdev *tdev) list_del(&dev->entry); iwch_unregister_device(dev); cxio_rdev_close(&dev->rdev); - idr_destroy(&dev->qpidr); idr_destroy(&dev->mmidr); ib_dealloc_device(&dev->ibdev); break; diff --git a/drivers/infiniband/hw/cxgb3/iwch.h b/drivers/infiniband/hw/cxgb3/iwch.h index d45de53392e7..70e086946d30 100644 --- a/drivers/infiniband/hw/cxgb3/iwch.h +++ b/drivers/infiniband/hw/cxgb3/iwch.h @@ -107,7 +107,7 @@ struct iwch_dev { u32 device_cap_flags; struct iwch_rnic_attributes attr; struct xarray cqs; - struct idr qpidr; + struct xarray qps; struct idr mmidr; spinlock_t lock; struct list_head entry; @@ -141,7 +141,7 @@ static inline struct iwch_cq *get_chp(struct iwch_dev *rhp, u32 cqid) static inline struct iwch_qp *get_qhp(struct iwch_dev *rhp, u32 qpid) { - return idr_find(&rhp->qpidr, qpid); + return xa_load(&rhp->qps, qpid); } static inline struct iwch_mr *get_mhp(struct iwch_dev *rhp, u32 mmid) diff --git a/drivers/infiniband/hw/cxgb3/iwch_ev.c b/drivers/infiniband/hw/cxgb3/iwch_ev.c index 4a0c82a8fb60..9d356c1301c7 100644 --- a/drivers/infiniband/hw/cxgb3/iwch_ev.c +++ b/drivers/infiniband/hw/cxgb3/iwch_ev.c @@ -48,14 +48,14 @@ static void post_qp_event(struct iwch_dev *rnicp, struct iwch_cq *chp, struct iwch_qp *qhp; unsigned long flag; - spin_lock(&rnicp->lock); - qhp = get_qhp(rnicp, CQE_QPID(rsp_msg->cqe)); + xa_lock(&rnicp->qps); + qhp = xa_load(&rnicp->qps, CQE_QPID(rsp_msg->cqe)); if (!qhp) { pr_err("%s unaffiliated error 0x%x qpid 0x%x\n", __func__, CQE_STATUS(rsp_msg->cqe), CQE_QPID(rsp_msg->cqe)); - spin_unlock(&rnicp->lock); + xa_unlock(&rnicp->qps); return; } @@ -65,7 +65,7 @@ static void post_qp_event(struct iwch_dev *rnicp, struct iwch_cq *chp, __func__, qhp->attr.state, qhp->wq.qpid, CQE_STATUS(rsp_msg->cqe)); - spin_unlock(&rnicp->lock); + xa_unlock(&rnicp->qps); return; } @@ -76,7 +76,7 @@ static void post_qp_event(struct iwch_dev *rnicp, struct iwch_cq *chp, CQE_WRID_HI(rsp_msg->cqe), CQE_WRID_LOW(rsp_msg->cqe)); atomic_inc(&qhp->refcnt); - spin_unlock(&rnicp->lock); + xa_unlock(&rnicp->qps); if (qhp->attr.state == IWCH_QP_STATE_RTS) { attrs.next_state = IWCH_QP_STATE_TERMINATE; @@ -114,21 +114,21 @@ void iwch_ev_dispatch(struct cxio_rdev *rdev_p, struct sk_buff *skb) unsigned long flag; rnicp = (struct iwch_dev *) rdev_p->ulp; - spin_lock(&rnicp->lock); + xa_lock(&rnicp->qps); chp = get_chp(rnicp, cqid); - qhp = get_qhp(rnicp, CQE_QPID(rsp_msg->cqe)); + qhp = xa_load(&rnicp->qps, CQE_QPID(rsp_msg->cqe)); if (!chp || !qhp) 
{ pr_err("BAD AE cqid 0x%x qpid 0x%x opcode %d status 0x%x type %d wrid.hi 0x%x wrid.lo 0x%x\n", cqid, CQE_QPID(rsp_msg->cqe), CQE_OPCODE(rsp_msg->cqe), CQE_STATUS(rsp_msg->cqe), CQE_TYPE(rsp_msg->cqe), CQE_WRID_HI(rsp_msg->cqe), CQE_WRID_LOW(rsp_msg->cqe)); - spin_unlock(&rnicp->lock); + xa_unlock(&rnicp->qps); goto out; } iwch_qp_add_ref(&qhp->ibqp); atomic_inc(&chp->refcnt); - spin_unlock(&rnicp->lock); + xa_unlock(&rnicp->qps); /* * 1) completion of our sending a TERMINATE. diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c index 7f3fd4f7ca44..76120fdf7d1b 100644 --- a/drivers/infiniband/hw/cxgb3/iwch_provider.c +++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c @@ -770,7 +770,7 @@ static int iwch_destroy_qp(struct ib_qp *ib_qp) iwch_modify_qp(rhp, qhp, IWCH_QP_ATTR_NEXT_STATE, &attrs, 0); wait_event(qhp->wait, !qhp->ep); - remove_handle(rhp, &rhp->qpidr, qhp->wq.qpid); + xa_erase_irq(&rhp->qps, qhp->wq.qpid); atomic_dec(&qhp->refcnt); wait_event(qhp->wait, !atomic_read(&qhp->refcnt)); @@ -885,7 +885,7 @@ static struct ib_qp *iwch_create_qp(struct ib_pd *pd, init_waitqueue_head(&qhp->wait); atomic_set(&qhp->refcnt, 1); - if (insert_handle(rhp, &rhp->qpidr, qhp, qhp->wq.qpid)) { + if (xa_store_irq(&rhp->qps, qhp->wq.qpid, qhp, GFP_KERNEL)) { cxio_destroy_qp(&rhp->rdev, &qhp->wq, ucontext ? &ucontext->uctx : &rhp->rdev.uctx); kfree(qhp); From patchwork Thu Feb 21 00:20:49 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822941 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 02D3213A4 for ; Thu, 21 Feb 2019 00:21:25 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E48DF2FC4D for ; Thu, 21 Feb 2019 00:21:24 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id D8EC42FC7C; Thu, 21 Feb 2019 00:21:24 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 61FFA2FC81 for ; Thu, 21 Feb 2019 00:21:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726776AbfBUAVX (ORCPT ); Wed, 20 Feb 2019 19:21:23 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52442 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726304AbfBUAVL (ORCPT ); Wed, 20 Feb 2019 19:21:11 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=r4GjFMIcEa7iL/a/i2LPxSzdWml4VJwL7mQra1nqDXA=; b=p4t15W5viKxd/EisQY+K2gfy/ l08JvuI2WVaxbk0vy1oKrKn84A7M8ex7bLmr2584ZqVfAHzddlsltMiv55tc519rXKZWDJBF5Xywq 
dJ9qIyYA+rO9HL73EhNPBorIhBJkK1Sx2UBBVzzow0CY6GFo43SbMUfJSHWuthbi1yrNcN2GfTELc hvHb6saogOLHXN8uRMASw8UmyLyjdpHLKYxrmuibyc8d2DI4XFAFOSpvvbfznS6okI6mbRups97UP TvQMUsC11tLVkLAL/G7fVbixmCQLrPhgXpKszoSFxbI1Hgmd9CP9lxAGs1w4RLalf8UUUTftiFtSt NrGla2kdw==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6x-0005vX-J5; Thu, 21 Feb 2019 00:21:11 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 14/32] cxgb3: Convert mmidr to XArray Date: Wed, 20 Feb 2019 16:20:49 -0800 Message-Id: <20190221002107.22625-15-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox Acked-by: Steve Wise --- drivers/infiniband/hw/cxgb3/iwch.c | 4 +-- drivers/infiniband/hw/cxgb3/iwch.h | 30 +++------------------ drivers/infiniband/hw/cxgb3/iwch_mem.c | 2 +- drivers/infiniband/hw/cxgb3/iwch_provider.c | 8 +++--- 4 files changed, 9 insertions(+), 35 deletions(-) diff --git a/drivers/infiniband/hw/cxgb3/iwch.c b/drivers/infiniband/hw/cxgb3/iwch.c index 1f479ad4fde6..76dfcdc42df3 100644 --- a/drivers/infiniband/hw/cxgb3/iwch.c +++ b/drivers/infiniband/hw/cxgb3/iwch.c @@ -100,8 +100,7 @@ static void rnic_init(struct iwch_dev *rnicp) pr_debug("%s iwch_dev %p\n", __func__, rnicp); xa_init_flags(&rnicp->cqs, XA_FLAGS_LOCK_IRQ); xa_init_flags(&rnicp->qps, XA_FLAGS_LOCK_IRQ); - idr_init(&rnicp->mmidr); - spin_lock_init(&rnicp->lock); + xa_init_flags(&rnicp->mrs, XA_FLAGS_LOCK_IRQ); INIT_DELAYED_WORK(&rnicp->db_drop_task, iwch_db_drop_task); rnicp->attr.max_qps = T3_MAX_NUM_QP - 32; @@ -183,7 +182,6 @@ static void close_rnic_dev(struct t3cdev *tdev) list_del(&dev->entry); iwch_unregister_device(dev); cxio_rdev_close(&dev->rdev); - idr_destroy(&dev->mmidr); ib_dealloc_device(&dev->ibdev); break; } diff --git a/drivers/infiniband/hw/cxgb3/iwch.h b/drivers/infiniband/hw/cxgb3/iwch.h index 70e086946d30..310a937bffcf 100644 --- a/drivers/infiniband/hw/cxgb3/iwch.h +++ b/drivers/infiniband/hw/cxgb3/iwch.h @@ -35,7 +35,7 @@ #include #include #include -#include +#include #include #include @@ -108,8 +108,7 @@ struct iwch_dev { struct iwch_rnic_attributes attr; struct xarray cqs; struct xarray qps; - struct idr mmidr; - spinlock_t lock; + struct xarray mrs; struct list_head entry; struct delayed_work db_drop_task; }; @@ -146,30 +145,7 @@ static inline struct iwch_qp *get_qhp(struct iwch_dev *rhp, u32 qpid) static inline struct iwch_mr *get_mhp(struct iwch_dev *rhp, u32 mmid) { - return idr_find(&rhp->mmidr, mmid); -} - -static inline int insert_handle(struct iwch_dev *rhp, struct idr *idr, - void *handle, u32 id) -{ - int ret; - - idr_preload(GFP_KERNEL); - spin_lock_irq(&rhp->lock); - - ret = idr_alloc(idr, handle, id, id + 1, GFP_NOWAIT); - - spin_unlock_irq(&rhp->lock); - idr_preload_end(); - - return ret < 0 ? 
ret : 0; -} - -static inline void remove_handle(struct iwch_dev *rhp, struct idr *idr, u32 id) -{ - spin_lock_irq(&rhp->lock); - idr_remove(idr, id); - spin_unlock_irq(&rhp->lock); + return xa_load(&rhp->mrs, mmid); } extern struct cxgb3_client t3c_client; diff --git a/drivers/infiniband/hw/cxgb3/iwch_mem.c b/drivers/infiniband/hw/cxgb3/iwch_mem.c index 12886b1b4b10..ce0f2741821d 100644 --- a/drivers/infiniband/hw/cxgb3/iwch_mem.c +++ b/drivers/infiniband/hw/cxgb3/iwch_mem.c @@ -49,7 +49,7 @@ static int iwch_finish_mem_reg(struct iwch_mr *mhp, u32 stag) mmid = stag >> 8; mhp->ibmr.rkey = mhp->ibmr.lkey = stag; pr_debug("%s mmid 0x%x mhp %p\n", __func__, mmid, mhp); - return insert_handle(mhp->rhp, &mhp->rhp->mmidr, mhp, mmid); + return xa_insert_irq(&mhp->rhp->mrs, mmid, mhp, GFP_KERNEL); } int iwch_register_mem(struct iwch_dev *rhp, struct iwch_pd *php, diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c index 76120fdf7d1b..04dc8ede36a9 100644 --- a/drivers/infiniband/hw/cxgb3/iwch_provider.c +++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c @@ -430,7 +430,7 @@ static int iwch_dereg_mr(struct ib_mr *ib_mr) cxio_dereg_mem(&rhp->rdev, mhp->attr.stag, mhp->attr.pbl_size, mhp->attr.pbl_addr); iwch_free_pbl(mhp); - remove_handle(rhp, &rhp->mmidr, mmid); + xa_erase_irq(&rhp->mrs, mmid); if (mhp->kva) kfree((void *) (unsigned long) mhp->kva); if (mhp->umem) @@ -650,7 +650,7 @@ static struct ib_mw *iwch_alloc_mw(struct ib_pd *pd, enum ib_mw_type type, mhp->attr.stag = stag; mmid = (stag) >> 8; mhp->ibmw.rkey = stag; - if (insert_handle(rhp, &rhp->mmidr, mhp, mmid)) { + if (xa_insert_irq(&rhp->mrs, mmid, mhp, GFP_KERNEL)) { cxio_deallocate_window(&rhp->rdev, mhp->attr.stag); kfree(mhp); return ERR_PTR(-ENOMEM); @@ -669,7 +669,7 @@ static int iwch_dealloc_mw(struct ib_mw *mw) rhp = mhp->rhp; mmid = (mw->rkey) >> 8; cxio_deallocate_window(&rhp->rdev, mhp->attr.stag); - remove_handle(rhp, &rhp->mmidr, mmid); + xa_erase_irq(&rhp->mrs, mmid); pr_debug("%s ib_mw %p mmid 0x%x ptr %p\n", __func__, mw, mmid, mhp); kfree(mhp); return 0; @@ -715,7 +715,7 @@ static struct ib_mr *iwch_alloc_mr(struct ib_pd *pd, mhp->attr.state = 1; mmid = (stag) >> 8; mhp->ibmr.rkey = mhp->ibmr.lkey = stag; - ret = insert_handle(rhp, &rhp->mmidr, mhp, mmid); + ret = xa_insert_irq(&rhp->mrs, mmid, mhp, GFP_KERNEL); if (ret) goto err3; From patchwork Thu Feb 21 00:20:50 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822937 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 644A91805 for ; Thu, 21 Feb 2019 00:21:24 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 528BE2FC4D for ; Thu, 21 Feb 2019 00:21:24 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 46E0B2FC88; Thu, 21 Feb 2019 00:21:24 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B6A0E2FC7C for ; Thu, 21 
Feb 2019 00:21:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726803AbfBUAVW (ORCPT ); Wed, 20 Feb 2019 19:21:22 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52446 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726773AbfBUAVM (ORCPT ); Wed, 20 Feb 2019 19:21:12 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=n5nuzgj35revEIwdG7dW3qP01l2PcOUUh4zltHWTBJ0=; b=j2FbULCF6wb16hnCIK7xoWMko 1m/7eGaGpXqe4qaZ8Pd3CqqLjp/pyQrsQ3pknvmtABl1wB3MSO9afQWoMGJOitku9FShPl530APEm T++g0aPowRM1+zRvyEEl4bdke862lNTZ+cK/WiENOPFZFG0bBIIfFD13k26/nbfkDNNFONgK+UUiH AK5Hnwlbmi96ay3jnQwTiY4CWQbTsRJoVBtYKz6osVzaWwdb3TEPyaj8IriynXp3XuxeoGSkEDlEy epQGiRe809V/a8WvtlAGkVWE8FrWwWk3fv/ACk6Tliq2M3bKXIMo0ImW79/Y7nP+bbRVlGgTr1/+f mJ7izNmSg==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6x-0005vg-P8; Thu, 21 Feb 2019 00:21:11 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 15/32] cxgb4: Convert cqidr to XArray Date: Wed, 20 Feb 2019 16:20:50 -0800 Message-Id: <20190221002107.22625-16-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox Acked-by: Steve Wise --- drivers/infiniband/hw/cxgb4/cq.c | 6 +++--- drivers/infiniband/hw/cxgb4/device.c | 4 +--- drivers/infiniband/hw/cxgb4/ev.c | 8 ++++---- drivers/infiniband/hw/cxgb4/iw_cxgb4.h | 4 ++-- 4 files changed, 10 insertions(+), 12 deletions(-) diff --git a/drivers/infiniband/hw/cxgb4/cq.c b/drivers/infiniband/hw/cxgb4/cq.c index 1fd8798d91a7..1fa5f6445be3 100644 --- a/drivers/infiniband/hw/cxgb4/cq.c +++ b/drivers/infiniband/hw/cxgb4/cq.c @@ -976,7 +976,7 @@ int c4iw_destroy_cq(struct ib_cq *ib_cq) pr_debug("ib_cq %p\n", ib_cq); chp = to_c4iw_cq(ib_cq); - remove_handle(chp->rhp, &chp->rhp->cqidr, chp->cq.cqid); + xa_erase_irq(&chp->rhp->cqs, chp->cq.cqid); atomic_dec(&chp->refcnt); wait_event(chp->wait, !atomic_read(&chp->refcnt)); @@ -1088,7 +1088,7 @@ struct ib_cq *c4iw_create_cq(struct ib_device *ibdev, spin_lock_init(&chp->comp_handler_lock); atomic_set(&chp->refcnt, 1); init_waitqueue_head(&chp->wait); - ret = insert_handle(rhp, &rhp->cqidr, chp, chp->cq.cqid); + ret = xa_insert_irq(&rhp->cqs, chp->cq.cqid, chp, GFP_KERNEL); if (ret) goto err_destroy_cq; @@ -1143,7 +1143,7 @@ struct ib_cq *c4iw_create_cq(struct ib_device *ibdev, err_free_mm: kfree(mm); err_remove_handle: - remove_handle(rhp, &rhp->cqidr, chp->cq.cqid); + xa_erase_irq(&rhp->cqs, chp->cq.cqid); err_destroy_cq: destroy_cq(&chp->rhp->rdev, &chp->cq, ucontext ? 
&ucontext->uctx : &rhp->rdev.uctx, diff --git a/drivers/infiniband/hw/cxgb4/device.c b/drivers/infiniband/hw/cxgb4/device.c index c13c0ba30f63..d78039043c55 100644 --- a/drivers/infiniband/hw/cxgb4/device.c +++ b/drivers/infiniband/hw/cxgb4/device.c @@ -924,8 +924,6 @@ static void c4iw_rdev_close(struct c4iw_rdev *rdev) void c4iw_dealloc(struct uld_ctx *ctx) { c4iw_rdev_close(&ctx->dev->rdev); - WARN_ON_ONCE(!idr_is_empty(&ctx->dev->cqidr)); - idr_destroy(&ctx->dev->cqidr); WARN_ON_ONCE(!idr_is_empty(&ctx->dev->qpidr)); idr_destroy(&ctx->dev->qpidr); WARN_ON_ONCE(!idr_is_empty(&ctx->dev->mmidr)); @@ -1037,7 +1035,7 @@ static struct c4iw_dev *c4iw_alloc(const struct cxgb4_lld_info *infop) return ERR_PTR(ret); } - idr_init(&devp->cqidr); + xa_init_flags(&devp->cqs, XA_FLAGS_LOCK_IRQ); idr_init(&devp->qpidr); idr_init(&devp->mmidr); idr_init(&devp->hwtid_idr); diff --git a/drivers/infiniband/hw/cxgb4/ev.c b/drivers/infiniband/hw/cxgb4/ev.c index 8741d23168f3..670c2c802e45 100644 --- a/drivers/infiniband/hw/cxgb4/ev.c +++ b/drivers/infiniband/hw/cxgb4/ev.c @@ -225,11 +225,11 @@ int c4iw_ev_handler(struct c4iw_dev *dev, u32 qid) struct c4iw_cq *chp; unsigned long flag; - spin_lock_irqsave(&dev->lock, flag); - chp = get_chp(dev, qid); + xa_lock_irqsave(&dev->cqs, flag); + chp = xa_load(&dev->cqs, qid); if (chp) { atomic_inc(&chp->refcnt); - spin_unlock_irqrestore(&dev->lock, flag); + xa_unlock_irqrestore(&dev->cqs, flag); t4_clear_cq_armed(&chp->cq); spin_lock_irqsave(&chp->comp_handler_lock, flag); (*chp->ibcq.comp_handler)(&chp->ibcq, chp->ibcq.cq_context); @@ -238,7 +238,7 @@ int c4iw_ev_handler(struct c4iw_dev *dev, u32 qid) wake_up(&chp->wait); } else { pr_debug("unknown cqid 0x%x\n", qid); - spin_unlock_irqrestore(&dev->lock, flag); + xa_unlock_irqrestore(&dev->cqs, flag); } return 0; } diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h index f0fceadd0d12..6729fd7b1b42 100644 --- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h +++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h @@ -315,7 +315,7 @@ struct c4iw_dev { struct ib_device ibdev; struct c4iw_rdev rdev; u32 device_cap_flags; - struct idr cqidr; + struct xarray cqs; struct idr qpidr; struct idr mmidr; spinlock_t lock; @@ -349,7 +349,7 @@ static inline struct c4iw_dev *rdev_to_c4iw_dev(struct c4iw_rdev *rdev) static inline struct c4iw_cq *get_chp(struct c4iw_dev *rhp, u32 cqid) { - return idr_find(&rhp->cqidr, cqid); + return xa_load(&rhp->cqs, cqid); } static inline struct c4iw_qp *get_qhp(struct c4iw_dev *rhp, u32 qpid) From patchwork Thu Feb 21 00:20:51 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822945 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 50C7314E1 for ; Thu, 21 Feb 2019 00:21:26 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3FAD92FC4D for ; Thu, 21 Feb 2019 00:21:26 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 341642FC82; Thu, 21 Feb 2019 00:21:26 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham 
version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 17C362FC4D for ; Thu, 21 Feb 2019 00:21:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726773AbfBUAVX (ORCPT ); Wed, 20 Feb 2019 19:21:23 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52452 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726776AbfBUAVM (ORCPT ); Wed, 20 Feb 2019 19:21:12 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=RLrFUE2FqKjGUYYCW/Mb+6DhrNFInAjFMx56R3dLFyE=; b=G0B0ilclUmCPmfPC5pDcdDFee MDxDVcTQYg5PchHFSvH2/mKysCOsb13jK1iWrVqWzHAFk4PQB2Wfq20tdkmyXiNv1cf+2xVS2fTVr Z7DEwOuRqmbKbmzdS2NhkCzp5BwZN/9NwGdNNJyhZoapnMcuo0Ecad15U2P2/LaXvTrabjhgg9rH3 gc+15H7e2K/HoAb022uUHQAwInfTuMbrMR6PE+pgszMksYqsfuRgnUZIOoKpKmaXQYxmNVeALMSmN Vi3htHMNpKiZuWd/NdZlTso/Z06rNH5jDprf1nYwo4jEAw0SsEO9seYcvHdCzTvo+49ZsKaoVedeY inRteaJbQ==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6x-0005vn-UW; Thu, 21 Feb 2019 00:21:11 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 16/32] cxgb4: Convert qpidr to XArray Date: Wed, 20 Feb 2019 16:20:51 -0800 Message-Id: <20190221002107.22625-17-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox Acked-by: Steve Wise --- drivers/infiniband/hw/cxgb4/device.c | 120 ++++++++++--------------- drivers/infiniband/hw/cxgb4/ev.c | 10 +-- drivers/infiniband/hw/cxgb4/iw_cxgb4.h | 4 +- drivers/infiniband/hw/cxgb4/qp.c | 33 ++++--- 4 files changed, 72 insertions(+), 95 deletions(-) diff --git a/drivers/infiniband/hw/cxgb4/device.c b/drivers/infiniband/hw/cxgb4/device.c index d78039043c55..7dcf7740a732 100644 --- a/drivers/infiniband/hw/cxgb4/device.c +++ b/drivers/infiniband/hw/cxgb4/device.c @@ -250,16 +250,11 @@ static void set_ep_sin6_addrs(struct c4iw_ep *ep, } } -static int dump_qp(int id, void *p, void *data) +static int dump_qp(struct c4iw_qp *qp, struct c4iw_debugfs_data *qpd) { - struct c4iw_qp *qp = p; - struct c4iw_debugfs_data *qpd = data; int space; int cc; - if (id != qp->wq.sq.qid) - return 0; - space = qpd->bufsize - qpd->pos - 1; if (space == 0) return 1; @@ -335,7 +330,9 @@ static int qp_release(struct inode *inode, struct file *file) static int qp_open(struct inode *inode, struct file *file) { + struct c4iw_qp *qp; struct c4iw_debugfs_data *qpd; + unsigned long index; int count = 1; qpd = kmalloc(sizeof *qpd, GFP_KERNEL); @@ -345,9 +342,12 @@ static int qp_open(struct inode *inode, struct file *file) qpd->devp = inode->i_private; qpd->pos = 0; - spin_lock_irq(&qpd->devp->lock); - idr_for_each(&qpd->devp->qpidr, count_idrs, &count); - spin_unlock_irq(&qpd->devp->lock); + /* + * No need to lock; we drop the lock to call vmalloc so it's racy + * 
anyway. Someone who cares should switch this over to seq_file + */ + xa_for_each(&qpd->devp->qps, index, qp) + count++; qpd->bufsize = count * 180; qpd->buf = vmalloc(qpd->bufsize); @@ -356,9 +356,10 @@ static int qp_open(struct inode *inode, struct file *file) return -ENOMEM; } - spin_lock_irq(&qpd->devp->lock); - idr_for_each(&qpd->devp->qpidr, dump_qp, qpd); - spin_unlock_irq(&qpd->devp->lock); + xa_lock_irq(&qpd->devp->qps); + xa_for_each(&qpd->devp->qps, index, qp) + dump_qp(qp, qpd); + xa_unlock_irq(&qpd->devp->qps); qpd->buf[qpd->pos++] = 0; file->private_data = qpd; @@ -924,8 +925,6 @@ static void c4iw_rdev_close(struct c4iw_rdev *rdev) void c4iw_dealloc(struct uld_ctx *ctx) { c4iw_rdev_close(&ctx->dev->rdev); - WARN_ON_ONCE(!idr_is_empty(&ctx->dev->qpidr)); - idr_destroy(&ctx->dev->qpidr); WARN_ON_ONCE(!idr_is_empty(&ctx->dev->mmidr)); idr_destroy(&ctx->dev->mmidr); wait_event(ctx->dev->wait, idr_is_empty(&ctx->dev->hwtid_idr)); @@ -1036,7 +1035,7 @@ static struct c4iw_dev *c4iw_alloc(const struct cxgb4_lld_info *infop) } xa_init_flags(&devp->cqs, XA_FLAGS_LOCK_IRQ); - idr_init(&devp->qpidr); + xa_init_flags(&devp->qps, XA_FLAGS_LOCK_IRQ); idr_init(&devp->mmidr); idr_init(&devp->hwtid_idr); idr_init(&devp->stid_idr); @@ -1256,34 +1255,21 @@ static int c4iw_uld_state_change(void *handle, enum cxgb4_state new_state) return 0; } -static int disable_qp_db(int id, void *p, void *data) -{ - struct c4iw_qp *qp = p; - - t4_disable_wq_db(&qp->wq); - return 0; -} - static void stop_queues(struct uld_ctx *ctx) { - unsigned long flags; + struct c4iw_qp *qp; + unsigned long index, flags; - spin_lock_irqsave(&ctx->dev->lock, flags); + xa_lock_irqsave(&ctx->dev->qps, flags); ctx->dev->rdev.stats.db_state_transitions++; ctx->dev->db_state = STOPPED; - if (ctx->dev->rdev.flags & T4_STATUS_PAGE_DISABLED) - idr_for_each(&ctx->dev->qpidr, disable_qp_db, NULL); - else + if (ctx->dev->rdev.flags & T4_STATUS_PAGE_DISABLED) { + xa_for_each(&ctx->dev->qps, index, qp) + t4_disable_wq_db(&qp->wq); + } else { ctx->dev->rdev.status_page->db_off = 1; - spin_unlock_irqrestore(&ctx->dev->lock, flags); -} - -static int enable_qp_db(int id, void *p, void *data) -{ - struct c4iw_qp *qp = p; - - t4_enable_wq_db(&qp->wq); - return 0; + } + xa_unlock_irqrestore(&ctx->dev->qps, flags); } static void resume_rc_qp(struct c4iw_qp *qp) @@ -1313,18 +1299,21 @@ static void resume_a_chunk(struct uld_ctx *ctx) static void resume_queues(struct uld_ctx *ctx) { - spin_lock_irq(&ctx->dev->lock); + xa_lock_irq(&ctx->dev->qps); if (ctx->dev->db_state != STOPPED) goto out; ctx->dev->db_state = FLOW_CONTROL; while (1) { if (list_empty(&ctx->dev->db_fc_list)) { + struct c4iw_qp *qp; + unsigned long index; + WARN_ON(ctx->dev->db_state != FLOW_CONTROL); ctx->dev->db_state = NORMAL; ctx->dev->rdev.stats.db_state_transitions++; if (ctx->dev->rdev.flags & T4_STATUS_PAGE_DISABLED) { - idr_for_each(&ctx->dev->qpidr, enable_qp_db, - NULL); + xa_for_each(&ctx->dev->qps, index, qp) + t4_enable_wq_db(&qp->wq); } else { ctx->dev->rdev.status_page->db_off = 0; } @@ -1336,12 +1325,12 @@ static void resume_queues(struct uld_ctx *ctx) resume_a_chunk(ctx); } if (!list_empty(&ctx->dev->db_fc_list)) { - spin_unlock_irq(&ctx->dev->lock); + xa_unlock_irq(&ctx->dev->qps); if (DB_FC_RESUME_DELAY) { set_current_state(TASK_UNINTERRUPTIBLE); schedule_timeout(DB_FC_RESUME_DELAY); } - spin_lock_irq(&ctx->dev->lock); + xa_lock_irq(&ctx->dev->qps); if (ctx->dev->db_state != FLOW_CONTROL) break; } @@ -1350,7 +1339,7 @@ static void resume_queues(struct uld_ctx *ctx) 
out: if (ctx->dev->db_state != NORMAL) ctx->dev->rdev.stats.db_fc_interruptions++; - spin_unlock_irq(&ctx->dev->lock); + xa_unlock_irq(&ctx->dev->qps); } struct qp_list { @@ -1358,23 +1347,6 @@ struct qp_list { struct c4iw_qp **qps; }; -static int add_and_ref_qp(int id, void *p, void *data) -{ - struct qp_list *qp_listp = data; - struct c4iw_qp *qp = p; - - c4iw_qp_add_ref(&qp->ibqp); - qp_listp->qps[qp_listp->idx++] = qp; - return 0; -} - -static int count_qps(int id, void *p, void *data) -{ - unsigned *countp = data; - (*countp)++; - return 0; -} - static void deref_qps(struct qp_list *qp_list) { int idx; @@ -1391,7 +1363,7 @@ static void recover_lost_dbs(struct uld_ctx *ctx, struct qp_list *qp_list) for (idx = 0; idx < qp_list->idx; idx++) { struct c4iw_qp *qp = qp_list->qps[idx]; - spin_lock_irq(&qp->rhp->lock); + xa_lock_irq(&qp->rhp->qps); spin_lock(&qp->lock); ret = cxgb4_sync_txq_pidx(qp->rhp->rdev.lldi.ports[0], qp->wq.sq.qid, @@ -1401,7 +1373,7 @@ static void recover_lost_dbs(struct uld_ctx *ctx, struct qp_list *qp_list) pr_err("%s: Fatal error - DB overflow recovery failed - error syncing SQ qid %u\n", pci_name(ctx->lldi.pdev), qp->wq.sq.qid); spin_unlock(&qp->lock); - spin_unlock_irq(&qp->rhp->lock); + xa_unlock_irq(&qp->rhp->qps); return; } qp->wq.sq.wq_pidx_inc = 0; @@ -1415,12 +1387,12 @@ static void recover_lost_dbs(struct uld_ctx *ctx, struct qp_list *qp_list) pr_err("%s: Fatal error - DB overflow recovery failed - error syncing RQ qid %u\n", pci_name(ctx->lldi.pdev), qp->wq.rq.qid); spin_unlock(&qp->lock); - spin_unlock_irq(&qp->rhp->lock); + xa_unlock_irq(&qp->rhp->qps); return; } qp->wq.rq.wq_pidx_inc = 0; spin_unlock(&qp->lock); - spin_unlock_irq(&qp->rhp->lock); + xa_unlock_irq(&qp->rhp->qps); /* Wait for the dbfifo to drain */ while (cxgb4_dbfifo_count(qp->rhp->rdev.lldi.ports[0], 1) > 0) { @@ -1432,6 +1404,8 @@ static void recover_lost_dbs(struct uld_ctx *ctx, struct qp_list *qp_list) static void recover_queues(struct uld_ctx *ctx) { + struct c4iw_qp *qp; + unsigned long index; int count = 0; struct qp_list qp_list; int ret; @@ -1449,22 +1423,26 @@ static void recover_queues(struct uld_ctx *ctx) } /* Count active queues so we can build a list of queues to recover */ - spin_lock_irq(&ctx->dev->lock); + xa_lock_irq(&ctx->dev->qps); WARN_ON(ctx->dev->db_state != STOPPED); ctx->dev->db_state = RECOVERY; - idr_for_each(&ctx->dev->qpidr, count_qps, &count); + xa_for_each(&ctx->dev->qps, index, qp) + count++; qp_list.qps = kcalloc(count, sizeof(*qp_list.qps), GFP_ATOMIC); if (!qp_list.qps) { - spin_unlock_irq(&ctx->dev->lock); + xa_unlock_irq(&ctx->dev->qps); return; } qp_list.idx = 0; /* add and ref each qp so it doesn't get freed */ - idr_for_each(&ctx->dev->qpidr, add_and_ref_qp, &qp_list); + xa_for_each(&ctx->dev->qps, index, qp) { + c4iw_qp_add_ref(&qp->ibqp); + qp_list.qps[qp_list.idx++] = qp; + } - spin_unlock_irq(&ctx->dev->lock); + xa_unlock_irq(&ctx->dev->qps); /* now traverse the list in a safe context to recover the db state*/ recover_lost_dbs(ctx, &qp_list); @@ -1473,10 +1451,10 @@ static void recover_queues(struct uld_ctx *ctx) deref_qps(&qp_list); kfree(qp_list.qps); - spin_lock_irq(&ctx->dev->lock); + xa_lock_irq(&ctx->dev->qps); WARN_ON(ctx->dev->db_state != RECOVERY); ctx->dev->db_state = STOPPED; - spin_unlock_irq(&ctx->dev->lock); + xa_unlock_irq(&ctx->dev->qps); } static int c4iw_uld_control(void *handle, enum cxgb4_control control, ...) 
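[Editor's note, not part of the series: the device.c hunks above all follow one pattern. An IDR guarded by the driver's own spinlock becomes an XArray created with XA_FLAGS_LOCK_IRQ, the old spin_lock_irq(&dev->lock) calls become xa_lock_irq()/xa_lock_irqsave() on the XArray's internal lock, and the idr_for_each() callbacks are replaced by open-coded xa_for_each() loops. The sketch below shows that pattern in isolation for readers unfamiliar with the XArray API; "my_obj" and "objs" are invented names for illustration only and do not appear in these patches.]

/*
 * Illustrative sketch of the IDR -> XArray conversion pattern used in
 * this series. Invented names; builds only as kernel code.
 */
#include <linux/types.h>
#include <linux/xarray.h>

struct my_obj {
	u32 id;
};

/* Was: struct idr objs; plus a separate spinlock_t lock; */
static DEFINE_XARRAY_FLAGS(objs, XA_FLAGS_LOCK_IRQ);

static int my_obj_insert(struct my_obj *obj)
{
	/* Was: idr_preload(); spin_lock_irq(); idr_alloc(id, id + 1); */
	return xa_insert_irq(&objs, obj->id, obj, GFP_KERNEL);
}

static void my_obj_remove(u32 id)
{
	/* Was: spin_lock_irq(); idr_remove(); spin_unlock_irq(); */
	xa_erase_irq(&objs, id);
}

static struct my_obj *my_obj_lookup(u32 id)
{
	/* Was: idr_find() under the driver lock; xa_load() is RCU-safe. */
	return xa_load(&objs, id);
}

static void my_obj_walk(void (*fn)(struct my_obj *obj))
{
	struct my_obj *obj;
	unsigned long index;

	/* Was: spin_lock_irq(); idr_for_each(&objs, callback, data); */
	xa_lock_irq(&objs);
	xa_for_each(&objs, index, obj)
		fn(obj);
	xa_unlock_irq(&objs);
}

[As in the patches, lookups that previously needed the driver lock can call xa_load() directly, while walks that must exclude concurrent insert/erase take xa_lock_irq() around the xa_for_each() loop.]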
diff --git a/drivers/infiniband/hw/cxgb4/ev.c b/drivers/infiniband/hw/cxgb4/ev.c index 670c2c802e45..4cd877bd2f56 100644 --- a/drivers/infiniband/hw/cxgb4/ev.c +++ b/drivers/infiniband/hw/cxgb4/ev.c @@ -123,15 +123,15 @@ void c4iw_ev_dispatch(struct c4iw_dev *dev, struct t4_cqe *err_cqe) struct c4iw_qp *qhp; u32 cqid; - spin_lock_irq(&dev->lock); - qhp = get_qhp(dev, CQE_QPID(err_cqe)); + xa_lock_irq(&dev->qps); + qhp = xa_load(&dev->qps, CQE_QPID(err_cqe)); if (!qhp) { pr_err("BAD AE qpid 0x%x opcode %d status 0x%x type %d wrid.hi 0x%x wrid.lo 0x%x\n", CQE_QPID(err_cqe), CQE_OPCODE(err_cqe), CQE_STATUS(err_cqe), CQE_TYPE(err_cqe), CQE_WRID_HI(err_cqe), CQE_WRID_LOW(err_cqe)); - spin_unlock_irq(&dev->lock); + xa_unlock_irq(&dev->qps); goto out; } @@ -146,13 +146,13 @@ void c4iw_ev_dispatch(struct c4iw_dev *dev, struct t4_cqe *err_cqe) CQE_OPCODE(err_cqe), CQE_STATUS(err_cqe), CQE_TYPE(err_cqe), CQE_WRID_HI(err_cqe), CQE_WRID_LOW(err_cqe)); - spin_unlock_irq(&dev->lock); + xa_unlock_irq(&dev->qps); goto out; } c4iw_qp_add_ref(&qhp->ibqp); atomic_inc(&chp->refcnt); - spin_unlock_irq(&dev->lock); + xa_unlock_irq(&dev->qps); /* Bad incoming write */ if (RQ_TYPE(err_cqe) && diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h index 6729fd7b1b42..79ceab2fd756 100644 --- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h +++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h @@ -316,7 +316,7 @@ struct c4iw_dev { struct c4iw_rdev rdev; u32 device_cap_flags; struct xarray cqs; - struct idr qpidr; + struct xarray qps; struct idr mmidr; spinlock_t lock; struct mutex db_mutex; @@ -354,7 +354,7 @@ static inline struct c4iw_cq *get_chp(struct c4iw_dev *rhp, u32 cqid) static inline struct c4iw_qp *get_qhp(struct c4iw_dev *rhp, u32 qpid) { - return idr_find(&rhp->qpidr, qpid); + return xa_load(&rhp->qps, qpid); } static inline struct c4iw_mr *get_mhp(struct c4iw_dev *rhp, u32 mmid) diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c index 504cf525508f..b13ec5b7d6a7 100644 --- a/drivers/infiniband/hw/cxgb4/qp.c +++ b/drivers/infiniband/hw/cxgb4/qp.c @@ -62,12 +62,12 @@ static int alloc_ird(struct c4iw_dev *dev, u32 ird) { int ret = 0; - spin_lock_irq(&dev->lock); + xa_lock_irq(&dev->qps); if (ird <= dev->avail_ird) dev->avail_ird -= ird; else ret = -ENOMEM; - spin_unlock_irq(&dev->lock); + xa_unlock_irq(&dev->qps); if (ret) dev_warn(&dev->rdev.lldi.pdev->dev, @@ -78,9 +78,9 @@ static int alloc_ird(struct c4iw_dev *dev, u32 ird) static void free_ird(struct c4iw_dev *dev, int ird) { - spin_lock_irq(&dev->lock); + xa_lock_irq(&dev->qps); dev->avail_ird += ird; - spin_unlock_irq(&dev->lock); + xa_unlock_irq(&dev->qps); } static void set_state(struct c4iw_qp *qhp, enum c4iw_qp_state state) @@ -934,7 +934,7 @@ static int ring_kernel_sq_db(struct c4iw_qp *qhp, u16 inc) { unsigned long flags; - spin_lock_irqsave(&qhp->rhp->lock, flags); + xa_lock_irqsave(&qhp->rhp->qps, flags); spin_lock(&qhp->lock); if (qhp->rhp->db_state == NORMAL) t4_ring_sq_db(&qhp->wq, inc, NULL); @@ -943,7 +943,7 @@ static int ring_kernel_sq_db(struct c4iw_qp *qhp, u16 inc) qhp->wq.sq.wq_pidx_inc += inc; } spin_unlock(&qhp->lock); - spin_unlock_irqrestore(&qhp->rhp->lock, flags); + xa_unlock_irqrestore(&qhp->rhp->qps, flags); return 0; } @@ -951,7 +951,7 @@ static int ring_kernel_rq_db(struct c4iw_qp *qhp, u16 inc) { unsigned long flags; - spin_lock_irqsave(&qhp->rhp->lock, flags); + xa_lock_irqsave(&qhp->rhp->qps, flags); spin_lock(&qhp->lock); if (qhp->rhp->db_state == NORMAL) 
t4_ring_rq_db(&qhp->wq, inc, NULL); @@ -960,7 +960,7 @@ static int ring_kernel_rq_db(struct c4iw_qp *qhp, u16 inc) qhp->wq.rq.wq_pidx_inc += inc; } spin_unlock(&qhp->lock); - spin_unlock_irqrestore(&qhp->rhp->lock, flags); + xa_unlock_irqrestore(&qhp->rhp->qps, flags); return 0; } @@ -2105,12 +2105,11 @@ int c4iw_destroy_qp(struct ib_qp *ib_qp) c4iw_modify_qp(rhp, qhp, C4IW_QP_ATTR_NEXT_STATE, &attrs, 0); wait_event(qhp->wait, !qhp->ep); - remove_handle(rhp, &rhp->qpidr, qhp->wq.sq.qid); - - spin_lock_irq(&rhp->lock); + xa_lock_irq(&rhp->qps); + __xa_erase(&rhp->qps, qhp->wq.sq.qid); if (!list_empty(&qhp->db_fc_entry)) list_del_init(&qhp->db_fc_entry); - spin_unlock_irq(&rhp->lock); + xa_unlock_irq(&rhp->qps); free_ird(rhp, qhp->attr.max_ird); c4iw_qp_rem_ref(ib_qp); @@ -2229,7 +2228,7 @@ struct ib_qp *c4iw_create_qp(struct ib_pd *pd, struct ib_qp_init_attr *attrs, kref_init(&qhp->kref); INIT_WORK(&qhp->free_work, free_qp_work); - ret = insert_handle(rhp, &rhp->qpidr, qhp, qhp->wq.sq.qid); + ret = xa_insert_irq(&rhp->qps, qhp->wq.sq.qid, qhp, GFP_KERNEL); if (ret) goto err_destroy_qp; @@ -2366,7 +2365,7 @@ struct ib_qp *c4iw_create_qp(struct ib_pd *pd, struct ib_qp_init_attr *attrs, err_free_sq_key: kfree(sq_key_mm); err_remove_handle: - remove_handle(rhp, &rhp->qpidr, qhp->wq.sq.qid); + xa_erase_irq(&rhp->qps, qhp->wq.sq.qid); err_destroy_qp: destroy_qp(&rhp->rdev, &qhp->wq, ucontext ? &ucontext->uctx : &rhp->rdev.uctx, !attrs->srq); @@ -2755,7 +2754,7 @@ struct ib_srq *c4iw_create_srq(struct ib_pd *pd, struct ib_srq_init_attr *attrs, if (CHELSIO_CHIP_VERSION(rhp->rdev.lldi.adapter_type) > CHELSIO_T6) srq->flags = T4_SRQ_LIMIT_SUPPORT; - ret = insert_handle(rhp, &rhp->qpidr, srq, srq->wq.qid); + ret = xa_insert_irq(&rhp->qps, srq->wq.qid, srq, GFP_KERNEL); if (ret) goto err_free_queue; @@ -2807,7 +2806,7 @@ struct ib_srq *c4iw_create_srq(struct ib_pd *pd, struct ib_srq_init_attr *attrs, err_free_srq_key_mm: kfree(srq_key_mm); err_remove_handle: - remove_handle(rhp, &rhp->qpidr, srq->wq.qid); + xa_erase_irq(&rhp->qps, srq->wq.qid); err_free_queue: free_srq_queue(srq, ucontext ? &ucontext->uctx : &rhp->rdev.uctx, srq->wr_waitp); @@ -2833,7 +2832,7 @@ int c4iw_destroy_srq(struct ib_srq *ibsrq) pr_debug("%s id %d\n", __func__, srq->wq.qid); - remove_handle(rhp, &rhp->qpidr, srq->wq.qid); + xa_erase_irq(&rhp->qps, srq->wq.qid); ucontext = ibsrq->uobject ? to_c4iw_ucontext(ibsrq->uobject->context) : NULL; free_srq_queue(srq, ucontext ? 
&ucontext->uctx : &rhp->rdev.uctx, From patchwork Thu Feb 21 00:20:52 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822933 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4D95C184E for ; Thu, 21 Feb 2019 00:21:23 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 396552FC4D for ; Thu, 21 Feb 2019 00:21:23 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2E0202FC7C; Thu, 21 Feb 2019 00:21:23 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9EFF42FC81 for ; Thu, 21 Feb 2019 00:21:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726818AbfBUAVV (ORCPT ); Wed, 20 Feb 2019 19:21:21 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52456 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726803AbfBUAVM (ORCPT ); Wed, 20 Feb 2019 19:21:12 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=X7lMY8Du2/fQuEb4fhE/MvD1GP3GydY9B8Eyaimx/WA=; b=YLS3r4Dc1ASDnVmjfIGrMxB3W 2IlbmW7UeVoT8u2PRdindchdlQr5WABTmG7N+K6gJLwT38GPhLeVh63OZGPCFnaPyfdKxErUGSBDP eujqTzd7vw1a1udI7zn0Ws8oQlqsjbXgqvuVXo7HYwDq63v0GympQ3eYVt4SEB/o3PX6FD9+oIxuN g7nLLBXLhIrThVTWus43fNUMklpubinJIPaeZdbwVr4EFno5WfwAobQo+85SR1EmOZl+HFImUbL66 LYCQSOR8xrrP43RGESVNIBmEjBwKv8UHL5rmlO1uMs8DwIn91Oc/d5lzWWUz5CTSeABzoFaA/+CGL /CwG2ZppA==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6y-0005vt-33; Thu, 21 Feb 2019 00:21:12 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 17/32] cxgb4: Convert mmidr to XArray Date: Wed, 20 Feb 2019 16:20:52 -0800 Message-Id: <20190221002107.22625-18-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox Acked-by: Steve Wise --- drivers/infiniband/hw/cxgb4/device.c | 21 ++++++++++----------- drivers/infiniband/hw/cxgb4/iw_cxgb4.h | 6 +----- drivers/infiniband/hw/cxgb4/mem.c | 16 ++++++++-------- 3 files changed, 19 insertions(+), 24 deletions(-) diff --git a/drivers/infiniband/hw/cxgb4/device.c b/drivers/infiniband/hw/cxgb4/device.c index 7dcf7740a732..4e9d5bd880d4 100644 --- a/drivers/infiniband/hw/cxgb4/device.c +++ 
b/drivers/infiniband/hw/cxgb4/device.c @@ -374,9 +374,8 @@ static const struct file_operations qp_debugfs_fops = { .llseek = default_llseek, }; -static int dump_stag(int id, void *p, void *data) +static int dump_stag(unsigned long id, struct c4iw_debugfs_data *stagd) { - struct c4iw_debugfs_data *stagd = data; int space; int cc; struct fw_ri_tpte tpte; @@ -425,6 +424,8 @@ static int stag_release(struct inode *inode, struct file *file) static int stag_open(struct inode *inode, struct file *file) { struct c4iw_debugfs_data *stagd; + void *p; + unsigned long index; int ret = 0; int count = 1; @@ -436,9 +437,8 @@ static int stag_open(struct inode *inode, struct file *file) stagd->devp = inode->i_private; stagd->pos = 0; - spin_lock_irq(&stagd->devp->lock); - idr_for_each(&stagd->devp->mmidr, count_idrs, &count); - spin_unlock_irq(&stagd->devp->lock); + xa_for_each(&stagd->devp->mrs, index, p) + count++; stagd->bufsize = count * 256; stagd->buf = vmalloc(stagd->bufsize); @@ -447,9 +447,10 @@ static int stag_open(struct inode *inode, struct file *file) goto err1; } - spin_lock_irq(&stagd->devp->lock); - idr_for_each(&stagd->devp->mmidr, dump_stag, stagd); - spin_unlock_irq(&stagd->devp->lock); + xa_lock_irq(&stagd->devp->mrs); + xa_for_each(&stagd->devp->mrs, index, p) + dump_stag(index, stagd); + xa_unlock_irq(&stagd->devp->mrs); stagd->buf[stagd->pos++] = 0; file->private_data = stagd; @@ -925,8 +926,6 @@ static void c4iw_rdev_close(struct c4iw_rdev *rdev) void c4iw_dealloc(struct uld_ctx *ctx) { c4iw_rdev_close(&ctx->dev->rdev); - WARN_ON_ONCE(!idr_is_empty(&ctx->dev->mmidr)); - idr_destroy(&ctx->dev->mmidr); wait_event(ctx->dev->wait, idr_is_empty(&ctx->dev->hwtid_idr)); idr_destroy(&ctx->dev->hwtid_idr); idr_destroy(&ctx->dev->stid_idr); @@ -1036,7 +1035,7 @@ static struct c4iw_dev *c4iw_alloc(const struct cxgb4_lld_info *infop) xa_init_flags(&devp->cqs, XA_FLAGS_LOCK_IRQ); xa_init_flags(&devp->qps, XA_FLAGS_LOCK_IRQ); - idr_init(&devp->mmidr); + xa_init_flags(&devp->mrs, XA_FLAGS_LOCK_IRQ); idr_init(&devp->hwtid_idr); idr_init(&devp->stid_idr); idr_init(&devp->atid_idr); diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h index 79ceab2fd756..f24f503971ef 100644 --- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h +++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h @@ -317,7 +317,7 @@ struct c4iw_dev { u32 device_cap_flags; struct xarray cqs; struct xarray qps; - struct idr mmidr; + struct xarray mrs; spinlock_t lock; struct mutex db_mutex; struct dentry *debugfs_root; @@ -357,10 +357,6 @@ static inline struct c4iw_qp *get_qhp(struct c4iw_dev *rhp, u32 qpid) return xa_load(&rhp->qps, qpid); } -static inline struct c4iw_mr *get_mhp(struct c4iw_dev *rhp, u32 mmid) -{ - return idr_find(&rhp->mmidr, mmid); -} static inline int _insert_handle(struct c4iw_dev *rhp, struct idr *idr, void *handle, u32 id, int lock) diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c index 7b76e6f81aeb..8f97e4791fd0 100644 --- a/drivers/infiniband/hw/cxgb4/mem.c +++ b/drivers/infiniband/hw/cxgb4/mem.c @@ -395,7 +395,7 @@ static int finish_mem_reg(struct c4iw_mr *mhp, u32 stag) mhp->ibmr.iova = mhp->attr.va_fbo; mhp->ibmr.page_size = 1U << (mhp->attr.page_size + 12); pr_debug("mmid 0x%x mhp %p\n", mmid, mhp); - return insert_handle(mhp->rhp, &mhp->rhp->mmidr, mhp, mmid); + return xa_insert_irq(&mhp->rhp->mrs, mmid, mhp, GFP_KERNEL); } static int register_mem(struct c4iw_dev *rhp, struct c4iw_pd *php, @@ -651,7 +651,7 @@ struct ib_mw *c4iw_alloc_mw(struct ib_pd 
*pd, enum ib_mw_type type, mhp->attr.stag = stag; mmid = (stag) >> 8; mhp->ibmw.rkey = stag; - if (insert_handle(rhp, &rhp->mmidr, mhp, mmid)) { + if (xa_insert_irq(&rhp->mrs, mmid, mhp, GFP_KERNEL)) { ret = -ENOMEM; goto dealloc_win; } @@ -679,7 +679,7 @@ int c4iw_dealloc_mw(struct ib_mw *mw) mhp = to_c4iw_mw(mw); rhp = mhp->rhp; mmid = (mw->rkey) >> 8; - remove_handle(rhp, &rhp->mmidr, mmid); + xa_erase_irq(&rhp->mrs, mmid); deallocate_window(&rhp->rdev, mhp->attr.stag, mhp->dereg_skb, mhp->wr_waitp); kfree_skb(mhp->dereg_skb); @@ -746,7 +746,7 @@ struct ib_mr *c4iw_alloc_mr(struct ib_pd *pd, mhp->attr.state = 0; mmid = (stag) >> 8; mhp->ibmr.rkey = mhp->ibmr.lkey = stag; - if (insert_handle(rhp, &rhp->mmidr, mhp, mmid)) { + if (xa_insert_irq(&rhp->mrs, mmid, mhp, GFP_KERNEL)) { ret = -ENOMEM; goto err_dereg; } @@ -803,7 +803,7 @@ int c4iw_dereg_mr(struct ib_mr *ib_mr) mhp = to_c4iw_mr(ib_mr); rhp = mhp->rhp; mmid = mhp->attr.stag >> 8; - remove_handle(rhp, &rhp->mmidr, mmid); + xa_erase_irq(&rhp->mrs, mmid); if (mhp->mpl) dma_free_coherent(&mhp->rhp->rdev.lldi.pdev->dev, mhp->max_mpl_len, mhp->mpl, mhp->mpl_addr); @@ -827,9 +827,9 @@ void c4iw_invalidate_mr(struct c4iw_dev *rhp, u32 rkey) struct c4iw_mr *mhp; unsigned long flags; - spin_lock_irqsave(&rhp->lock, flags); - mhp = get_mhp(rhp, rkey >> 8); + xa_lock_irqsave(&rhp->mrs, flags); + mhp = xa_load(&rhp->mrs, rkey >> 8); if (mhp) mhp->attr.state = 0; - spin_unlock_irqrestore(&rhp->lock, flags); + xa_unlock_irqrestore(&rhp->mrs, flags); } From patchwork Thu Feb 21 00:20:53 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822931 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C4D361805 for ; Thu, 21 Feb 2019 00:21:22 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B2B9C2FC4D for ; Thu, 21 Feb 2019 00:21:22 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A74A82FC82; Thu, 21 Feb 2019 00:21:22 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2A40A2FC7C for ; Thu, 21 Feb 2019 00:21:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726869AbfBUAVV (ORCPT ); Wed, 20 Feb 2019 19:21:21 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52460 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726818AbfBUAVM (ORCPT ); Wed, 20 Feb 2019 19:21:12 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=+Js6YcyQf5NJ0dSHzfZV3cSCFKo1jbzaK26i/jx7viw=; 
b=NusWI+yBCDm2VD9d6+oqiLNzB +MSnPGUZ9i/GZ9BhcTYWoDLLSqJBEy/RceK8vvIFHDDVV5YdrRsGjdACa7GtZd5VfnzPlO2hJTPB2 1Sm7mLSzqrZFggpXCRgtrm/j7b1VDGPhMhm79nIZ+f8sx8cudIDGSSC0wHimMuaD0JPaSPlnHdB6B q/d8rKbUV7j2Xwi7kYIWkNZzxeFpWsEFT/WiM4UCz3g8fXyWycyAeZ4Vr4vSRu1CypCyCvqwyikSH r1qhsBT3itJf5iNZo2PpNY8EdiW3DU7KMJWLDGL3I4eF2uCxc8DMyahEdoA678BJESJgJpiOFvyFX sj+li+LwQ==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6y-0005w2-8A; Thu, 21 Feb 2019 00:21:12 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 18/32] cxgb4: Convert hwtid_idr to XArray Date: Wed, 20 Feb 2019 16:20:53 -0800 Message-Id: <20190221002107.22625-19-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox Acked-by: Steve Wise --- drivers/infiniband/hw/cxgb4/cm.c | 27 ++++++++++++++------------ drivers/infiniband/hw/cxgb4/device.c | 26 ++++++++++++++++--------- drivers/infiniband/hw/cxgb4/iw_cxgb4.h | 2 +- 3 files changed, 33 insertions(+), 22 deletions(-) diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c index 8221813219e5..b31adec05009 100644 --- a/drivers/infiniband/hw/cxgb4/cm.c +++ b/drivers/infiniband/hw/cxgb4/cm.c @@ -331,20 +331,23 @@ static void remove_ep_tid(struct c4iw_ep *ep) { unsigned long flags; - spin_lock_irqsave(&ep->com.dev->lock, flags); - _remove_handle(ep->com.dev, &ep->com.dev->hwtid_idr, ep->hwtid, 0); - if (idr_is_empty(&ep->com.dev->hwtid_idr)) + xa_lock_irqsave(&ep->com.dev->hwtids, flags); + __xa_erase(&ep->com.dev->hwtids, ep->hwtid); + if (xa_empty(&ep->com.dev->hwtids)) wake_up(&ep->com.dev->wait); - spin_unlock_irqrestore(&ep->com.dev->lock, flags); + xa_unlock_irqrestore(&ep->com.dev->hwtids, flags); } -static void insert_ep_tid(struct c4iw_ep *ep) +static int insert_ep_tid(struct c4iw_ep *ep) { unsigned long flags; + int err; + + xa_lock_irqsave(&ep->com.dev->hwtids, flags); + err = __xa_insert(&ep->com.dev->hwtids, ep->hwtid, ep, GFP_KERNEL); + xa_unlock_irqrestore(&ep->com.dev->hwtids, flags); - spin_lock_irqsave(&ep->com.dev->lock, flags); - _insert_handle(ep->com.dev, &ep->com.dev->hwtid_idr, ep, ep->hwtid, 0); - spin_unlock_irqrestore(&ep->com.dev->lock, flags); + return err; } /* @@ -355,11 +358,11 @@ static struct c4iw_ep *get_ep_from_tid(struct c4iw_dev *dev, unsigned int tid) struct c4iw_ep *ep; unsigned long flags; - spin_lock_irqsave(&dev->lock, flags); - ep = idr_find(&dev->hwtid_idr, tid); + xa_lock_irqsave(&dev->hwtids, flags); + ep = xa_load(&dev->hwtids, tid); if (ep) c4iw_get_ep(&ep->com); - spin_unlock_irqrestore(&dev->lock, flags); + xa_unlock_irqrestore(&dev->hwtids, flags); return ep; } @@ -2870,7 +2873,7 @@ static int peer_abort(struct c4iw_dev *dev, struct sk_buff *skb) (const u32 *)&sin6->sin6_addr.s6_addr, 1); } - remove_handle(ep->com.dev, &ep->com.dev->hwtid_idr, ep->hwtid); + xa_erase_irq(&ep->com.dev->hwtids, ep->hwtid); cxgb4_remove_tid(ep->com.dev->rdev.lldi.tids, 0, ep->hwtid, ep->com.local_addr.ss_family); dst_release(ep->dst); diff --git a/drivers/infiniband/hw/cxgb4/device.c b/drivers/infiniband/hw/cxgb4/device.c index 4e9d5bd880d4..7fae6e68c8f4 100644 --- a/drivers/infiniband/hw/cxgb4/device.c +++ b/drivers/infiniband/hw/cxgb4/device.c @@ -560,10 
+560,8 @@ static const struct file_operations stats_debugfs_fops = { .write = stats_clear, }; -static int dump_ep(int id, void *p, void *data) +static int dump_ep(struct c4iw_ep *ep, struct c4iw_debugfs_data *epd) { - struct c4iw_ep *ep = p; - struct c4iw_debugfs_data *epd = data; int space; int cc; @@ -619,6 +617,11 @@ static int dump_ep(int id, void *p, void *data) return 0; } +static int _dump_ep(int id, void *p, void *data) +{ + return dump_ep(p, data); +} + static int dump_listen_ep(int id, void *p, void *data) { struct c4iw_listen_ep *ep = p; @@ -676,6 +679,8 @@ static int ep_release(struct inode *inode, struct file *file) static int ep_open(struct inode *inode, struct file *file) { + struct c4iw_ep *ep; + unsigned long index; struct c4iw_debugfs_data *epd; int ret = 0; int count = 1; @@ -688,8 +693,9 @@ static int ep_open(struct inode *inode, struct file *file) epd->devp = inode->i_private; epd->pos = 0; + xa_for_each(&epd->devp->hwtids, index, ep) + count++; spin_lock_irq(&epd->devp->lock); - idr_for_each(&epd->devp->hwtid_idr, count_idrs, &count); idr_for_each(&epd->devp->atid_idr, count_idrs, &count); idr_for_each(&epd->devp->stid_idr, count_idrs, &count); spin_unlock_irq(&epd->devp->lock); @@ -701,9 +707,12 @@ static int ep_open(struct inode *inode, struct file *file) goto err1; } + xa_lock_irq(&epd->devp->hwtids); + xa_for_each(&epd->devp->hwtids, index, ep) + dump_ep(ep, epd); + xa_unlock_irq(&epd->devp->hwtids); spin_lock_irq(&epd->devp->lock); - idr_for_each(&epd->devp->hwtid_idr, dump_ep, epd); - idr_for_each(&epd->devp->atid_idr, dump_ep, epd); + idr_for_each(&epd->devp->atid_idr, _dump_ep, epd); idr_for_each(&epd->devp->stid_idr, dump_listen_ep, epd); spin_unlock_irq(&epd->devp->lock); @@ -926,8 +935,7 @@ static void c4iw_rdev_close(struct c4iw_rdev *rdev) void c4iw_dealloc(struct uld_ctx *ctx) { c4iw_rdev_close(&ctx->dev->rdev); - wait_event(ctx->dev->wait, idr_is_empty(&ctx->dev->hwtid_idr)); - idr_destroy(&ctx->dev->hwtid_idr); + wait_event(ctx->dev->wait, xa_empty(&ctx->dev->hwtids)); idr_destroy(&ctx->dev->stid_idr); idr_destroy(&ctx->dev->atid_idr); if (ctx->dev->rdev.bar2_kva) @@ -1036,7 +1044,7 @@ static struct c4iw_dev *c4iw_alloc(const struct cxgb4_lld_info *infop) xa_init_flags(&devp->cqs, XA_FLAGS_LOCK_IRQ); xa_init_flags(&devp->qps, XA_FLAGS_LOCK_IRQ); xa_init_flags(&devp->mrs, XA_FLAGS_LOCK_IRQ); - idr_init(&devp->hwtid_idr); + xa_init_flags(&devp->hwtids, XA_FLAGS_LOCK_IRQ); idr_init(&devp->stid_idr); idr_init(&devp->atid_idr); spin_lock_init(&devp->lock); diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h index f24f503971ef..37003c3b8d51 100644 --- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h +++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h @@ -322,7 +322,7 @@ struct c4iw_dev { struct mutex db_mutex; struct dentry *debugfs_root; enum db_state db_state; - struct idr hwtid_idr; + struct xarray hwtids; struct idr atid_idr; struct idr stid_idr; struct list_head db_fc_list; From patchwork Thu Feb 21 00:20:54 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822903 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AE1F413A4 for ; Thu, 21 Feb 2019 00:21:16 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by 
mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9D7F72FC4D for ; Thu, 21 Feb 2019 00:21:16 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 91D392FC82; Thu, 21 Feb 2019 00:21:16 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0BF502FC7C for ; Thu, 21 Feb 2019 00:21:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726924AbfBUAVO (ORCPT ); Wed, 20 Feb 2019 19:21:14 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52464 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726825AbfBUAVM (ORCPT ); Wed, 20 Feb 2019 19:21:12 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=5lGKzXGAINiFss2ErEcK79AdqbeWNc6Nkxw616Xvcac=; b=GbEMZoTQkGag9TBRqz4KWvbwH fPIy9SeNDJ93qof/DOgGWpDNC07FX6EE/by0ymqHBix9ZMx6hcRn0dAJE48yOxCjvSBjDs5ym8Qst BIQFIttvagHX/3KIdvPOtfQNTQ7ay9KP8wXKuC4blhvJRctdGPWI7dyzBA3IK9NULOzDIc/VuJcov qW8ktiHg4DQLjIqhhpcKcC60JBpDoTJzP9H85VvBpYR1qjJogqKsUGB6jKJrTbsdUWrqfPH/GO3BR QaFDMwsT0maN0mYAkJT8UQQTlwku4sJXMOelClho0dEpdYJL0bscWqULdCqxRKGQQXS1BOmD5yZFu o0lKTNNwg==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6y-0005w9-Df; Thu, 21 Feb 2019 00:21:12 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 19/32] cxgb4: Convert atid_idr to XArray Date: Wed, 20 Feb 2019 16:20:54 -0800 Message-Id: <20190221002107.22625-20-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox --- drivers/infiniband/hw/cxgb4/cm.c | 25 +++++++++++++++---------- drivers/infiniband/hw/cxgb4/device.c | 16 +++++++--------- drivers/infiniband/hw/cxgb4/iw_cxgb4.h | 2 +- 3 files changed, 23 insertions(+), 20 deletions(-) diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c index b31adec05009..3a0d4ce471c8 100644 --- a/drivers/infiniband/hw/cxgb4/cm.c +++ b/drivers/infiniband/hw/cxgb4/cm.c @@ -558,7 +558,7 @@ static void act_open_req_arp_failure(void *handle, struct sk_buff *skb) cxgb4_clip_release(ep->com.dev->rdev.lldi.ports[0], (const u32 *)&sin6->sin6_addr.s6_addr, 1); } - remove_handle(ep->com.dev, &ep->com.dev->atid_idr, ep->atid); + xa_erase_irq(&ep->com.dev->atids, ep->atid); cxgb4_free_atid(ep->com.dev->rdev.lldi.tids, ep->atid); queue_arp_failure_cpl(ep, skb, FAKE_CPL_PUT_EP_SAFE); } @@ -1201,7 +1201,7 @@ static int act_establish(struct c4iw_dev *dev, struct sk_buff *skb) set_emss(ep, tcp_opt); /* dealloc the atid */ - remove_handle(ep->com.dev, 
&ep->com.dev->atid_idr, atid); + xa_erase_irq(&ep->com.dev->atids, atid); cxgb4_free_atid(t, atid); set_bit(ACT_ESTAB, &ep->com.history); @@ -2147,7 +2147,9 @@ static int c4iw_reconnect(struct c4iw_ep *ep) err = -ENOMEM; goto fail2; } - insert_handle(ep->com.dev, &ep->com.dev->atid_idr, ep, ep->atid); + err = xa_insert_irq(&ep->com.dev->atids, ep->atid, ep, GFP_KERNEL); + if (err) + goto fail5; /* find a route */ if (ep->com.cm_id->m_local_addr.ss_family == AF_INET) { @@ -2198,7 +2200,8 @@ static int c4iw_reconnect(struct c4iw_ep *ep) fail4: dst_release(ep->dst); fail3: - remove_handle(ep->com.dev, &ep->com.dev->atid_idr, ep->atid); + xa_erase_irq(&ep->com.dev->atids, ep->atid); +fail5: cxgb4_free_atid(ep->com.dev->rdev.lldi.tids, ep->atid); fail2: /* @@ -2281,8 +2284,7 @@ static int act_open_rpl(struct c4iw_dev *dev, struct sk_buff *skb) (const u32 *) &sin6->sin6_addr.s6_addr, 1); } - remove_handle(ep->com.dev, &ep->com.dev->atid_idr, - atid); + xa_erase_irq(&ep->com.dev->atids, atid); cxgb4_free_atid(t, atid); dst_release(ep->dst); cxgb4_l2t_release(ep->l2t); @@ -2319,7 +2321,7 @@ static int act_open_rpl(struct c4iw_dev *dev, struct sk_buff *skb) cxgb4_remove_tid(ep->com.dev->rdev.lldi.tids, 0, GET_TID(rpl), ep->com.local_addr.ss_family); - remove_handle(ep->com.dev, &ep->com.dev->atid_idr, atid); + xa_erase_irq(&ep->com.dev->atids, atid); cxgb4_free_atid(t, atid); dst_release(ep->dst); cxgb4_l2t_release(ep->l2t); @@ -3265,7 +3267,9 @@ int c4iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param) err = -ENOMEM; goto fail2; } - insert_handle(dev, &dev->atid_idr, ep, ep->atid); + err = xa_insert_irq(&dev->atids, ep->atid, ep, GFP_KERNEL); + if (err) + goto fail5; memcpy(&ep->com.local_addr, &cm_id->m_local_addr, sizeof(ep->com.local_addr)); @@ -3353,7 +3357,8 @@ int c4iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param) fail4: dst_release(ep->dst); fail3: - remove_handle(ep->com.dev, &ep->com.dev->atid_idr, ep->atid); + xa_erase_irq(&ep->com.dev->atids, ep->atid); +fail5: cxgb4_free_atid(ep->com.dev->rdev.lldi.tids, ep->atid); fail2: skb_queue_purge(&ep->com.ep_skb_list); @@ -3687,7 +3692,7 @@ static void active_ofld_conn_reply(struct c4iw_dev *dev, struct sk_buff *skb, cxgb4_clip_release(ep->com.dev->rdev.lldi.ports[0], (const u32 *)&sin6->sin6_addr.s6_addr, 1); } - remove_handle(dev, &dev->atid_idr, atid); + xa_erase_irq(&dev->atids, atid); cxgb4_free_atid(dev->rdev.lldi.tids, atid); dst_release(ep->dst); cxgb4_l2t_release(ep->l2t); diff --git a/drivers/infiniband/hw/cxgb4/device.c b/drivers/infiniband/hw/cxgb4/device.c index 7fae6e68c8f4..0bfba18a12d3 100644 --- a/drivers/infiniband/hw/cxgb4/device.c +++ b/drivers/infiniband/hw/cxgb4/device.c @@ -617,11 +617,6 @@ static int dump_ep(struct c4iw_ep *ep, struct c4iw_debugfs_data *epd) return 0; } -static int _dump_ep(int id, void *p, void *data) -{ - return dump_ep(p, data); -} - static int dump_listen_ep(int id, void *p, void *data) { struct c4iw_listen_ep *ep = p; @@ -695,8 +690,9 @@ static int ep_open(struct inode *inode, struct file *file) xa_for_each(&epd->devp->hwtids, index, ep) count++; + xa_for_each(&epd->devp->atids, index, ep) + count++; spin_lock_irq(&epd->devp->lock); - idr_for_each(&epd->devp->atid_idr, count_idrs, &count); idr_for_each(&epd->devp->stid_idr, count_idrs, &count); spin_unlock_irq(&epd->devp->lock); @@ -711,8 +707,11 @@ static int ep_open(struct inode *inode, struct file *file) xa_for_each(&epd->devp->hwtids, index, ep) dump_ep(ep, epd); xa_unlock_irq(&epd->devp->hwtids); + 
xa_lock_irq(&epd->devp->atids); + xa_for_each(&epd->devp->atids, index, ep) + dump_ep(ep, epd); + xa_unlock_irq(&epd->devp->atids); spin_lock_irq(&epd->devp->lock); - idr_for_each(&epd->devp->atid_idr, _dump_ep, epd); idr_for_each(&epd->devp->stid_idr, dump_listen_ep, epd); spin_unlock_irq(&epd->devp->lock); @@ -937,7 +936,6 @@ void c4iw_dealloc(struct uld_ctx *ctx) c4iw_rdev_close(&ctx->dev->rdev); wait_event(ctx->dev->wait, xa_empty(&ctx->dev->hwtids)); idr_destroy(&ctx->dev->stid_idr); - idr_destroy(&ctx->dev->atid_idr); if (ctx->dev->rdev.bar2_kva) iounmap(ctx->dev->rdev.bar2_kva); if (ctx->dev->rdev.oc_mw_kva) @@ -1045,8 +1043,8 @@ static struct c4iw_dev *c4iw_alloc(const struct cxgb4_lld_info *infop) xa_init_flags(&devp->qps, XA_FLAGS_LOCK_IRQ); xa_init_flags(&devp->mrs, XA_FLAGS_LOCK_IRQ); xa_init_flags(&devp->hwtids, XA_FLAGS_LOCK_IRQ); + xa_init_flags(&devp->atids, XA_FLAGS_LOCK_IRQ); idr_init(&devp->stid_idr); - idr_init(&devp->atid_idr); spin_lock_init(&devp->lock); mutex_init(&devp->rdev.stats.lock); mutex_init(&devp->db_mutex); diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h index 37003c3b8d51..6ff687f32249 100644 --- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h +++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h @@ -323,7 +323,7 @@ struct c4iw_dev { struct dentry *debugfs_root; enum db_state db_state; struct xarray hwtids; - struct idr atid_idr; + struct xarray atids; struct idr stid_idr; struct list_head db_fc_list; u32 avail_ird; From patchwork Thu Feb 21 00:20:55 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822899 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A9A3614E1 for ; Thu, 21 Feb 2019 00:21:15 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 982042FC4D for ; Thu, 21 Feb 2019 00:21:15 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 8CE682FC7C; Thu, 21 Feb 2019 00:21:15 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E74FB2FC82 for ; Thu, 21 Feb 2019 00:21:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726756AbfBUAVN (ORCPT ); Wed, 20 Feb 2019 19:21:13 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52466 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726412AbfBUAVN (ORCPT ); Wed, 20 Feb 2019 19:21:13 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=+KrKAEzbW1FXEkbVm43AmOjUiIN6KloqQYXLf1N2/bw=; b=X6LZuDpP6DCa+Gk9mUTTXsjUV 
DuAxrJdDNLckW9fpQFRMw7MtVcOw1ncfTAAl1J54xLqRVBcCuYR6VERrE8Uw4VA5ojwT1Gw0o5hXJ 3ySGcbYrt0NIzCUTcMmnAtluuxdGerVQjAa4MFWMa6Mn7JzNq0tDjppvMuYFvRJ9MWwQ4hu1SR9px /es3bpOOPIPQBolL0AzzzrPcql7dkr9OGwoqiWQT+6PB3yKh4CKh+5CkvAf+RY/VQVlk9su0OL4Qs A1e8/mgVxjse2WS92NUhq5iwKpON6b8suJND25j9iS4G+5a+8lBAfSVqh2oJwXQteq8C69yvQZShZ Udmyp/eDg==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6y-0005wH-Im; Thu, 21 Feb 2019 00:21:12 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 20/32] cxgb4: Convert stid_idr to XArray Date: Wed, 20 Feb 2019 16:20:55 -0800 Message-Id: <20190221002107.22625-21-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox Acked-by: Steve Wise --- drivers/infiniband/hw/cxgb4/cm.c | 15 ++++--- drivers/infiniband/hw/cxgb4/device.c | 30 +++++-------- drivers/infiniband/hw/cxgb4/iw_cxgb4.h | 59 +------------------------- 3 files changed, 21 insertions(+), 83 deletions(-) diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c index 3a0d4ce471c8..00ff322feb1d 100644 --- a/drivers/infiniband/hw/cxgb4/cm.c +++ b/drivers/infiniband/hw/cxgb4/cm.c @@ -375,11 +375,11 @@ static struct c4iw_listen_ep *get_ep_from_stid(struct c4iw_dev *dev, struct c4iw_listen_ep *ep; unsigned long flags; - spin_lock_irqsave(&dev->lock, flags); - ep = idr_find(&dev->stid_idr, stid); + xa_lock_irqsave(&dev->stids, flags); + ep = xa_load(&dev->stids, stid); if (ep) c4iw_get_ep(&ep->com); - spin_unlock_irqrestore(&dev->lock, flags); + xa_unlock_irqrestore(&dev->stids, flags); return ep; } @@ -3481,7 +3481,9 @@ int c4iw_create_listen(struct iw_cm_id *cm_id, int backlog) err = -ENOMEM; goto fail2; } - insert_handle(dev, &dev->stid_idr, ep, ep->stid); + err = xa_insert_irq(&dev->stids, ep->stid, ep, GFP_KERNEL); + if (err) + goto fail3; state_set(&ep->com, LISTEN); if (ep->com.local_addr.ss_family == AF_INET) @@ -3492,7 +3494,8 @@ int c4iw_create_listen(struct iw_cm_id *cm_id, int backlog) cm_id->provider_data = ep; goto out; } - remove_handle(ep->com.dev, &ep->com.dev->stid_idr, ep->stid); + xa_erase_irq(&ep->com.dev->stids, ep->stid); +fail3: cxgb4_free_stid(ep->com.dev->rdev.lldi.tids, ep->stid, ep->com.local_addr.ss_family); fail2: @@ -3531,7 +3534,7 @@ int c4iw_destroy_listen(struct iw_cm_id *cm_id) cxgb4_clip_release(ep->com.dev->rdev.lldi.ports[0], (const u32 *)&sin6->sin6_addr.s6_addr, 1); } - remove_handle(ep->com.dev, &ep->com.dev->stid_idr, ep->stid); + xa_erase_irq(&ep->com.dev->stids, ep->stid); cxgb4_free_stid(ep->com.dev->rdev.lldi.tids, ep->stid, ep->com.local_addr.ss_family); done: diff --git a/drivers/infiniband/hw/cxgb4/device.c b/drivers/infiniband/hw/cxgb4/device.c index 0bfba18a12d3..a32c53f15303 100644 --- a/drivers/infiniband/hw/cxgb4/device.c +++ b/drivers/infiniband/hw/cxgb4/device.c @@ -81,14 +81,6 @@ struct c4iw_debugfs_data { int pos; }; -static int count_idrs(int id, void *p, void *data) -{ - int *countp = data; - - *countp = *countp + 1; - return 0; -} - static ssize_t debugfs_read(struct file *file, char __user *buf, size_t count, loff_t *ppos) { @@ -617,10 +609,9 @@ static int dump_ep(struct c4iw_ep *ep, struct c4iw_debugfs_data *epd) return 0; } -static int 
dump_listen_ep(int id, void *p, void *data) +static +int dump_listen_ep(struct c4iw_listen_ep *ep, struct c4iw_debugfs_data *epd) { - struct c4iw_listen_ep *ep = p; - struct c4iw_debugfs_data *epd = data; int space; int cc; @@ -675,6 +666,7 @@ static int ep_release(struct inode *inode, struct file *file) static int ep_open(struct inode *inode, struct file *file) { struct c4iw_ep *ep; + struct c4iw_listen_ep *lep; unsigned long index; struct c4iw_debugfs_data *epd; int ret = 0; @@ -692,9 +684,8 @@ static int ep_open(struct inode *inode, struct file *file) count++; xa_for_each(&epd->devp->atids, index, ep) count++; - spin_lock_irq(&epd->devp->lock); - idr_for_each(&epd->devp->stid_idr, count_idrs, &count); - spin_unlock_irq(&epd->devp->lock); + xa_for_each(&epd->devp->stids, index, lep) + count++; epd->bufsize = count * 240; epd->buf = vmalloc(epd->bufsize); @@ -711,9 +702,10 @@ static int ep_open(struct inode *inode, struct file *file) xa_for_each(&epd->devp->atids, index, ep) dump_ep(ep, epd); xa_unlock_irq(&epd->devp->atids); - spin_lock_irq(&epd->devp->lock); - idr_for_each(&epd->devp->stid_idr, dump_listen_ep, epd); - spin_unlock_irq(&epd->devp->lock); + xa_lock_irq(&epd->devp->stids); + xa_for_each(&epd->devp->stids, index, lep) + dump_listen_ep(lep, epd); + xa_unlock_irq(&epd->devp->stids); file->private_data = epd; goto out; @@ -935,7 +927,6 @@ void c4iw_dealloc(struct uld_ctx *ctx) { c4iw_rdev_close(&ctx->dev->rdev); wait_event(ctx->dev->wait, xa_empty(&ctx->dev->hwtids)); - idr_destroy(&ctx->dev->stid_idr); if (ctx->dev->rdev.bar2_kva) iounmap(ctx->dev->rdev.bar2_kva); if (ctx->dev->rdev.oc_mw_kva) @@ -1044,8 +1035,7 @@ static struct c4iw_dev *c4iw_alloc(const struct cxgb4_lld_info *infop) xa_init_flags(&devp->mrs, XA_FLAGS_LOCK_IRQ); xa_init_flags(&devp->hwtids, XA_FLAGS_LOCK_IRQ); xa_init_flags(&devp->atids, XA_FLAGS_LOCK_IRQ); - idr_init(&devp->stid_idr); - spin_lock_init(&devp->lock); + xa_init_flags(&devp->stids, XA_FLAGS_LOCK_IRQ); mutex_init(&devp->rdev.stats.lock); mutex_init(&devp->db_mutex); INIT_LIST_HEAD(&devp->db_fc_list); diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h index 6ff687f32249..5e8434720e78 100644 --- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h +++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h @@ -34,7 +34,7 @@ #include #include #include -#include +#include #include #include #include @@ -318,13 +318,12 @@ struct c4iw_dev { struct xarray cqs; struct xarray qps; struct xarray mrs; - spinlock_t lock; struct mutex db_mutex; struct dentry *debugfs_root; enum db_state db_state; struct xarray hwtids; struct xarray atids; - struct idr stid_idr; + struct xarray stids; struct list_head db_fc_list; u32 avail_ird; wait_queue_head_t wait; @@ -357,60 +356,6 @@ static inline struct c4iw_qp *get_qhp(struct c4iw_dev *rhp, u32 qpid) return xa_load(&rhp->qps, qpid); } - -static inline int _insert_handle(struct c4iw_dev *rhp, struct idr *idr, - void *handle, u32 id, int lock) -{ - int ret; - - if (lock) { - idr_preload(GFP_KERNEL); - spin_lock_irq(&rhp->lock); - } - - ret = idr_alloc(idr, handle, id, id + 1, GFP_ATOMIC); - - if (lock) { - spin_unlock_irq(&rhp->lock); - idr_preload_end(); - } - - return ret < 0 ? 
ret : 0; -} - -static inline int insert_handle(struct c4iw_dev *rhp, struct idr *idr, - void *handle, u32 id) -{ - return _insert_handle(rhp, idr, handle, id, 1); -} - -static inline int insert_handle_nolock(struct c4iw_dev *rhp, struct idr *idr, - void *handle, u32 id) -{ - return _insert_handle(rhp, idr, handle, id, 0); -} - -static inline void _remove_handle(struct c4iw_dev *rhp, struct idr *idr, - u32 id, int lock) -{ - if (lock) - spin_lock_irq(&rhp->lock); - idr_remove(idr, id); - if (lock) - spin_unlock_irq(&rhp->lock); -} - -static inline void remove_handle(struct c4iw_dev *rhp, struct idr *idr, u32 id) -{ - _remove_handle(rhp, idr, id, 1); -} - -static inline void remove_handle_nolock(struct c4iw_dev *rhp, - struct idr *idr, u32 id) -{ - _remove_handle(rhp, idr, id, 0); -} - extern uint c4iw_max_read_depth; static inline int cur_max_read_depth(struct c4iw_dev *dev) From patchwork Thu Feb 21 00:20:56 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822917 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D093013A4 for ; Thu, 21 Feb 2019 00:21:19 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BCFC12FC4D for ; Thu, 21 Feb 2019 00:21:19 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B14242FC7C; Thu, 21 Feb 2019 00:21:19 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id EF7EB2FC82 for ; Thu, 21 Feb 2019 00:21:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726862AbfBUAVP (ORCPT ); Wed, 20 Feb 2019 19:21:15 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52476 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726842AbfBUAVN (ORCPT ); Wed, 20 Feb 2019 19:21:13 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=mJyW/TjKi5KQEpItOyWIJzXAG8jGGzmdykgU1OyErbA=; b=KXIE1iRJzVYZ2DgNfeJpmTnaw Rtl9ks3n9L5vUrLN94UA2pOCLyDKemSxCa4tgi7NTDn4hLKK20I4RUa6WhcUGtmqp8ptXEGwYxHnC 41T0kuO3qweGrVdNe86nBqJWi+JGHOsITbDPtyNENJic7bcr9BNS0S2aCwhcaQzXIC4TM7/R2hzkJ 8vbxQxSZftR32FDLLEuYfvj6yvU2FO1VzLim+2desMxGc3WWc549vAgaqJaOT4/c4+ZmLlEyTRl2b Vwn9JpOn6vxpujxTa79OOyOTiWndVN9h1Ob7IjR5QQxynHB1tqBEtu++tNaleWJtw344597dNrN8A dUMywIi6Q==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6y-0005wO-ON; Thu, 21 Feb 2019 00:21:12 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 21/32] hfi1: Convert hfi1_unit_table to XArray 
Date: Wed, 20 Feb 2019 16:20:56 -0800 Message-Id: <20190221002107.22625-22-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Also remove hfi1_devs_list. Signed-off-by: Matthew Wilcox Reviewed-by: Dennis Dalessandro --- drivers/infiniband/hw/hfi1/chip.c | 16 ++++----- drivers/infiniband/hw/hfi1/debugfs.c | 8 ++--- drivers/infiniband/hw/hfi1/driver.c | 10 +++--- drivers/infiniband/hw/hfi1/hfi.h | 5 ++- drivers/infiniband/hw/hfi1/init.c | 51 +++++----------------------- drivers/infiniband/hw/hfi1/verbs.c | 8 ++--- 6 files changed, 30 insertions(+), 68 deletions(-) diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c index b443642eac02..ad8a3216cde4 100644 --- a/drivers/infiniband/hw/hfi1/chip.c +++ b/drivers/infiniband/hw/hfi1/chip.c @@ -14641,8 +14641,8 @@ void hfi1_start_cleanup(struct hfi1_devdata *dd) */ static int init_asic_data(struct hfi1_devdata *dd) { - unsigned long flags; - struct hfi1_devdata *tmp, *peer = NULL; + unsigned long index; + struct hfi1_devdata *peer; struct hfi1_asic_data *asic_data; int ret = 0; @@ -14651,14 +14651,12 @@ static int init_asic_data(struct hfi1_devdata *dd) if (!asic_data) return -ENOMEM; - spin_lock_irqsave(&hfi1_devs_lock, flags); + xa_lock_irq(&hfi1_dev_table); /* Find our peer device */ - list_for_each_entry(tmp, &hfi1_dev_list, list) { - if ((HFI_BASE_GUID(dd) == HFI_BASE_GUID(tmp)) && - dd->unit != tmp->unit) { - peer = tmp; + xa_for_each(&hfi1_dev_table, index, peer) { + if ((HFI_BASE_GUID(dd) == HFI_BASE_GUID(peer)) && + dd->unit != peer->unit) break; - } } if (peer) { @@ -14670,7 +14668,7 @@ static int init_asic_data(struct hfi1_devdata *dd) mutex_init(&dd->asic_data->asic_resource_mutex); } dd->asic_data->dds[dd->hfi1_id] = dd; /* self back-pointer */ - spin_unlock_irqrestore(&hfi1_devs_lock, flags); + xa_unlock_irq(&hfi1_dev_table); /* first one through - set up i2c devices */ if (!peer) diff --git a/drivers/infiniband/hw/hfi1/debugfs.c b/drivers/infiniband/hw/hfi1/debugfs.c index 0a557795563c..6d904636a6b0 100644 --- a/drivers/infiniband/hw/hfi1/debugfs.c +++ b/drivers/infiniband/hw/hfi1/debugfs.c @@ -1304,15 +1304,15 @@ static void _driver_stats_seq_stop(struct seq_file *s, void *v) static u64 hfi1_sps_ints(void) { - unsigned long flags; + unsigned long index, flags; struct hfi1_devdata *dd; u64 sps_ints = 0; - spin_lock_irqsave(&hfi1_devs_lock, flags); - list_for_each_entry(dd, &hfi1_dev_list, list) { + xa_lock_irqsave(&hfi1_dev_table, flags); + xa_for_each(&hfi1_dev_table, index, dd) { sps_ints += get_all_cpu_total(dd->int_counter); } - spin_unlock_irqrestore(&hfi1_devs_lock, flags); + xa_unlock_irqrestore(&hfi1_dev_table, flags); return sps_ints; } diff --git a/drivers/infiniband/hw/hfi1/driver.c b/drivers/infiniband/hw/hfi1/driver.c index a8ad70730203..fcc64c7ba0d8 100644 --- a/drivers/infiniband/hw/hfi1/driver.c +++ b/drivers/infiniband/hw/hfi1/driver.c @@ -72,8 +72,6 @@ */ const char ib_hfi1_version[] = HFI1_DRIVER_VERSION "\n"; -DEFINE_SPINLOCK(hfi1_devs_lock); -LIST_HEAD(hfi1_dev_list); DEFINE_MUTEX(hfi1_mutex); /* general driver use */ unsigned int hfi1_max_mtu = HFI1_DEFAULT_MAX_MTU; @@ -175,11 +173,11 @@ int hfi1_count_active_units(void) { struct hfi1_devdata *dd; struct hfi1_pportdata *ppd; - unsigned long flags; + unsigned long index, 
flags; int pidx, nunits_active = 0; - spin_lock_irqsave(&hfi1_devs_lock, flags); - list_for_each_entry(dd, &hfi1_dev_list, list) { + xa_lock_irqsave(&hfi1_dev_table, flags); + xa_for_each(&hfi1_dev_table, index, dd) { if (!(dd->flags & HFI1_PRESENT) || !dd->kregbase1) continue; for (pidx = 0; pidx < dd->num_pports; ++pidx) { @@ -190,7 +188,7 @@ int hfi1_count_active_units(void) } } } - spin_unlock_irqrestore(&hfi1_devs_lock, flags); + xa_unlock_irqrestore(&hfi1_dev_table, flags); return nunits_active; } diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h index 6db2276f5c13..728f3d6fa179 100644 --- a/drivers/infiniband/hw/hfi1/hfi.h +++ b/drivers/infiniband/hw/hfi1/hfi.h @@ -65,6 +65,7 @@ #include #include #include +#include #include #include #include @@ -1021,7 +1022,6 @@ struct sdma_vl_map; typedef int (*send_routine)(struct rvt_qp *, struct hfi1_pkt_state *, u64); struct hfi1_devdata { struct hfi1_ibdev verbs_dev; /* must be first */ - struct list_head list; /* pointers to related structs for this device */ /* pci access data structure */ struct pci_dev *pcidev; @@ -1406,8 +1406,7 @@ struct hfi1_filedata { struct mm_struct *mm; }; -extern struct list_head hfi1_dev_list; -extern spinlock_t hfi1_devs_lock; +extern struct xarray hfi1_dev_table; struct hfi1_devdata *hfi1_lookup(int unit); static inline unsigned long uctxt_offset(struct hfi1_ctxtdata *uctxt) diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c index 7835eb52e7c5..090f948fdeb0 100644 --- a/drivers/infiniband/hw/hfi1/init.c +++ b/drivers/infiniband/hw/hfi1/init.c @@ -49,7 +49,7 @@ #include #include #include -#include +#include #include #include #include @@ -124,7 +124,7 @@ MODULE_PARM_DESC(user_credit_return_threshold, "Credit return threshold for user static inline u64 encode_rcv_header_entry_size(u16 size); -static struct idr hfi1_unit_table; +DEFINE_XARRAY_FLAGS(hfi1_dev_table, XA_FLAGS_ALLOC | XA_FLAGS_LOCK_IRQ); static int hfi1_create_kctxt(struct hfi1_devdata *dd, struct hfi1_pportdata *ppd) @@ -1004,21 +1004,9 @@ int hfi1_init(struct hfi1_devdata *dd, int reinit) return ret; } -static inline struct hfi1_devdata *__hfi1_lookup(int unit) -{ - return idr_find(&hfi1_unit_table, unit); -} - struct hfi1_devdata *hfi1_lookup(int unit) { - struct hfi1_devdata *dd; - unsigned long flags; - - spin_lock_irqsave(&hfi1_devs_lock, flags); - dd = __hfi1_lookup(unit); - spin_unlock_irqrestore(&hfi1_devs_lock, flags); - - return dd; + return xa_load(&hfi1_dev_table, unit); } /* @@ -1186,7 +1174,7 @@ void hfi1_free_ctxtdata(struct hfi1_devdata *dd, struct hfi1_ctxtdata *rcd) /* * Release our hold on the shared asic data. If we are the last one, * return the structure to be finalized outside the lock. Must be - * holding hfi1_devs_lock. + * holding hfi1_dev_table lock. 
*/ static struct hfi1_asic_data *release_asic_data(struct hfi1_devdata *dd) { @@ -1222,13 +1210,10 @@ static void hfi1_clean_devdata(struct hfi1_devdata *dd) struct hfi1_asic_data *ad; unsigned long flags; - spin_lock_irqsave(&hfi1_devs_lock, flags); - if (!list_empty(&dd->list)) { - idr_remove(&hfi1_unit_table, dd->unit); - list_del_init(&dd->list); - } + xa_lock_irqsave(&hfi1_dev_table, flags); + __xa_erase(&hfi1_dev_table, dd->unit); ad = release_asic_data(dd); - spin_unlock_irqrestore(&hfi1_devs_lock, flags); + xa_unlock_irqrestore(&hfi1_dev_table, flags); finalize_asic_data(dd, ad); free_platform_config(dd); @@ -1272,13 +1257,10 @@ void hfi1_free_devdata(struct hfi1_devdata *dd) * Must be done via verbs allocator, because the verbs cleanup process * both does cleanup and free of the data structure. * "extra" is for chip-specific data. - * - * Use the idr mechanism to get a unit number for this unit. */ static struct hfi1_devdata *hfi1_alloc_devdata(struct pci_dev *pdev, size_t extra) { - unsigned long flags; struct hfi1_devdata *dd; int ret, nports; @@ -1293,21 +1275,10 @@ static struct hfi1_devdata *hfi1_alloc_devdata(struct pci_dev *pdev, dd->pport = (struct hfi1_pportdata *)(dd + 1); dd->pcidev = pdev; pci_set_drvdata(pdev, dd); - - INIT_LIST_HEAD(&dd->list); - idr_preload(GFP_KERNEL); - spin_lock_irqsave(&hfi1_devs_lock, flags); - - ret = idr_alloc(&hfi1_unit_table, dd, 0, 0, GFP_NOWAIT); - if (ret >= 0) { - dd->unit = ret; - list_add(&dd->list, &hfi1_dev_list); - } dd->node = -1; - spin_unlock_irqrestore(&hfi1_devs_lock, flags); - idr_preload_end(); - + ret = xa_alloc_irq(&hfi1_dev_table, &dd->unit, dd, xa_limit_32b, + GFP_KERNEL); if (ret < 0) { dev_err(&pdev->dev, "Could not allocate unit ID: error %d\n", -ret); @@ -1501,8 +1472,6 @@ static int __init hfi1_mod_init(void) * These must be called before the driver is registered with * the PCI subsystem. 
*/ - idr_init(&hfi1_unit_table); - hfi1_dbg_init(); ret = pci_register_driver(&hfi1_pci_driver); if (ret < 0) { @@ -1513,7 +1482,6 @@ static int __init hfi1_mod_init(void) bail_dev: hfi1_dbg_exit(); - idr_destroy(&hfi1_unit_table); dev_cleanup(); bail: return ret; @@ -1530,7 +1498,6 @@ static void __exit hfi1_mod_cleanup(void) node_affinity_destroy_all(); hfi1_dbg_exit(); - idr_destroy(&hfi1_unit_table); dispose_firmware(); /* asymmetric with obtain_firmware() */ dev_cleanup(); } diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c index ec582d86025f..7201087d1dfb 100644 --- a/drivers/infiniband/hw/hfi1/verbs.c +++ b/drivers/infiniband/hw/hfi1/verbs.c @@ -1579,15 +1579,15 @@ static struct rdma_hw_stats *alloc_hw_stats(struct ib_device *ibdev, static u64 hfi1_sps_ints(void) { - unsigned long flags; + unsigned long index, flags; struct hfi1_devdata *dd; u64 sps_ints = 0; - spin_lock_irqsave(&hfi1_devs_lock, flags); - list_for_each_entry(dd, &hfi1_dev_list, list) { + xa_lock_irqsave(&hfi1_dev_table, flags); + xa_for_each(&hfi1_dev_table, index, dd) { sps_ints += get_all_cpu_total(dd->int_counter); } - spin_unlock_irqrestore(&hfi1_devs_lock, flags); + xa_unlock_irqrestore(&hfi1_dev_table, flags); return sps_ints; } From patchwork Thu Feb 21 00:20:57 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822901 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5840014E1 for ; Thu, 21 Feb 2019 00:21:16 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 47C782FC4D for ; Thu, 21 Feb 2019 00:21:16 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 3C4ED2FC82; Thu, 21 Feb 2019 00:21:16 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id CF0B62FC4D for ; Thu, 21 Feb 2019 00:21:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726825AbfBUAVO (ORCPT ); Wed, 20 Feb 2019 19:21:14 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52480 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726862AbfBUAVN (ORCPT ); Wed, 20 Feb 2019 19:21:13 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=UGpfBuMcqm9U2KY48Xdvj99b4QavbmWIGZCjsrReQl0=; b=JP2C9j8nNKem18m2JTASRxXlK 8RqPeqqrGuNsIWmHUqVu7c1rxBEC+u59QLX7vztIKUOkENd0n4mvkYEr6sFUzDjMppcHZP9TdRW/X I57jbliHWJz6NXAXxQxIgcv4p5veJgjQkcXs5Yypba/72kXPJVIoB4XSQPfUcC1Yd5WKRVNA7gVcl rpTt/S1RJozxal+mdIirWtUReBZQTuMUF/hnHuELt4pKmdFN7AXDQ+MU9PKTnpKuLHxopz13FAePq 
ASC9ctrrLo5qUSbCE3zIq2fobMyR1hO+KODYX4lXdozPSKMnQtSc9Uq9y7eE2OQp67IDfZGFIadtp pkkCJw5GA==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6y-0005wW-Tt; Thu, 21 Feb 2019 00:21:12 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 22/32] hfi1: Convert vesw_idr to XArray Date: Wed, 20 Feb 2019 16:20:57 -0800 Message-Id: <20190221002107.22625-23-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox --- drivers/infiniband/hw/hfi1/hfi.h | 3 +-- drivers/infiniband/hw/hfi1/vnic_main.c | 15 +++++++-------- 2 files changed, 8 insertions(+), 10 deletions(-) diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h index 728f3d6fa179..0ba7a3d2bee8 100644 --- a/drivers/infiniband/hw/hfi1/hfi.h +++ b/drivers/infiniband/hw/hfi1/hfi.h @@ -54,7 +54,6 @@ #include #include #include -#include #include #include #include @@ -1002,8 +1001,8 @@ struct hfi1_asic_data { struct hfi1_vnic_data { struct hfi1_ctxtdata *ctxt[HFI1_NUM_VNIC_CTXT]; struct kmem_cache *txreq_cache; + struct xarray vesws; u8 num_vports; - struct idr vesw_idr; u8 rmt_start; u8 num_ctxt; }; diff --git a/drivers/infiniband/hw/hfi1/vnic_main.c b/drivers/infiniband/hw/hfi1/vnic_main.c index a922db58be14..de40a03cddad 100644 --- a/drivers/infiniband/hw/hfi1/vnic_main.c +++ b/drivers/infiniband/hw/hfi1/vnic_main.c @@ -162,12 +162,11 @@ static void deallocate_vnic_ctxt(struct hfi1_devdata *dd, void hfi1_vnic_setup(struct hfi1_devdata *dd) { - idr_init(&dd->vnic.vesw_idr); + xa_init(&dd->vnic.vesws); } void hfi1_vnic_cleanup(struct hfi1_devdata *dd) { - idr_destroy(&dd->vnic.vesw_idr); } #define SUM_GRP_COUNTERS(stats, qstats, x_grp) do { \ @@ -534,7 +533,7 @@ void hfi1_vnic_bypass_rcv(struct hfi1_packet *packet) l4_type = hfi1_16B_get_l4(packet->ebuf); if (likely(l4_type == OPA_16B_L4_ETHR)) { vesw_id = HFI1_VNIC_GET_VESWID(packet->ebuf); - vinfo = idr_find(&dd->vnic.vesw_idr, vesw_id); + vinfo = xa_load(&dd->vnic.vesws, vesw_id); /* * In case of invalid vesw id, count the error on @@ -542,9 +541,10 @@ void hfi1_vnic_bypass_rcv(struct hfi1_packet *packet) */ if (unlikely(!vinfo)) { struct hfi1_vnic_vport_info *vinfo_tmp; - int id_tmp = 0; + unsigned long index = 0; - vinfo_tmp = idr_get_next(&dd->vnic.vesw_idr, &id_tmp); + vinfo_tmp = xa_find(&dd->vnic.vesws, &index, ULONG_MAX, + XA_PRESENT); if (vinfo_tmp) { spin_lock(&vport_cntr_lock); vinfo_tmp->stats[0].netstats.rx_nohandler++; @@ -598,8 +598,7 @@ static int hfi1_vnic_up(struct hfi1_vnic_vport_info *vinfo) if (!vinfo->vesw_id) return -EINVAL; - rc = idr_alloc(&dd->vnic.vesw_idr, vinfo, vinfo->vesw_id, - vinfo->vesw_id + 1, GFP_NOWAIT); + rc = xa_insert(&dd->vnic.vesws, vinfo->vesw_id, vinfo, GFP_KERNEL); if (rc < 0) return rc; @@ -625,7 +624,7 @@ static void hfi1_vnic_down(struct hfi1_vnic_vport_info *vinfo) clear_bit(HFI1_VNIC_UP, &vinfo->flags); netif_carrier_off(vinfo->netdev); netif_tx_disable(vinfo->netdev); - idr_remove(&dd->vnic.vesw_idr, vinfo->vesw_id); + xa_erase(&dd->vnic.vesws, vinfo->vesw_id); /* ensure irqs see the change */ msix_vnic_synchronize_irq(dd); From patchwork Thu Feb 21 00:20:58 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822925 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8EC6D1805 for ; Thu, 21 Feb 2019 00:21:21 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 7D96E2FC4D for ; Thu, 21 Feb 2019 00:21:21 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 721372FC82; Thu, 21 Feb 2019 00:21:21 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 03FB12FC4D for ; Thu, 21 Feb 2019 00:21:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726687AbfBUAVT (ORCPT ); Wed, 20 Feb 2019 19:21:19 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52484 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726869AbfBUAVN (ORCPT ); Wed, 20 Feb 2019 19:21:13 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=aZ/4yqHTzTsBV9atc9gkeLYWCCStgGFuK/pe5l9wm3Y=; b=ZUPOqM4Et7QYbVqwkUEnWhJn0 N/WUhrSpAXfOXjLtWSPt/8aYMfKFySi8KV8oyqEPibm2A0yvNMBLwhP9gXD1sZkHz7n22GhSBs0Pr JqObfzdtNRJhvFzPHSqXEWAgAsaFn5oFbkzQqLlT+JNXLv6721XEi1prC/P1bUSar+5q+VrtMWJvz 4HYoNJBjKaYGAIlbw833CXGkcMuJdm80vxJgH5oBPzBDAqA8sUjIbeJzXT8eTZRBN8j+X7EWNvdXN 5FiuLhbupog9luKtWtz69wC9p7tstzGGqDgJ6Fe4w04GsIxhmST4I1aNerShuXYECeRR2TQqKOEEU 1U9FggMCg==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6z-0005wi-2e; Thu, 21 Feb 2019 00:21:13 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 23/32] qedr: Convert qpidr to XArray Date: Wed, 20 Feb 2019 16:20:58 -0800 Message-Id: <20190221002107.22625-24-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox --- drivers/infiniband/hw/qedr/main.c | 3 +-- drivers/infiniband/hw/qedr/qedr.h | 2 +- drivers/infiniband/hw/qedr/qedr_iw_cm.c | 10 ++++------ drivers/infiniband/hw/qedr/verbs.c | 4 ++-- 4 files changed, 8 insertions(+), 11 deletions(-) diff --git a/drivers/infiniband/hw/qedr/main.c b/drivers/infiniband/hw/qedr/main.c index 75940e2a8791..ba22ea9ec4db 100644 --- a/drivers/infiniband/hw/qedr/main.c +++ b/drivers/infiniband/hw/qedr/main.c @@ -361,8 +361,7 @@ static int qedr_alloc_resources(struct qedr_dev *dev) spin_lock_init(&dev->sgid_lock); if (IS_IWARP(dev)) { - 
spin_lock_init(&dev->qpidr.idr_lock); - idr_init(&dev->qpidr.idr); + xa_init_flags(&dev->qps, XA_FLAGS_LOCK_IRQ); dev->iwarp_wq = create_singlethread_workqueue("qedr_iwarpq"); } diff --git a/drivers/infiniband/hw/qedr/qedr.h b/drivers/infiniband/hw/qedr/qedr.h index 53bbe6b4e6e6..1d906034f10f 100644 --- a/drivers/infiniband/hw/qedr/qedr.h +++ b/drivers/infiniband/hw/qedr/qedr.h @@ -171,7 +171,7 @@ struct qedr_dev { struct qedr_cq *gsi_rqcq; struct qedr_qp *gsi_qp; enum qed_rdma_type rdma_type; - struct qedr_idr qpidr; + struct xarray qps; struct qedr_idr srqidr; struct workqueue_struct *iwarp_wq; u16 iwarp_max_mtu; diff --git a/drivers/infiniband/hw/qedr/qedr_iw_cm.c b/drivers/infiniband/hw/qedr/qedr_iw_cm.c index 93b16237b767..5f5eeea5d0bc 100644 --- a/drivers/infiniband/hw/qedr/qedr_iw_cm.c +++ b/drivers/infiniband/hw/qedr/qedr_iw_cm.c @@ -491,7 +491,7 @@ int qedr_iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param) int rc = 0; int i; - qp = idr_find(&dev->qpidr.idr, conn_param->qpn); + qp = xa_load(&dev->qps, conn_param->qpn); if (unlikely(!qp)) return -EINVAL; @@ -681,7 +681,7 @@ int qedr_iw_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param) DP_DEBUG(dev, QEDR_MSG_IWARP, "Accept on qpid=%d\n", conn_param->qpn); - qp = idr_find(&dev->qpidr.idr, conn_param->qpn); + qp = xa_load(&dev->qps, conn_param->qpn); if (!qp) { DP_ERR(dev, "Invalid QP number %d\n", conn_param->qpn); return -EINVAL; @@ -739,9 +739,7 @@ void qedr_iw_qp_rem_ref(struct ib_qp *ibqp) struct qedr_qp *qp = get_qedr_qp(ibqp); if (atomic_dec_and_test(&qp->refcnt)) { - spin_lock_irq(&qp->dev->qpidr.idr_lock); - idr_remove(&qp->dev->qpidr.idr, qp->qp_id); - spin_unlock_irq(&qp->dev->qpidr.idr_lock); + xa_erase_irq(&qp->dev->qps, qp->qp_id); kfree(qp); } } @@ -750,5 +748,5 @@ struct ib_qp *qedr_iw_get_qp(struct ib_device *ibdev, int qpn) { struct qedr_dev *dev = get_qedr_dev(ibdev); - return idr_find(&dev->qpidr.idr, qpn); + return xa_load(&dev->qps, qpn); } diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c index e1ccf32b1c3d..2f372d5910e0 100644 --- a/drivers/infiniband/hw/qedr/verbs.c +++ b/drivers/infiniband/hw/qedr/verbs.c @@ -2032,7 +2032,7 @@ struct ib_qp *qedr_create_qp(struct ib_pd *ibpd, qp->ibqp.qp_num = qp->qp_id; if (rdma_protocol_iwarp(&dev->ibdev, 1)) { - rc = qedr_idr_add(dev, &dev->qpidr, qp, qp->qp_id); + rc = xa_insert_irq(&dev->qps, qp->qp_id, qp, GFP_KERNEL); if (rc) goto err; } @@ -2608,7 +2608,7 @@ int qedr_destroy_qp(struct ib_qp *ibqp) if (atomic_dec_and_test(&qp->refcnt) && rdma_protocol_iwarp(&dev->ibdev, 1)) { - qedr_idr_remove(dev, &dev->qpidr, qp->qp_id); + xa_erase_irq(&dev->qps, qp->qp_id); kfree(qp); } return rc; From patchwork Thu Feb 21 00:20:59 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822923 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2C4FF14E1 for ; Thu, 21 Feb 2019 00:21:21 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1B0052FC7C for ; Thu, 21 Feb 2019 00:21:21 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 0295F2FC81; Thu, 21 Feb 2019 00:21:20 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on 
pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6EE352FC4D for ; Thu, 21 Feb 2019 00:21:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726906AbfBUAVT (ORCPT ); Wed, 20 Feb 2019 19:21:19 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52488 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726713AbfBUAVN (ORCPT ); Wed, 20 Feb 2019 19:21:13 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=ZQyfyK+9tWMvEEyRuRqM8fZ/EXOeJYYTOZnd7qnp/2s=; b=UNwtjM8e3tzwDUX7r0z5ZcTn8 Syg7TilxdaHI3Jc6IprcZkbPYt+DX1OQ7FxZ+yFPfZG2d9E1mUVxBaTPmNvHL7qn7H7LAjr6Z4X0c 0Sy9dDxQ+xsEZOJSnsULtSPYU8sPE8AZ+apvrDQHtYqn07aosYCtOR1aTt1wIPBl3ytqlxi50hWaO SX7biCFS0UkHgjYGYu5rlLLWHDn+kMVP+XjLCb87ZpkocPqyf0/Xzn4miIu/yuGqRm841TxkPBkLA VDvA3uJoFc95LWXN+PQz8ayMUSFzR1xVMLwd3Q7Y/+0UmZ9KxKAsGL4JSCiadCca+M72xvuq1Mqoh WvravFpIA==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6z-0005wy-7a; Thu, 21 Feb 2019 00:21:13 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 24/32] qedr: Convert srqidr to XArray Date: Wed, 20 Feb 2019 16:20:59 -0800 Message-Id: <20190221002107.22625-25-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox --- drivers/infiniband/hw/qedr/main.c | 7 +++---- drivers/infiniband/hw/qedr/qedr.h | 9 ++------- drivers/infiniband/hw/qedr/verbs.c | 32 ++---------------------------- 3 files changed, 7 insertions(+), 41 deletions(-) diff --git a/drivers/infiniband/hw/qedr/main.c b/drivers/infiniband/hw/qedr/main.c index ba22ea9ec4db..9600c615dfbb 100644 --- a/drivers/infiniband/hw/qedr/main.c +++ b/drivers/infiniband/hw/qedr/main.c @@ -39,7 +39,6 @@ #include #include #include -#include #include #include @@ -756,8 +755,8 @@ static void qedr_affiliated_event(void *context, u8 e_code, void *fw_handle) break; case EVENT_TYPE_SRQ: srq_id = (u16)roce_handle64; - spin_lock_irqsave(&dev->srqidr.idr_lock, flags); - srq = idr_find(&dev->srqidr.idr, srq_id); + xa_lock_irqsave(&dev->srqs, flags); + srq = xa_load(&dev->srqs, srq_id); if (srq) { ibsrq = &srq->ibsrq; if (ibsrq->event_handler) { @@ -771,7 +770,7 @@ static void qedr_affiliated_event(void *context, u8 e_code, void *fw_handle) "SRQ event with NULL pointer ibsrq. 
Handle=%llx\n", roce_handle64); } - spin_unlock_irqrestore(&dev->srqidr.idr_lock, flags); + xa_unlock_irqrestore(&dev->srqs, flags); DP_NOTICE(dev, "SRQ event %d on handle %p\n", e_code, srq); default: break; diff --git a/drivers/infiniband/hw/qedr/qedr.h b/drivers/infiniband/hw/qedr/qedr.h index 1d906034f10f..6175d1e98717 100644 --- a/drivers/infiniband/hw/qedr/qedr.h +++ b/drivers/infiniband/hw/qedr/qedr.h @@ -33,7 +33,7 @@ #define __QEDR_H__ #include -#include +#include #include #include #include @@ -123,11 +123,6 @@ struct qedr_device_attr { #define QEDR_ENET_STATE_BIT (0) -struct qedr_idr { - spinlock_t idr_lock; /* Protect idr data-structure */ - struct idr idr; -}; - struct qedr_dev { struct ib_device ibdev; struct qed_dev *cdev; @@ -172,7 +167,7 @@ struct qedr_dev { struct qedr_qp *gsi_qp; enum qed_rdma_type rdma_type; struct xarray qps; - struct qedr_idr srqidr; + struct xarray srqs; struct workqueue_struct *iwarp_wq; u16 iwarp_max_mtu; diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c index 2f372d5910e0..2755120f97e6 100644 --- a/drivers/infiniband/hw/qedr/verbs.c +++ b/drivers/infiniband/hw/qedr/verbs.c @@ -1418,11 +1418,6 @@ static int qedr_alloc_srq_kernel_params(struct qedr_srq *srq, return rc; } -static int qedr_idr_add(struct qedr_dev *dev, struct qedr_idr *qidr, - void *ptr, u32 id); -static void qedr_idr_remove(struct qedr_dev *dev, - struct qedr_idr *qidr, u32 id); - struct ib_srq *qedr_create_srq(struct ib_pd *ibpd, struct ib_srq_init_attr *init_attr, struct ib_udata *udata) @@ -1508,7 +1503,7 @@ struct ib_srq *qedr_create_srq(struct ib_pd *ibpd, goto err2; } - rc = qedr_idr_add(dev, &dev->srqidr, srq, srq->srq_id); + rc = xa_insert_irq(&dev->srqs, srq->srq_id, srq, GFP_KERNEL); if (rc) goto err2; @@ -1537,7 +1532,7 @@ int qedr_destroy_srq(struct ib_srq *ibsrq) struct qedr_dev *dev = get_qedr_dev(ibsrq->device); struct qedr_srq *srq = get_qedr_srq(ibsrq); - qedr_idr_remove(dev, &dev->srqidr, srq->srq_id); + xa_erase_irq(&dev->srqs, srq->srq_id); in_params.srq_id = srq->srq_id; dev->ops->rdma_destroy_srq(dev->rdma_ctx, &in_params); @@ -1637,29 +1632,6 @@ static inline void qedr_qp_user_print(struct qedr_dev *dev, struct qedr_qp *qp) qp->usq.buf_len, qp->urq.buf_addr, qp->urq.buf_len); } -static int qedr_idr_add(struct qedr_dev *dev, struct qedr_idr *qidr, - void *ptr, u32 id) -{ - int rc; - - idr_preload(GFP_KERNEL); - spin_lock_irq(&qidr->idr_lock); - - rc = idr_alloc(&qidr->idr, ptr, id, id + 1, GFP_ATOMIC); - - spin_unlock_irq(&qidr->idr_lock); - idr_preload_end(); - - return rc < 0 ? 
rc : 0; -} - -static void qedr_idr_remove(struct qedr_dev *dev, struct qedr_idr *qidr, u32 id) -{ - spin_lock_irq(&qidr->idr_lock); - idr_remove(&qidr->idr, id); - spin_unlock_irq(&qidr->idr_lock); -} - static inline void qedr_iwarp_populate_user_qp(struct qedr_dev *dev, struct qedr_qp *qp, From patchwork Thu Feb 21 00:21:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822921 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A4FCE13A4 for ; Thu, 21 Feb 2019 00:21:20 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 932EE2FC7C for ; Thu, 21 Feb 2019 00:21:20 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 856462FC82; Thu, 21 Feb 2019 00:21:20 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C49A52FC82 for ; Thu, 21 Feb 2019 00:21:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726917AbfBUAVT (ORCPT ); Wed, 20 Feb 2019 19:21:19 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52492 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726906AbfBUAVN (ORCPT ); Wed, 20 Feb 2019 19:21:13 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=qSWMpCm5WXO7e7YLQ15MCNjPRo/GhpBY7+hY9kA+1ac=; b=VV1ykCJVF3lHoc3HxBVZw2HaF q5iHAzHjzOtAlyIIPqt9UrkXxQUSI02Bpp5JN9880ntS3ljshYGPS66kDpsCcqCj9t/5/UIGQYyjk 5Xsrq0AkRB9/H220dh5FqLgcLnXgi49fiZZ4fvZ7pWNX3qQd6pyNapvurQvrbSzCIov12B6Tvo9rr /1rmMrWz8F5/rZTSkNtBcK+EJjBEdBUquoja+JgLo4tCOmjbaP23FR2UITGm3aprmCTxzWa/PB7S7 fPNHWXEFjKe8LVvP7cnvvgKW0QyEKXybkjro8K1n5nmsHm/BuR26/ezRPY2+lT8yi771Uc/LNsOAN qWcyVkFZw==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6z-0005x7-Cu; Thu, 21 Feb 2019 00:21:13 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 25/32] qib: Convert qib_unit_table to XArray Date: Wed, 20 Feb 2019 16:21:00 -0800 Message-Id: <20190221002107.22625-26-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Also remove qib_devs_list. 
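As with the hfi1 patch earlier in this series, the spinlock, the device list and the unit-number IDR collapse into a single allocating XArray whose internal lock is taken in IRQ-disabling mode. A minimal sketch of that pattern, using placeholder foo_* names rather than the real qib symbols (note the unit field has to be an unsigned 32-bit value because xa_alloc_irq() takes a u32 *):

    #include <linux/types.h>
    #include <linux/xarray.h>

    /* One table replaces idr + list + spinlock; IDs come from xa_alloc_irq(). */
    static DEFINE_XARRAY_FLAGS(foo_dev_table, XA_FLAGS_ALLOC | XA_FLAGS_LOCK_IRQ);

    struct foo_devdata {
            u32 unit;
            /* ... per-device state ... */
    };

    static int foo_register(struct foo_devdata *dd)
    {
            /* Store dd at the lowest free index and write that index to dd->unit. */
            return xa_alloc_irq(&foo_dev_table, &dd->unit, dd, xa_limit_32b,
                                GFP_KERNEL);
    }

    static struct foo_devdata *foo_lookup(u32 unit)
    {
            /* xa_load() is RCU-safe; no external lock is required for lookup. */
            return xa_load(&foo_dev_table, unit);
    }

    static void foo_unregister(struct foo_devdata *dd)
    {
            /* Takes the xa_lock with interrupts disabled, then erases the entry. */
            xa_erase_irq(&foo_dev_table, dd->unit);
    }

Iteration over all devices is done with xa_for_each() under xa_lock_irqsave(), as in qib_count_active_units() below.
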
Signed-off-by: Matthew Wilcox Reviewed-by: Dennis Dalessandro --- drivers/infiniband/hw/qib/qib.h | 4 +- drivers/infiniband/hw/qib/qib_driver.c | 20 ++++----- drivers/infiniband/hw/qib/qib_fs.c | 12 ++---- drivers/infiniband/hw/qib/qib_iba7322.c | 4 +- drivers/infiniband/hw/qib/qib_init.c | 55 +++++-------------------- 5 files changed, 26 insertions(+), 69 deletions(-) diff --git a/drivers/infiniband/hw/qib/qib.h b/drivers/infiniband/hw/qib/qib.h index 83d2349188db..432d6d0fd7f4 100644 --- a/drivers/infiniband/hw/qib/qib.h +++ b/drivers/infiniband/hw/qib/qib.h @@ -52,6 +52,7 @@ #include #include #include +#include #include #include @@ -1105,8 +1106,7 @@ struct qib_filedata { int rec_cpu_num; /* for cpu affinity; -1 if none */ }; -extern struct list_head qib_dev_list; -extern spinlock_t qib_devs_lock; +extern struct xarray qib_dev_table; extern struct qib_devdata *qib_lookup(int unit); extern u32 qib_cpulist_count; extern unsigned long *qib_cpulist; diff --git a/drivers/infiniband/hw/qib/qib_driver.c b/drivers/infiniband/hw/qib/qib_driver.c index 3117cc5f2a9a..92eeea5679e2 100644 --- a/drivers/infiniband/hw/qib/qib_driver.c +++ b/drivers/infiniband/hw/qib/qib_driver.c @@ -49,8 +49,6 @@ */ const char ib_qib_version[] = QIB_DRIVER_VERSION "\n"; -DEFINE_SPINLOCK(qib_devs_lock); -LIST_HEAD(qib_dev_list); DEFINE_MUTEX(qib_mutex); /* general driver use */ unsigned qib_ibmtu; @@ -96,11 +94,11 @@ int qib_count_active_units(void) { struct qib_devdata *dd; struct qib_pportdata *ppd; - unsigned long flags; + unsigned long index, flags; int pidx, nunits_active = 0; - spin_lock_irqsave(&qib_devs_lock, flags); - list_for_each_entry(dd, &qib_dev_list, list) { + xa_lock_irqsave(&qib_dev_table, flags); + xa_for_each(&qib_dev_table, index, dd) { if (!(dd->flags & QIB_PRESENT) || !dd->kregbase) continue; for (pidx = 0; pidx < dd->num_pports; ++pidx) { @@ -112,7 +110,7 @@ int qib_count_active_units(void) } } } - spin_unlock_irqrestore(&qib_devs_lock, flags); + xa_unlock_irqrestore(&qib_dev_table, flags); return nunits_active; } @@ -125,13 +123,12 @@ int qib_count_units(int *npresentp, int *nupp) { int nunits = 0, npresent = 0, nup = 0; struct qib_devdata *dd; - unsigned long flags; + unsigned long index, flags; int pidx; struct qib_pportdata *ppd; - spin_lock_irqsave(&qib_devs_lock, flags); - - list_for_each_entry(dd, &qib_dev_list, list) { + xa_lock_irqsave(&qib_dev_table, flags); + xa_for_each(&qib_dev_table, index, dd) { nunits++; if ((dd->flags & QIB_PRESENT) && dd->kregbase) npresent++; @@ -142,8 +139,7 @@ int qib_count_units(int *npresentp, int *nupp) nup++; } } - - spin_unlock_irqrestore(&qib_devs_lock, flags); + xa_unlock_irqrestore(&qib_dev_table, flags); if (npresentp) *npresentp = npresent; diff --git a/drivers/infiniband/hw/qib/qib_fs.c b/drivers/infiniband/hw/qib/qib_fs.c index 1d940a2885c9..ceb42d948412 100644 --- a/drivers/infiniband/hw/qib/qib_fs.c +++ b/drivers/infiniband/hw/qib/qib_fs.c @@ -508,8 +508,8 @@ static int remove_device_files(struct super_block *sb, */ static int qibfs_fill_super(struct super_block *sb, void *data, int silent) { - struct qib_devdata *dd, *tmp; - unsigned long flags; + struct qib_devdata *dd; + unsigned long index; int ret; static const struct tree_descr files[] = { @@ -524,18 +524,12 @@ static int qibfs_fill_super(struct super_block *sb, void *data, int silent) goto bail; } - spin_lock_irqsave(&qib_devs_lock, flags); - - list_for_each_entry_safe(dd, tmp, &qib_dev_list, list) { - spin_unlock_irqrestore(&qib_devs_lock, flags); + xa_for_each(&qib_dev_table, index, dd) 
{ ret = add_cntr_files(sb, dd); if (ret) goto bail; - spin_lock_irqsave(&qib_devs_lock, flags); } - spin_unlock_irqrestore(&qib_devs_lock, flags); - bail: return ret; } diff --git a/drivers/infiniband/hw/qib/qib_iba7322.c b/drivers/infiniband/hw/qib/qib_iba7322.c index 17d6b24b3473..5f4aa36e5ca4 100644 --- a/drivers/infiniband/hw/qib/qib_iba7322.c +++ b/drivers/infiniband/hw/qib/qib_iba7322.c @@ -6140,7 +6140,7 @@ static void set_no_qsfp_atten(struct qib_devdata *dd, int change) static int setup_txselect(const char *str, const struct kernel_param *kp) { struct qib_devdata *dd; - unsigned long val; + unsigned long index, val; char *n; if (strlen(str) >= ARRAY_SIZE(txselect_list)) { @@ -6156,7 +6156,7 @@ static int setup_txselect(const char *str, const struct kernel_param *kp) } strncpy(txselect_list, str, ARRAY_SIZE(txselect_list) - 1); - list_for_each_entry(dd, &qib_dev_list, list) + xa_for_each(&qib_dev_table, index, dd) if (dd->deviceid == PCI_DEVICE_ID_QLOGIC_IB_7322) set_no_qsfp_atten(dd, 1); return 0; diff --git a/drivers/infiniband/hw/qib/qib_init.c b/drivers/infiniband/hw/qib/qib_init.c index 9fd69903ca57..f4e2d5f13ab9 100644 --- a/drivers/infiniband/hw/qib/qib_init.c +++ b/drivers/infiniband/hw/qib/qib_init.c @@ -36,7 +36,6 @@ #include #include #include -#include #include #include #ifdef CONFIG_INFINIBAND_QIB_DCA @@ -95,7 +94,7 @@ MODULE_PARM_DESC(cc_table_size, "Congestion control table entries 0 (CCA disable static void verify_interrupt(struct timer_list *); -static struct idr qib_unit_table; +DEFINE_XARRAY_FLAGS(qib_dev_table, XA_FLAGS_ALLOC | XA_FLAGS_LOCK_IRQ); u32 qib_cpulist_count; unsigned long *qib_cpulist; @@ -785,21 +784,9 @@ void __attribute__((weak)) qib_disable_wc(struct qib_devdata *dd) { } -static inline struct qib_devdata *__qib_lookup(int unit) -{ - return idr_find(&qib_unit_table, unit); -} - struct qib_devdata *qib_lookup(int unit) { - struct qib_devdata *dd; - unsigned long flags; - - spin_lock_irqsave(&qib_devs_lock, flags); - dd = __qib_lookup(unit); - spin_unlock_irqrestore(&qib_devs_lock, flags); - - return dd; + return xa_load(&qib_dev_table, unit); } /* @@ -1046,10 +1033,9 @@ void qib_free_devdata(struct qib_devdata *dd) { unsigned long flags; - spin_lock_irqsave(&qib_devs_lock, flags); - idr_remove(&qib_unit_table, dd->unit); - list_del(&dd->list); - spin_unlock_irqrestore(&qib_devs_lock, flags); + xa_lock_irqsave(&qib_dev_table, flags); + __xa_erase(&qib_dev_table, dd->unit); + xa_unlock_irqrestore(&qib_dev_table, flags); #ifdef CONFIG_DEBUG_FS qib_dbg_ibdev_exit(&dd->verbs_dev); @@ -1070,15 +1056,15 @@ u64 qib_int_counter(struct qib_devdata *dd) u64 qib_sps_ints(void) { - unsigned long flags; + unsigned long index, flags; struct qib_devdata *dd; u64 sps_ints = 0; - spin_lock_irqsave(&qib_devs_lock, flags); - list_for_each_entry(dd, &qib_dev_list, list) { + xa_lock_irqsave(&qib_dev_table, flags); + xa_for_each(&qib_dev_table, index, dd) { sps_ints += qib_int_counter(dd); } - spin_unlock_irqrestore(&qib_devs_lock, flags); + xa_unlock_irqrestore(&qib_dev_table, flags); return sps_ints; } @@ -1087,12 +1073,9 @@ u64 qib_sps_ints(void) * allocator, because the verbs cleanup process both does cleanup and * free of the data structure. * "extra" is for chip-specific data. - * - * Use the idr mechanism to get a unit number for this unit. 
*/ struct qib_devdata *qib_alloc_devdata(struct pci_dev *pdev, size_t extra) { - unsigned long flags; struct qib_devdata *dd; int ret, nports; @@ -1103,20 +1086,8 @@ struct qib_devdata *qib_alloc_devdata(struct pci_dev *pdev, size_t extra) if (!dd) return ERR_PTR(-ENOMEM); - INIT_LIST_HEAD(&dd->list); - - idr_preload(GFP_KERNEL); - spin_lock_irqsave(&qib_devs_lock, flags); - - ret = idr_alloc(&qib_unit_table, dd, 0, 0, GFP_NOWAIT); - if (ret >= 0) { - dd->unit = ret; - list_add(&dd->list, &qib_dev_list); - } - - spin_unlock_irqrestore(&qib_devs_lock, flags); - idr_preload_end(); - + ret = xa_alloc_irq(&qib_dev_table, &dd->unit, dd, xa_limit_32b, + GFP_KERNEL); if (ret < 0) { qib_early_err(&pdev->dev, "Could not allocate unit ID: error %d\n", -ret); @@ -1255,8 +1226,6 @@ static int __init qib_ib_init(void) * These must be called before the driver is registered with * the PCI subsystem. */ - idr_init(&qib_unit_table); - #ifdef CONFIG_INFINIBAND_QIB_DCA dca_register_notify(&dca_notifier); #endif @@ -1281,7 +1250,6 @@ static int __init qib_ib_init(void) #ifdef CONFIG_DEBUG_FS qib_dbg_exit(); #endif - idr_destroy(&qib_unit_table); qib_dev_cleanup(); bail: return ret; @@ -1313,7 +1281,6 @@ static void __exit qib_ib_cleanup(void) qib_cpulist_count = 0; kfree(qib_cpulist); - idr_destroy(&qib_unit_table); qib_dev_cleanup(); } From patchwork Thu Feb 21 00:21:01 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822919 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 872161805 for ; Thu, 21 Feb 2019 00:21:20 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 76C922FC7C for ; Thu, 21 Feb 2019 00:21:20 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 6B00C2FC84; Thu, 21 Feb 2019 00:21:20 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id F00412FC4D for ; Thu, 21 Feb 2019 00:21:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726643AbfBUAVS (ORCPT ); Wed, 20 Feb 2019 19:21:18 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52500 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726917AbfBUAVN (ORCPT ); Wed, 20 Feb 2019 19:21:13 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=nsuR39xELEiYwQCOViNv2n0dclzYvpbA7y4uUuqeQ8g=; b=hH+/rwT6ibVIBSjNt39aZjuu0 T5hz+sSMx/YbWNjY2gQSrT5H1KEhzvlZFB02Wb+FOs7a79yj/2Te6AkCj6/kgo36w5L/mxbEMsztc SP8g+ROIT/mVgih7PiTHsXQET8SFPxjFgjYeBq5LFXYVRysV/F1QbHsjIcBnU+nJAAiqjhZivDCRq 
GNdz73FO0t4pns0puEVXgNG8lahxSvuQYjiPJS8asiIx0/MC6Obzt01QApt/WMfug1i9OXok3wWLK JsrIfFNRVrIIMCCWsrSwUU4tLJhCVfhqos+ajKM0Gd/ZygNHiEHV3r6ndQbLx4qiRm4O4H8S670bv AtftXWvZA==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6z-0005xE-Hq; Thu, 21 Feb 2019 00:21:13 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 26/32] opa_vnic: Convert vport_idr to XArray Date: Wed, 20 Feb 2019 16:21:01 -0800 Message-Id: <20190221002107.22625-27-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox --- .../infiniband/ulp/opa_vnic/opa_vnic_vema.c | 60 +++++++------------ 1 file changed, 23 insertions(+), 37 deletions(-) diff --git a/drivers/infiniband/ulp/opa_vnic/opa_vnic_vema.c b/drivers/infiniband/ulp/opa_vnic/opa_vnic_vema.c index 560e4f2d466e..88e8168ac955 100644 --- a/drivers/infiniband/ulp/opa_vnic/opa_vnic_vema.c +++ b/drivers/infiniband/ulp/opa_vnic/opa_vnic_vema.c @@ -51,6 +51,7 @@ */ #include +#include #include #include #include @@ -97,7 +98,7 @@ const char opa_vnic_driver_version[] = DRV_VERSION; * @class_port_info: Class port info information. * @tid: Transaction id * @port_num: OPA port number - * @vport_idr: vnic ports idr + * @vports: vnic ports * @event_handler: ib event handler * @lock: adapter interface lock */ @@ -107,7 +108,7 @@ struct opa_vnic_vema_port { struct opa_class_port_info class_port_info; u64 tid; u8 port_num; - struct idr vport_idr; + struct xarray vports; struct ib_event_handler event_handler; /* Lock to query/update network adapter */ @@ -148,7 +149,7 @@ vema_get_vport_adapter(struct opa_vnic_vema_mad *recvd_mad, { u8 vport_num = vema_get_vport_num(recvd_mad); - return idr_find(&port->vport_idr, vport_num); + return xa_load(&port->vports, vport_num); } /** @@ -207,8 +208,7 @@ static struct opa_vnic_adapter *vema_add_vport(struct opa_vnic_vema_port *port, int rc; adapter->cport = cport; - rc = idr_alloc(&port->vport_idr, adapter, vport_num, - vport_num + 1, GFP_NOWAIT); + rc = xa_insert(&port->vports, vport_num, adapter, GFP_KERNEL); if (rc < 0) { opa_vnic_rem_netdev(adapter); adapter = ERR_PTR(rc); @@ -853,36 +853,14 @@ void opa_vnic_vema_send_trap(struct opa_vnic_adapter *adapter, v_err("Aborting trap\n"); } -static int vema_rem_vport(int id, void *p, void *data) -{ - struct opa_vnic_adapter *adapter = p; - - opa_vnic_rem_netdev(adapter); - return 0; -} - -static int vema_enable_vport(int id, void *p, void *data) -{ - struct opa_vnic_adapter *adapter = p; - - netif_carrier_on(adapter->netdev); - return 0; -} - -static int vema_disable_vport(int id, void *p, void *data) -{ - struct opa_vnic_adapter *adapter = p; - - netif_carrier_off(adapter->netdev); - return 0; -} - static void opa_vnic_event(struct ib_event_handler *handler, struct ib_event *record) { struct opa_vnic_vema_port *port = container_of(handler, struct opa_vnic_vema_port, event_handler); struct opa_vnic_ctrl_port *cport = port->cport; + struct opa_vnic_adapter *adapter; + unsigned long index; if (record->element.port_num != port->port_num) return; @@ -891,10 +869,16 @@ static void opa_vnic_event(struct ib_event_handler *handler, record->event, dev_name(&record->device->dev), record->element.port_num); - if 
(record->event == IB_EVENT_PORT_ERR) - idr_for_each(&port->vport_idr, vema_disable_vport, NULL); - if (record->event == IB_EVENT_PORT_ACTIVE) - idr_for_each(&port->vport_idr, vema_enable_vport, NULL); + if (record->event != IB_EVENT_PORT_ERR || + record->event != IB_EVENT_PORT_ACTIVE) + return; + + xa_for_each(&port->vports, index, adapter) { + if (record->event == IB_EVENT_PORT_ERR) + netif_carrier_off(adapter->netdev); + else + netif_carrier_on(adapter->netdev); + } } /** @@ -905,6 +889,8 @@ static void opa_vnic_event(struct ib_event_handler *handler, */ static void vema_unregister(struct opa_vnic_ctrl_port *cport) { + struct opa_vnic_adapter *adapter; + unsigned long index; int i; for (i = 1; i <= cport->num_ports; i++) { @@ -915,13 +901,14 @@ static void vema_unregister(struct opa_vnic_ctrl_port *cport) /* Lock ensures no MAD is being processed */ mutex_lock(&port->lock); - idr_for_each(&port->vport_idr, vema_rem_vport, NULL); + xa_for_each(&port->vports, index, adapter) + opa_vnic_rem_netdev(adapter); mutex_unlock(&port->lock); ib_unregister_mad_agent(port->mad_agent); port->mad_agent = NULL; mutex_destroy(&port->lock); - idr_destroy(&port->vport_idr); + xa_destroy(&port->vports); ib_unregister_event_handler(&port->event_handler); } } @@ -958,7 +945,7 @@ static int vema_register(struct opa_vnic_ctrl_port *cport) cport->ibdev, opa_vnic_event); ib_register_event_handler(&port->event_handler); - idr_init(&port->vport_idr); + xa_init(&port->vports); mutex_init(&port->lock); port->mad_agent = ib_register_mad_agent(cport->ibdev, i, IB_QPT_GSI, ®_req, @@ -969,7 +956,6 @@ static int vema_register(struct opa_vnic_ctrl_port *cport) ret = PTR_ERR(port->mad_agent); port->mad_agent = NULL; mutex_destroy(&port->lock); - idr_destroy(&port->vport_idr); vema_unregister(cport); return ret; } From patchwork Thu Feb 21 00:21:02 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822913 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AED9914E1 for ; Thu, 21 Feb 2019 00:21:18 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9DF442FC4D for ; Thu, 21 Feb 2019 00:21:18 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 927BC2FC82; Thu, 21 Feb 2019 00:21:18 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3900A2FC4D for ; Thu, 21 Feb 2019 00:21:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726505AbfBUAVQ (ORCPT ); Wed, 20 Feb 2019 19:21:16 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52508 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726919AbfBUAVO (ORCPT ); Wed, 20 Feb 2019 19:21:14 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: 
Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=9o60OXyx6UzwXXsC3ijt9XTLR5UDYAq+HoJb1hR37Pc=; b=avaI0m+q+lWdG4JFj0LNrM4jG M3TyyFDPif+dDz0TVku0XWx5CjiejuAdGsIVJsjZeLuWLpfA98NLyDKD/4Glr6KtWbSYLqCDr23W3 6eGh0TbgC2eEVKGL6snycBcLcBDrNhzWmuyVUSpr2TpsFQl3T4mCDqcOOxveR6eAjx1YQzYhZ6ZaV QCcMe0sPYQUcqTaKB5Yyezc0UoTI/WcB5YxcbWY/n0LX/iq0ZXykDv5iX4kV5o0gJaSNGYdQA70MY 9YUUKmfwYyYC8gGliCgF5I+IaV7TI+rRLj8X0DiXZnukzuuXw1cAEQG2/Bmon6BKSMstDvvnS/1u9 p9w75yzxg==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6z-0005xJ-Mj; Thu, 21 Feb 2019 00:21:13 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 27/32] ocrdma: Convert ocrdma_dev_id to IDA Date: Wed, 20 Feb 2019 16:21:02 -0800 Message-Id: <20190221002107.22625-28-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP This driver doesn't look up the pointer that it stores, so this can just be an IDA and save some memory. Signed-off-by: Matthew Wilcox --- drivers/infiniband/hw/ocrdma/ocrdma_main.c | 16 +++++++--------- 1 file changed, 7 insertions(+), 9 deletions(-) diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_main.c b/drivers/infiniband/hw/ocrdma/ocrdma_main.c index 1f393842453a..b014eeffbcce 100644 --- a/drivers/infiniband/hw/ocrdma/ocrdma_main.c +++ b/drivers/infiniband/hw/ocrdma/ocrdma_main.c @@ -62,7 +62,7 @@ MODULE_DESCRIPTION(OCRDMA_ROCE_DRV_DESC " " OCRDMA_ROCE_DRV_VERSION); MODULE_AUTHOR("Emulex Corporation"); MODULE_LICENSE("Dual BSD/GPL"); -static DEFINE_IDR(ocrdma_dev_id); +static DEFINE_IDA(ocrdma_dev_id); void ocrdma_get_guid(struct ocrdma_dev *dev, u8 *guid) { @@ -302,12 +302,12 @@ static struct ocrdma_dev *ocrdma_add(struct be_dev_info *dev_info) } dev->mbx_cmd = kzalloc(sizeof(struct ocrdma_mqe_emb_cmd), GFP_KERNEL); if (!dev->mbx_cmd) - goto idr_err; + goto ida_err; memcpy(&dev->nic_info, dev_info, sizeof(*dev_info)); - dev->id = idr_alloc(&ocrdma_dev_id, NULL, 0, 0, GFP_KERNEL); + dev->id = ida_alloc(&ocrdma_dev_id, GFP_KERNEL); if (dev->id < 0) - goto idr_err; + goto ida_err; status = ocrdma_init_hw(dev); if (status) @@ -345,8 +345,8 @@ static struct ocrdma_dev *ocrdma_add(struct be_dev_info *dev_info) ocrdma_free_resources(dev); ocrdma_cleanup_hw(dev); init_err: - idr_remove(&ocrdma_dev_id, dev->id); -idr_err: + ida_free(&ocrdma_dev_id, dev->id); +ida_err: kfree(dev->mbx_cmd); ib_dealloc_device(&dev->ibdev); pr_err("%s() leaving. 
ret=%d\n", __func__, status); @@ -355,8 +355,7 @@ static struct ocrdma_dev *ocrdma_add(struct be_dev_info *dev_info) static void ocrdma_remove_free(struct ocrdma_dev *dev) { - - idr_remove(&ocrdma_dev_id, dev->id); + ida_free(&ocrdma_dev_id, dev->id); kfree(dev->mbx_cmd); ib_dealloc_device(&dev->ibdev); } @@ -461,7 +460,6 @@ static void __exit ocrdma_exit_module(void) { be_roce_unregister_driver(&ocrdma_drv); ocrdma_rem_debugfs(); - idr_destroy(&ocrdma_dev_id); } module_init(ocrdma_init_module); From patchwork Thu Feb 21 00:21:03 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822907 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 730B613A4 for ; Thu, 21 Feb 2019 00:21:17 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 62A7F2FC7C for ; Thu, 21 Feb 2019 00:21:17 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 56F952FC82; Thu, 21 Feb 2019 00:21:17 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E1E272FC7C for ; Thu, 21 Feb 2019 00:21:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726842AbfBUAVP (ORCPT ); Wed, 20 Feb 2019 19:21:15 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52512 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726412AbfBUAVO (ORCPT ); Wed, 20 Feb 2019 19:21:14 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=/wRWzBXhXKmtVutuJXimTSr/Nx3eI+M69zRcTPsYPKU=; b=rqblRxfh6oOMla2xYqojyTlyB LDbzHxMJw7a4LlREzV9WybrKZcihIZwUrZQkqsQg61cBPI2FNPu+4HG5xLePhPEiMZi69nUtw5+aB 7HV18RRmniwRPtvPS0mfEvA6N6fWYKiZXpkjfQ1qKrqlNW4I/51T88opMEEfSLJQr0s+yYUhQUvrW MTLwA/NElRWI6k6aNXBMHXgbcF3Ux5Y40l2dlqclZc0enb1YcjJRr1fVYpcKx01MW0YR+VPJUQdjb Z0TSu3Ptcf3632Gr4LQOPefeXp3ObUlPjbrPbnqd4Y300L5bThBKiVYICG1PtAe/RgAnbOz5MYxeP doqVRVFiA==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc6z-0005xU-Rj; Thu, 21 Feb 2019 00:21:13 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 28/32] ucm: Convert ctx_id_table to XArray Date: Wed, 20 Feb 2019 16:21:03 -0800 Message-Id: <20190221002107.22625-29-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP 
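Replace the ctx_id_table IDR and its ctx_id_mutex with an allocating XArray; the XArray's own spinlock now covers the "look up and take a reference" and "look up and erase" paths that the mutex used to protect. The locking shape, reduced to a sketch with placeholder names (struct obj and obj_table are illustrative, not the ucm types):

    #include <linux/refcount.h>
    #include <linux/types.h>
    #include <linux/xarray.h>

    static DEFINE_XARRAY_ALLOC(obj_table);  /* IDs handed out by xa_alloc() */

    struct obj {
            u32 id;
            refcount_t ref;
    };

    static int obj_add(struct obj *o)
    {
            /* Allocate a free 32-bit ID and store o under it. */
            return xa_alloc(&obj_table, &o->id, o, xa_limit_32b, GFP_KERNEL);
    }

    /* Find an object and take a reference, atomically w.r.t. removal. */
    static struct obj *obj_get(u32 id)
    {
            struct obj *o;

            xa_lock(&obj_table);
            o = xa_load(&obj_table, id);
            if (o)
                    refcount_inc(&o->ref);
            xa_unlock(&obj_table);
            return o;
    }

    /* Find and remove in one critical section; __xa_erase() needs the lock held. */
    static struct obj *obj_steal(u32 id)
    {
            struct obj *o;

            xa_lock(&obj_table);
            o = xa_load(&obj_table, id);
            if (o)
                    __xa_erase(&obj_table, id);
            xa_unlock(&obj_table);
            return o;
    }

Since an XArray that is already empty needs no teardown, the idr_destroy() call in module cleanup has no equivalent and is simply dropped.
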
Signed-off-by: Matthew Wilcox --- drivers/infiniband/core/ucm.c | 34 ++++++++++++---------------------- 1 file changed, 12 insertions(+), 22 deletions(-) diff --git a/drivers/infiniband/core/ucm.c b/drivers/infiniband/core/ucm.c index 7541fbaf58a3..c4f822811faa 100644 --- a/drivers/infiniband/core/ucm.c +++ b/drivers/infiniband/core/ucm.c @@ -42,7 +42,7 @@ #include #include #include -#include +#include #include #include @@ -125,23 +125,22 @@ static struct ib_client ucm_client = { .remove = ib_ucm_remove_one }; -static DEFINE_MUTEX(ctx_id_mutex); -static DEFINE_IDR(ctx_id_table); +static DEFINE_XARRAY_ALLOC(ctx_id_table); static DECLARE_BITMAP(dev_map, IB_UCM_MAX_DEVICES); static struct ib_ucm_context *ib_ucm_ctx_get(struct ib_ucm_file *file, int id) { struct ib_ucm_context *ctx; - mutex_lock(&ctx_id_mutex); - ctx = idr_find(&ctx_id_table, id); + xa_lock(&ctx_id_table); + ctx = xa_load(&ctx_id_table, id); if (!ctx) ctx = ERR_PTR(-ENOENT); else if (ctx->file != file) ctx = ERR_PTR(-EINVAL); else atomic_inc(&ctx->ref); - mutex_unlock(&ctx_id_mutex); + xa_unlock(&ctx_id_table); return ctx; } @@ -194,10 +193,7 @@ static struct ib_ucm_context *ib_ucm_ctx_alloc(struct ib_ucm_file *file) ctx->file = file; INIT_LIST_HEAD(&ctx->events); - mutex_lock(&ctx_id_mutex); - ctx->id = idr_alloc(&ctx_id_table, ctx, 0, 0, GFP_KERNEL); - mutex_unlock(&ctx_id_mutex); - if (ctx->id < 0) + if (xa_alloc(&ctx_id_table, &ctx->id, ctx, xa_limit_32b, GFP_KERNEL)) goto error; list_add_tail(&ctx->file_list, &file->ctxs); @@ -514,9 +510,7 @@ static ssize_t ib_ucm_create_id(struct ib_ucm_file *file, err2: ib_destroy_cm_id(ctx->cm_id); err1: - mutex_lock(&ctx_id_mutex); - idr_remove(&ctx_id_table, ctx->id); - mutex_unlock(&ctx_id_mutex); + xa_erase(&ctx_id_table, ctx->id); kfree(ctx); return result; } @@ -536,15 +530,15 @@ static ssize_t ib_ucm_destroy_id(struct ib_ucm_file *file, if (copy_from_user(&cmd, inbuf, sizeof(cmd))) return -EFAULT; - mutex_lock(&ctx_id_mutex); - ctx = idr_find(&ctx_id_table, cmd.id); + xa_lock(&ctx_id_table); + ctx = xa_load(&ctx_id_table, cmd.id); if (!ctx) ctx = ERR_PTR(-ENOENT); else if (ctx->file != file) ctx = ERR_PTR(-EINVAL); else - idr_remove(&ctx_id_table, ctx->id); - mutex_unlock(&ctx_id_mutex); + __xa_erase(&ctx_id_table, ctx->id); + xa_unlock(&ctx_id_table); if (IS_ERR(ctx)) return PTR_ERR(ctx); @@ -1189,10 +1183,7 @@ static int ib_ucm_close(struct inode *inode, struct file *filp) struct ib_ucm_context, file_list); mutex_unlock(&file->file_mutex); - mutex_lock(&ctx_id_mutex); - idr_remove(&ctx_id_table, ctx->id); - mutex_unlock(&ctx_id_mutex); - + xa_erase(&ctx_id_table, ctx->id); ib_destroy_cm_id(ctx->cm_id); ib_ucm_cleanup_events(ctx); kfree(ctx); @@ -1352,7 +1343,6 @@ static void __exit ib_ucm_cleanup(void) class_remove_file(&cm_class, &class_attr_abi_version.attr); unregister_chrdev_region(IB_UCM_BASE_DEV, IB_UCM_NUM_FIXED_MINOR); unregister_chrdev_region(dynamic_ucm_dev, IB_UCM_NUM_DYNAMIC_MINOR); - idr_destroy(&ctx_id_table); } module_init(ib_ucm_init); From patchwork Thu Feb 21 00:21:04 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822911 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 26E551805 for ; Thu, 21 Feb 2019 00:21:18 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org 
(Postfix) with ESMTP id 157152FC4D for ; Thu, 21 Feb 2019 00:21:18 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 09CF32FC81; Thu, 21 Feb 2019 00:21:18 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A0CB22FC7C for ; Thu, 21 Feb 2019 00:21:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726953AbfBUAVQ (ORCPT ); Wed, 20 Feb 2019 19:21:16 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52516 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726937AbfBUAVO (ORCPT ); Wed, 20 Feb 2019 19:21:14 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=aDuiHe9aWCn+0nL22jaN9ScdaaLMiWNSihu/j7khyQ0=; b=RsVfUW6kbOxehUkGlAUKIZ7f3 Q02LU81sxiY9nRXdJGE8EbTDLp7mGV35Q2FGYyNLJywz0uO96pcjarXcbtZ/uTD4s3wscj8wVKoOj 9Kun4EA+evhnXDkfV1rPIKYnu1oxi4LXg3qGVUEHMWIeouiD4T5xCJqDs+e8HS8DrWRe8pxs8lRGm XczInqmhLipy4MpTbUrUZAnnASzPolnLVm0lmRXBw0+pwnapBwLy+vv73+psp3dByyDwNJ7T4x8kK x+He5snqXis4/zWL8jlfzYiE3iw/+l2QGxLjGt+KMbBNKIBPtqp5K+ZPfdDXp78HTh/lVzvRxFaFR 9IcOC23kQ==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc70-0005xo-0Z; Thu, 21 Feb 2019 00:21:14 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 29/32] ucma: Convert multicast_idr to XArray Date: Wed, 20 Feb 2019 16:21:04 -0800 Message-Id: <20190221002107.22625-30-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox --- drivers/infiniband/core/ucma.c | 26 +++++++++----------------- 1 file changed, 9 insertions(+), 17 deletions(-) diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c index 01d68ed46c1b..9a30068d4c1d 100644 --- a/drivers/infiniband/core/ucma.c +++ b/drivers/infiniband/core/ucma.c @@ -124,7 +124,7 @@ struct ucma_event { static DEFINE_MUTEX(mut); static DEFINE_IDR(ctx_idr); -static DEFINE_IDR(multicast_idr); +static DEFINE_XARRAY_ALLOC(multicast_table); static const struct file_operations ucma_fops; @@ -238,10 +238,7 @@ static struct ucma_multicast* ucma_alloc_multicast(struct ucma_context *ctx) if (!mc) return NULL; - mutex_lock(&mut); - mc->id = idr_alloc(&multicast_idr, NULL, 0, 0, GFP_KERNEL); - mutex_unlock(&mut); - if (mc->id < 0) + if (xa_alloc(&multicast_table, &mc->id, NULL, xa_limit_32b, GFP_KERNEL)) goto error; mc->ctx = ctx; @@ -540,7 +537,7 @@ static void ucma_cleanup_multicast(struct ucma_context *ctx) mutex_lock(&mut); list_for_each_entry_safe(mc, tmp, &ctx->mc_list, list) { 
list_del(&mc->list); - idr_remove(&multicast_idr, mc->id); + xa_erase(&multicast_table, mc->id); kfree(mc); } mutex_unlock(&mut); @@ -1425,9 +1422,7 @@ static ssize_t ucma_process_join(struct ucma_file *file, goto err3; } - mutex_lock(&mut); - idr_replace(&multicast_idr, mc, mc->id); - mutex_unlock(&mut); + xa_store(&multicast_table, mc->id, mc, 0); mutex_unlock(&file->mut); ucma_put_ctx(ctx); @@ -1437,9 +1432,7 @@ static ssize_t ucma_process_join(struct ucma_file *file, rdma_leave_multicast(ctx->cm_id, (struct sockaddr *) &mc->addr); ucma_cleanup_mc_events(mc); err2: - mutex_lock(&mut); - idr_remove(&multicast_idr, mc->id); - mutex_unlock(&mut); + xa_erase(&multicast_table, mc->id); list_del(&mc->list); kfree(mc); err1: @@ -1501,8 +1494,8 @@ static ssize_t ucma_leave_multicast(struct ucma_file *file, if (copy_from_user(&cmd, inbuf, sizeof(cmd))) return -EFAULT; - mutex_lock(&mut); - mc = idr_find(&multicast_idr, cmd.id); + xa_lock(&multicast_table); + mc = xa_load(&multicast_table, cmd.id); if (!mc) mc = ERR_PTR(-ENOENT); else if (mc->ctx->file != file) @@ -1510,8 +1503,8 @@ static ssize_t ucma_leave_multicast(struct ucma_file *file, else if (!atomic_inc_not_zero(&mc->ctx->ref)) mc = ERR_PTR(-ENXIO); else - idr_remove(&multicast_idr, mc->id); - mutex_unlock(&mut); + __xa_erase(&multicast_table, mc->id); + xa_unlock(&multicast_table); if (IS_ERR(mc)) { ret = PTR_ERR(mc); @@ -1840,7 +1833,6 @@ static void __exit ucma_cleanup(void) device_remove_file(ucma_misc.this_device, &dev_attr_abi_version); misc_deregister(&ucma_misc); idr_destroy(&ctx_idr); - idr_destroy(&multicast_idr); } module_init(ucma_init); From patchwork Thu Feb 21 00:21:05 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822915 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0109A13A4 for ; Thu, 21 Feb 2019 00:21:19 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E39012FC4D for ; Thu, 21 Feb 2019 00:21:18 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id D7E9D2FC81; Thu, 21 Feb 2019 00:21:18 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 599282FC7C for ; Thu, 21 Feb 2019 00:21:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726949AbfBUAVQ (ORCPT ); Wed, 20 Feb 2019 19:21:16 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52520 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726941AbfBUAVO (ORCPT ); Wed, 20 Feb 2019 19:21:14 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: 
List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=o1YqJhsbyeKNAunehkJ5Ggct9yfHuLv2JDj5ZT8qQnQ=; b=ZkNy23LuvV3cPCG5vzliZ8CsX +8KUa8w9ifG4w44xCRvXS934heNIcvji4XlbaP2VxE2U5IzbVHirfy64SusBdoi8AEmQyN3CQBHj9 xbRzW5yCMWxtYO9CLACsv34jxnJ67bXPVcWeYeGNJ46d/9pzN1oz2+UIX9twBwMRm6dWs9fxEZEO3 fiUfFrjeACSrIj0sL9MdEs4f8ysdTANvgYgoBp+z2SuTtjhZ6EmCKphFb9iuJIpigf2B/mTmHj3NW XZwbpkjI0uyLtd2VDY0RnUPPopX00rbxPerRtgiDhQblfFmuwiHZTcGPh+3ici53UhBNQuXu15ckw Lrbq+UGTA==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc70-0005y6-5Q; Thu, 21 Feb 2019 00:21:14 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 30/32] ucma: Convert ctx_idr to XArray Date: Wed, 20 Feb 2019 16:21:05 -0800 Message-Id: <20190221002107.22625-31-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox --- drivers/infiniband/core/ucma.c | 56 ++++++++++++++-------------------- 1 file changed, 23 insertions(+), 33 deletions(-) diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c index 9a30068d4c1d..b5140f221ca7 100644 --- a/drivers/infiniband/core/ucma.c +++ b/drivers/infiniband/core/ucma.c @@ -94,7 +94,7 @@ struct ucma_context { struct list_head list; struct list_head mc_list; /* mark that device is in process of destroying the internal HW - * resources, protected by the global mut + * resources, protected by the ctx_table lock */ int closing; /* sync between removal event and id destroy, protected by file mut */ @@ -122,8 +122,7 @@ struct ucma_event { struct work_struct close_work; }; -static DEFINE_MUTEX(mut); -static DEFINE_IDR(ctx_idr); +static DEFINE_XARRAY_ALLOC(ctx_table); static DEFINE_XARRAY_ALLOC(multicast_table); static const struct file_operations ucma_fops; @@ -133,7 +132,7 @@ static inline struct ucma_context *_ucma_find_context(int id, { struct ucma_context *ctx; - ctx = idr_find(&ctx_idr, id); + ctx = xa_load(&ctx_table, id); if (!ctx) ctx = ERR_PTR(-ENOENT); else if (ctx->file != file || !ctx->cm_id) @@ -145,7 +144,7 @@ static struct ucma_context *ucma_get_ctx(struct ucma_file *file, int id) { struct ucma_context *ctx; - mutex_lock(&mut); + xa_lock(&ctx_table); ctx = _ucma_find_context(id, file); if (!IS_ERR(ctx)) { if (ctx->closing) @@ -153,7 +152,7 @@ static struct ucma_context *ucma_get_ctx(struct ucma_file *file, int id) else atomic_inc(&ctx->ref); } - mutex_unlock(&mut); + xa_unlock(&ctx_table); return ctx; } @@ -216,10 +215,7 @@ static struct ucma_context *ucma_alloc_ctx(struct ucma_file *file) INIT_LIST_HEAD(&ctx->mc_list); ctx->file = file; - mutex_lock(&mut); - ctx->id = idr_alloc(&ctx_idr, ctx, 0, 0, GFP_KERNEL); - mutex_unlock(&mut); - if (ctx->id < 0) + if (xa_alloc(&ctx_table, &ctx->id, ctx, xa_limit_32b, GFP_KERNEL)) goto error; list_add_tail(&ctx->list, &file->ctx_list); @@ -316,9 +312,9 @@ static void ucma_removal_event_handler(struct rdma_cm_id *cm_id) * handled separately below. 
*/ if (ctx->cm_id == cm_id) { - mutex_lock(&mut); + xa_lock(&ctx_table); ctx->closing = 1; - mutex_unlock(&mut); + xa_unlock(&ctx_table); queue_work(ctx->file->close_wq, &ctx->close_work); return; } @@ -520,9 +516,7 @@ static ssize_t ucma_create_id(struct ucma_file *file, const char __user *inbuf, err2: rdma_destroy_id(cm_id); err1: - mutex_lock(&mut); - idr_remove(&ctx_idr, ctx->id); - mutex_unlock(&mut); + xa_erase(&ctx_table, ctx->id); mutex_lock(&file->mut); list_del(&ctx->list); mutex_unlock(&file->mut); @@ -534,13 +528,13 @@ static void ucma_cleanup_multicast(struct ucma_context *ctx) { struct ucma_multicast *mc, *tmp; - mutex_lock(&mut); + mutex_lock(&ctx->file->mut); list_for_each_entry_safe(mc, tmp, &ctx->mc_list, list) { list_del(&mc->list); xa_erase(&multicast_table, mc->id); kfree(mc); } - mutex_unlock(&mut); + mutex_unlock(&ctx->file->mut); } static void ucma_cleanup_mc_events(struct ucma_multicast *mc) @@ -611,11 +605,11 @@ static ssize_t ucma_destroy_id(struct ucma_file *file, const char __user *inbuf, if (copy_from_user(&cmd, inbuf, sizeof(cmd))) return -EFAULT; - mutex_lock(&mut); + xa_lock(&ctx_table); ctx = _ucma_find_context(cmd.id, file); if (!IS_ERR(ctx)) - idr_remove(&ctx_idr, ctx->id); - mutex_unlock(&mut); + __xa_erase(&ctx_table, ctx->id); + xa_unlock(&ctx_table); if (IS_ERR(ctx)) return PTR_ERR(ctx); @@ -627,14 +621,14 @@ static ssize_t ucma_destroy_id(struct ucma_file *file, const char __user *inbuf, flush_workqueue(ctx->file->close_wq); /* At this point it's guaranteed that there is no inflight * closing task */ - mutex_lock(&mut); + xa_lock(&ctx_table); if (!ctx->closing) { - mutex_unlock(&mut); + xa_unlock(&ctx_table); ucma_put_ctx(ctx); wait_for_completion(&ctx->comp); rdma_destroy_id(ctx->cm_id); } else { - mutex_unlock(&mut); + xa_unlock(&ctx_table); } resp.events_reported = ucma_free_ctx(ctx); @@ -1601,14 +1595,14 @@ static ssize_t ucma_migrate_id(struct ucma_file *new_file, * events being added before existing events. */ ucma_lock_files(cur_file, new_file); - mutex_lock(&mut); + xa_lock(&ctx_table); list_move_tail(&ctx->list, &new_file->ctx_list); ucma_move_events(ctx, new_file); ctx->file = new_file; resp.events_reported = ctx->events_reported; - mutex_unlock(&mut); + xa_unlock(&ctx_table); ucma_unlock_files(cur_file, new_file); response: @@ -1743,18 +1737,15 @@ static int ucma_close(struct inode *inode, struct file *filp) ctx->destroying = 1; mutex_unlock(&file->mut); - mutex_lock(&mut); - idr_remove(&ctx_idr, ctx->id); - mutex_unlock(&mut); - + xa_erase(&ctx_table, ctx->id); flush_workqueue(file->close_wq); /* At that step once ctx was marked as destroying and workqueue * was flushed we are safe from any inflights handlers that * might put other closing task. 
*/ - mutex_lock(&mut); + xa_lock(&ctx_table); if (!ctx->closing) { - mutex_unlock(&mut); + xa_unlock(&ctx_table); ucma_put_ctx(ctx); wait_for_completion(&ctx->comp); /* rdma_destroy_id ensures that no event handlers are @@ -1762,7 +1753,7 @@ static int ucma_close(struct inode *inode, struct file *filp) */ rdma_destroy_id(ctx->cm_id); } else { - mutex_unlock(&mut); + xa_unlock(&ctx_table); } ucma_free_ctx(ctx); @@ -1832,7 +1823,6 @@ static void __exit ucma_cleanup(void) unregister_net_sysctl_table(ucma_ctl_table_hdr); device_remove_file(ucma_misc.this_device, &dev_attr_abi_version); misc_deregister(&ucma_misc); - idr_destroy(&ctx_idr); } module_init(ucma_init); From patchwork Thu Feb 21 00:21:06 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822905 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0C8641805 for ; Thu, 21 Feb 2019 00:21:17 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id EFFB42FC4D for ; Thu, 21 Feb 2019 00:21:16 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id E4A552FC82; Thu, 21 Feb 2019 00:21:16 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 886D92FC81 for ; Thu, 21 Feb 2019 00:21:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726412AbfBUAVP (ORCPT ); Wed, 20 Feb 2019 19:21:15 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52524 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726949AbfBUAVO (ORCPT ); Wed, 20 Feb 2019 19:21:14 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=1zhX9OUtXd3Vd6er3MqRHnQlsnFMk9l6foaiWp8dBnM=; b=chzni8ZPX0LNNKk/nHop0CdJp qFCkjMscA15AGUf6aw0dur/6P+g3XlCz2v7/iJbOIExHzMLfmVZzCFxt+SNxjPSkBrfaPRfdDtNU6 a2cTpRK+6oH9Vzk/kfUBYpZQtOsjjjx+JiPdszO8UJJUMc/8YhdQFERgdUkuhs3y6bGnmrmAiNncT XawP2Wur9RGKqgIU83ipdavNJ4rUmhFNxW7IIclMp/HY5+IHc71sSuLu1BKaJMlZSunq3inCqxVzZ 3d96+ZIz50djN/tZcg4taUiUBb9Bn36d2ckhFW9vNnO/4sQaW4wmrlOfV/fWdWBmH+Uoug3pVbLYp mbx1xv7kQ==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc70-0005yE-AP; Thu, 21 Feb 2019 00:21:14 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 31/32] cma: Convert portspace IDRs to XArray Date: Wed, 20 Feb 2019 16:21:06 -0800 Message-Id: <20190221002107.22625-32-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: 
<20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox --- drivers/infiniband/core/cma.c | 44 ++++++++++++++--------------------- 1 file changed, 17 insertions(+), 27 deletions(-) diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c index 84f077b2b90a..d3fab53eb30b 100644 --- a/drivers/infiniband/core/cma.c +++ b/drivers/infiniband/core/cma.c @@ -39,7 +39,7 @@ #include #include #include -#include +#include #include #include #include @@ -191,10 +191,10 @@ static struct workqueue_struct *cma_wq; static unsigned int cma_pernet_id; struct cma_pernet { - struct idr tcp_ps; - struct idr udp_ps; - struct idr ipoib_ps; - struct idr ib_ps; + struct xarray tcp_ps; + struct xarray udp_ps; + struct xarray ipoib_ps; + struct xarray ib_ps; }; static struct cma_pernet *cma_pernet(struct net *net) @@ -202,7 +202,8 @@ static struct cma_pernet *cma_pernet(struct net *net) return net_generic(net, cma_pernet_id); } -static struct idr *cma_pernet_idr(struct net *net, enum rdma_ucm_port_space ps) +static +struct xarray *cma_pernet_xa(struct net *net, enum rdma_ucm_port_space ps) { struct cma_pernet *pernet = cma_pernet(net); @@ -247,25 +248,25 @@ struct class_port_info_context { static int cma_ps_alloc(struct net *net, enum rdma_ucm_port_space ps, struct rdma_bind_list *bind_list, int snum) { - struct idr *idr = cma_pernet_idr(net, ps); + struct xarray *xa = cma_pernet_xa(net, ps); - return idr_alloc(idr, bind_list, snum, snum + 1, GFP_KERNEL); + return xa_insert(xa, snum, bind_list, GFP_KERNEL); } static struct rdma_bind_list *cma_ps_find(struct net *net, enum rdma_ucm_port_space ps, int snum) { - struct idr *idr = cma_pernet_idr(net, ps); + struct xarray *xa = cma_pernet_xa(net, ps); - return idr_find(idr, snum); + return xa_load(xa, snum); } static void cma_ps_remove(struct net *net, enum rdma_ucm_port_space ps, int snum) { - struct idr *idr = cma_pernet_idr(net, ps); + struct xarray *xa = cma_pernet_xa(net, ps); - idr_remove(idr, snum); + xa_erase(xa, snum); } enum { @@ -4688,27 +4689,16 @@ static int cma_init_net(struct net *net) { struct cma_pernet *pernet = cma_pernet(net); - idr_init(&pernet->tcp_ps); - idr_init(&pernet->udp_ps); - idr_init(&pernet->ipoib_ps); - idr_init(&pernet->ib_ps); + xa_init(&pernet->tcp_ps); + xa_init(&pernet->udp_ps); + xa_init(&pernet->ipoib_ps); + xa_init(&pernet->ib_ps); return 0; } -static void cma_exit_net(struct net *net) -{ - struct cma_pernet *pernet = cma_pernet(net); - - idr_destroy(&pernet->tcp_ps); - idr_destroy(&pernet->udp_ps); - idr_destroy(&pernet->ipoib_ps); - idr_destroy(&pernet->ib_ps); -} - static struct pernet_operations cma_pernet_operations = { .init = cma_init_net, - .exit = cma_exit_net, .id = &cma_pernet_id, .size = sizeof(struct cma_pernet), }; From patchwork Thu Feb 21 00:21:07 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 10822909 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9E8CD14E1 for ; Thu, 21 Feb 2019 00:21:17 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8D97F2FC4D for ; Thu, 21 Feb 2019 00:21:17 
+0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 81FAE2FC81; Thu, 21 Feb 2019 00:21:17 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3CCF42FC4D for ; Thu, 21 Feb 2019 00:21:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726941AbfBUAVQ (ORCPT ); Wed, 20 Feb 2019 19:21:16 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:52526 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726953AbfBUAVO (ORCPT ); Wed, 20 Feb 2019 19:21:14 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=References:In-Reply-To:Message-Id: Date:Subject:Cc:To:From:Sender:Reply-To:MIME-Version:Content-Type: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id: List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=WCg9ra2oSdhmVCMLNk+f5t3k9s92N6trxAtJn18G3is=; b=t6aw28+Ref/oBrZzpT+kCXOaj AMgj4k7Iibh7xM32hQxMKbckEehrCbyzLWK9rLNzUa2B2ofdQIKViX2eS+5PUP7vOLKU+i0sVir00 drbBhKYhszVsCGxLljD94JeNNEX2ZBwBDkzqemv+syYbGEBvk9ZLI2yxkLr5s4jHv9SJuJR8j1Ewa tu2I0y8pIja+NOhzndDetZkVEZINSsb7CpE9/wUGi2PzmgrV0mrQBX92zjq2RqCuKBEWsxyj511hL qcZbAcLYhzVEyrCeXKZjj/M6sWU+iZZNJZeSeBOUoNs29cU019PO0wywE3UfktGZYIqaICsLxBsrj +UkDd/trg==; Received: from willy by bombadil.infradead.org with local (Exim 4.90_1 #2 (Red Hat Linux)) id 1gwc70-0005yM-FB; Thu, 21 Feb 2019 00:21:14 +0000 From: Matthew Wilcox To: Jason Gunthorpe Cc: Matthew Wilcox , linux-rdma@vger.kernel.org Subject: [PATCH 32/32] ib/bnxt: Remove mention of idr_alloc from comment Date: Wed, 20 Feb 2019 16:21:07 -0800 Message-Id: <20190221002107.22625-33-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190221002107.22625-1-willy@infradead.org> References: <20190221002107.22625-1-willy@infradead.org> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Signed-off-by: Matthew Wilcox --- drivers/infiniband/hw/bnxt_re/ib_verbs.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c index 1e2515e2eb62..a031d9469eba 100644 --- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c +++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c @@ -3691,7 +3691,7 @@ struct ib_ucontext *bnxt_re_alloc_ucontext(struct ib_device *ibdev, } spin_lock_init(&uctx->sh_lock); - resp.dev_id = rdev->en_dev->pdev->devfn; /*Temp, Use idr_alloc instead*/ + resp.dev_id = rdev->en_dev->pdev->devfn; /*Temp, Use xa_alloc instead*/ resp.max_qp = rdev->qplib_ctx.qpc_count; resp.pg_size = PAGE_SIZE; resp.cqe_sz = sizeof(struct cq_base);
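For reference, the ucma conversions in patches 29 and 30 follow one pattern: an allocating XArray hands out the ID and stores the pointer in a single step, and the XArray's internal spinlock takes over the lookup-then-erase sections that the old global mutex used to cover. Below is a minimal, self-contained sketch of that pattern, not part of the series; the demo_* names and the simplified context structure are illustrative only and are not taken from the driver.

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/slab.h>
#include <linux/xarray.h>

struct demo_ctx {
	u32 id;
	/* ... driver-specific state ... */
};

/* One DEFINE_XARRAY_ALLOC() replaces DEFINE_IDR() plus the global mutex. */
static DEFINE_XARRAY_ALLOC(demo_table);

static struct demo_ctx *demo_ctx_alloc(void)
{
	struct demo_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);

	if (!ctx)
		return NULL;
	/* xa_alloc() picks a free ID and stores the pointer in one step. */
	if (xa_alloc(&demo_table, &ctx->id, ctx, xa_limit_32b, GFP_KERNEL)) {
		kfree(ctx);
		return NULL;
	}
	return ctx;
}

static struct demo_ctx *demo_ctx_take(u32 id)
{
	struct demo_ctx *ctx;

	/*
	 * The XArray's internal lock replaces the old external mutex, so the
	 * lookup and the removal stay atomic with respect to each other.
	 */
	xa_lock(&demo_table);
	ctx = xa_load(&demo_table, id);
	if (ctx)
		__xa_erase(&demo_table, id);
	xa_unlock(&demo_table);

	return ctx;
}

A bare xa_load() is already safe for read-only lookups without taking the lock; xa_lock() is only needed here to keep the check and the erase atomic, which is the same role the global mutex played before the conversion.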
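The cma port-space conversion in patch 31 is a different, simpler pattern: the index is the port number itself, so the old single-slot idr_alloc(idr, bind_list, snum, snum + 1, GFP_KERNEL) becomes xa_insert() at that index, and the idr_destroy() teardown disappears because an empty XArray holds no allocated nodes. A hypothetical sketch of that pattern follows; the demo_port_* names are not taken from cma.c.

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/xarray.h>

/* A plain XArray is enough here: the index is the port number itself. */
static DEFINE_XARRAY(demo_port_space);

static int demo_port_claim(unsigned int snum, void *bind_list)
{
	/*
	 * xa_insert() stores the entry only if the index is free and
	 * returns a negative errno (such as -EBUSY) if the port number
	 * is already occupied.
	 */
	return xa_insert(&demo_port_space, snum, bind_list, GFP_KERNEL);
}

static void *demo_port_find(unsigned int snum)
{
	return xa_load(&demo_port_space, snum);
}

static void demo_port_release(unsigned int snum)
{
	xa_erase(&demo_port_space, snum);
	/* No idr_destroy() counterpart: once empty, the XArray frees itself. */
}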