From patchwork Wed Oct 20 22:05:44 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12573449
X-Patchwork-Delegate: jgg@ziepe.ca
Received: from ubunto-21.tx.rr.com (2603-8081-140c-1a00-8d65-dc0b-4dcc-2f9b.res6.spectrum.com.
[2603:8081:140c:1a00:8d65:dc0b:4dcc:2f9b]) by smtp.gmail.com with ESMTPSA id v13sm725050otn.41.2021.10.20.15.07.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 20 Oct 2021 15:07:26 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next 1/7] RDMA/rxe: Replace irqsave locks with bh locks Date: Wed, 20 Oct 2021 17:05:44 -0500 Message-Id: <20211020220549.36145-2-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211020220549.36145-1-rpearsonhpe@gmail.com> References: <20211020220549.36145-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Most of the locks in the rxe driver are _irqsave/restore locks but in fact there are no interrupt threads that run rxe code or share data with rxe. There are softirq threads and data sharing so the appropriate lock type is _bh. This patch replaces all irqsave type locks with bh type locks. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_comp.c | 8 ++--- drivers/infiniband/sw/rxe/rxe_cq.c | 19 +++++------- drivers/infiniband/sw/rxe/rxe_mcast.c | 10 +++---- drivers/infiniband/sw/rxe/rxe_mw.c | 15 ++++------ drivers/infiniband/sw/rxe/rxe_pool.c | 42 +++++++++++---------------- drivers/infiniband/sw/rxe/rxe_queue.c | 9 +++--- drivers/infiniband/sw/rxe/rxe_req.c | 11 +++---- drivers/infiniband/sw/rxe/rxe_task.c | 18 +++++------- drivers/infiniband/sw/rxe/rxe_verbs.c | 27 +++++++---------- 9 files changed, 65 insertions(+), 94 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c index d771ba8449a1..f363fe3fa414 100644 --- a/drivers/infiniband/sw/rxe/rxe_comp.c +++ b/drivers/infiniband/sw/rxe/rxe_comp.c @@ -458,8 +458,6 @@ static inline enum comp_state complete_ack(struct rxe_qp *qp, struct rxe_pkt_info *pkt, struct rxe_send_wqe *wqe) { - unsigned long flags; - if (wqe->has_rd_atomic) { wqe->has_rd_atomic = 0; atomic_inc(&qp->req.rd_atomic); @@ -472,11 +470,11 @@ static inline enum comp_state complete_ack(struct rxe_qp *qp, if (unlikely(qp->req.state == QP_STATE_DRAIN)) { /* state_lock used by requester & completer */ - spin_lock_irqsave(&qp->state_lock, flags); + spin_lock_bh(&qp->state_lock); if ((qp->req.state == QP_STATE_DRAIN) && (qp->comp.psn == qp->req.psn)) { qp->req.state = QP_STATE_DRAINED; - spin_unlock_irqrestore(&qp->state_lock, flags); + spin_unlock_bh(&qp->state_lock); if (qp->ibqp.event_handler) { struct ib_event ev; @@ -488,7 +486,7 @@ static inline enum comp_state complete_ack(struct rxe_qp *qp, qp->ibqp.qp_context); } } else { - spin_unlock_irqrestore(&qp->state_lock, flags); + spin_unlock_bh(&qp->state_lock); } } diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c index 0c05d612ae63..dda510e4d904 100644 --- a/drivers/infiniband/sw/rxe/rxe_cq.c +++ b/drivers/infiniband/sw/rxe/rxe_cq.c @@ -42,14 +42,13 @@ int rxe_cq_chk_attr(struct rxe_dev *rxe, struct rxe_cq *cq, static void rxe_send_complete(struct tasklet_struct *t) { struct rxe_cq *cq = from_tasklet(cq, t, comp_task); - unsigned long flags; - spin_lock_irqsave(&cq->cq_lock, flags); + spin_lock_bh(&cq->cq_lock); if (cq->is_dying) { - spin_unlock_irqrestore(&cq->cq_lock, flags); + spin_unlock_bh(&cq->cq_lock); return; } - spin_unlock_irqrestore(&cq->cq_lock, flags); + spin_unlock_bh(&cq->cq_lock); cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context); } @@ -106,15 +105,14 @@ int rxe_cq_resize_queue(struct rxe_cq *cq, 
int cqe, int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited) { struct ib_event ev; - unsigned long flags; int full; void *addr; - spin_lock_irqsave(&cq->cq_lock, flags); + spin_lock_bh(&cq->cq_lock); full = queue_full(cq->queue, QUEUE_TYPE_TO_CLIENT); if (unlikely(full)) { - spin_unlock_irqrestore(&cq->cq_lock, flags); + spin_unlock_bh(&cq->cq_lock); if (cq->ibcq.event_handler) { ev.device = cq->ibcq.device; ev.element.cq = &cq->ibcq; @@ -130,7 +128,7 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited) queue_advance_producer(cq->queue, QUEUE_TYPE_TO_CLIENT); - spin_unlock_irqrestore(&cq->cq_lock, flags); + spin_unlock_bh(&cq->cq_lock); if ((cq->notify == IB_CQ_NEXT_COMP) || (cq->notify == IB_CQ_SOLICITED && solicited)) { @@ -144,12 +142,11 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited) void rxe_cq_cleanup(struct rxe_pool_entry *arg) { struct rxe_cq *cq = container_of(arg, typeof(*cq), pelem); - unsigned long flags; /* TODO get rid of this */ - spin_lock_irqsave(&cq->cq_lock, flags); + spin_lock_bh(&cq->cq_lock); cq->is_dying = true; - spin_unlock_irqrestore(&cq->cq_lock, flags); + spin_unlock_bh(&cq->cq_lock); if (cq->queue) rxe_queue_cleanup(cq->queue); diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index 685440a20669..a05487ca628e 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -33,14 +33,13 @@ static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid, { struct rxe_pool *pool = &rxe->mc_grp_pool; struct rxe_mc_grp *grp; - unsigned long flags; int err = 0; /* Perform this while holding the mc_grp_pool lock * to prevent races where two coincident calls fail to lookup the * same group and then both create the same group. 
*/ - write_lock_irqsave(&pool->pool_lock, flags); + write_lock_bh(&pool->pool_lock); grp = rxe_pool_get_key_locked(pool, mgid); if (grp) goto done; @@ -66,7 +65,7 @@ static int rxe_mcast_get_grp(struct rxe_dev *rxe, union ib_gid *mgid, rxe_add_ref_locked(grp); done: *grp_p = grp; - write_unlock_irqrestore(&pool->pool_lock, flags); + write_unlock_bh(&pool->pool_lock); return err; } @@ -75,9 +74,8 @@ static void rxe_mcast_put_grp(struct rxe_mc_grp *grp) { struct rxe_dev *rxe = grp->rxe; struct rxe_pool *pool = &rxe->mc_grp_pool; - unsigned long flags; - write_lock_irqsave(&pool->pool_lock, flags); + write_lock_bh(&pool->pool_lock); rxe_drop_ref_locked(grp); @@ -86,7 +84,7 @@ static void rxe_mcast_put_grp(struct rxe_mc_grp *grp) rxe_fini_ref_locked(grp); } - write_unlock_irqrestore(&pool->pool_lock, flags); + write_unlock_bh(&pool->pool_lock); } /** diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c index 599699f93332..7c264599b3d4 100644 --- a/drivers/infiniband/sw/rxe/rxe_mw.c +++ b/drivers/infiniband/sw/rxe/rxe_mw.c @@ -55,12 +55,11 @@ int rxe_dealloc_mw(struct ib_mw *ibmw) { struct rxe_mw *mw = to_rmw(ibmw); struct rxe_pd *pd = to_rpd(ibmw->pd); - unsigned long flags; int err; - spin_lock_irqsave(&mw->lock, flags); + spin_lock_bh(&mw->lock); rxe_do_dealloc_mw(mw); - spin_unlock_irqrestore(&mw->lock, flags); + spin_unlock_bh(&mw->lock); err = rxe_fini_ref(mw); if (err) @@ -199,7 +198,6 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe) struct rxe_dev *rxe = to_rdev(qp->ibqp.device); u32 mw_rkey = wqe->wr.wr.mw.mw_rkey; u32 mr_lkey = wqe->wr.wr.mw.mr_lkey; - unsigned long flags; mw = rxe_pool_get_index(&rxe->mw_pool, mw_rkey >> 8); if (unlikely(!mw)) { @@ -226,7 +224,7 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe) } } - spin_lock_irqsave(&mw->lock, flags); + spin_lock_bh(&mw->lock); ret = rxe_check_bind_mw(qp, wqe, mw, mr); if (ret) { if (mr) @@ -236,7 +234,7 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe) rxe_do_bind_mw(qp, wqe, mw, mr); err_unlock: - spin_unlock_irqrestore(&mw->lock, flags); + spin_unlock_bh(&mw->lock); err_drop_mw: rxe_drop_ref(mw); err: @@ -280,7 +278,6 @@ static void rxe_do_invalidate_mw(struct rxe_mw *mw) int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey) { struct rxe_dev *rxe = to_rdev(qp->ibqp.device); - unsigned long flags; struct rxe_mw *mw; int ret; @@ -295,7 +292,7 @@ int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey) goto err_drop_ref; } - spin_lock_irqsave(&mw->lock, flags); + spin_lock_bh(&mw->lock); ret = rxe_check_invalidate_mw(qp, mw); if (ret) @@ -303,7 +300,7 @@ int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey) rxe_do_invalidate_mw(mw); err_unlock: - spin_unlock_irqrestore(&mw->lock, flags); + spin_unlock_bh(&mw->lock); err_drop_ref: rxe_drop_ref(mw); err: diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index 59f1a1919e30..58f826ab3bc6 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -342,34 +342,31 @@ void *rxe_alloc_with_key_locked(struct rxe_pool *pool, void *key) void *rxe_alloc(struct rxe_pool *pool) { - unsigned long flags; void *obj; - write_lock_irqsave(&pool->pool_lock, flags); + write_lock_bh(&pool->pool_lock); obj = rxe_alloc_locked(pool); - write_unlock_irqrestore(&pool->pool_lock, flags); + write_unlock_bh(&pool->pool_lock); return obj; } void *rxe_alloc_with_key(struct rxe_pool *pool, void *key) { - unsigned long flags; void *obj; - write_lock_irqsave(&pool->pool_lock, 
flags); + write_lock_bh(&pool->pool_lock); obj = rxe_alloc_with_key_locked(pool, key); - write_unlock_irqrestore(&pool->pool_lock, flags); + write_unlock_bh(&pool->pool_lock); return obj; } int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem) { - unsigned long flags; int err; - write_lock_irqsave(&pool->pool_lock, flags); + write_lock_bh(&pool->pool_lock); if (atomic_inc_return(&pool->num_elem) > pool->max_elem) goto out_cnt; @@ -383,13 +380,13 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem) } refcount_set(&elem->refcnt, 1); - write_unlock_irqrestore(&pool->pool_lock, flags); + write_unlock_bh(&pool->pool_lock); return 0; out_cnt: atomic_dec(&pool->num_elem); - write_unlock_irqrestore(&pool->pool_lock, flags); + write_unlock_bh(&pool->pool_lock); return -EINVAL; } @@ -421,11 +418,10 @@ void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index) void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) { void *obj; - unsigned long flags; - read_lock_irqsave(&pool->pool_lock, flags); + read_lock_bh(&pool->pool_lock); obj = rxe_pool_get_index_locked(pool, index); - read_unlock_irqrestore(&pool->pool_lock, flags); + read_unlock_bh(&pool->pool_lock); return obj; } @@ -462,11 +458,10 @@ void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key) void *rxe_pool_get_key(struct rxe_pool *pool, void *key) { void *obj; - unsigned long flags; - read_lock_irqsave(&pool->pool_lock, flags); + read_lock_bh(&pool->pool_lock); obj = rxe_pool_get_key_locked(pool, key); - read_unlock_irqrestore(&pool->pool_lock, flags); + read_unlock_bh(&pool->pool_lock); return obj; } @@ -485,12 +480,11 @@ int __rxe_add_ref_locked(struct rxe_pool_entry *elem) int __rxe_add_ref(struct rxe_pool_entry *elem) { struct rxe_pool *pool = elem->pool; - unsigned long flags; int ret; - read_lock_irqsave(&pool->pool_lock, flags); + read_lock_bh(&pool->pool_lock); ret = __rxe_add_ref_locked(elem); - read_unlock_irqrestore(&pool->pool_lock, flags); + read_unlock_bh(&pool->pool_lock); return ret; } @@ -509,12 +503,11 @@ int __rxe_drop_ref_locked(struct rxe_pool_entry *elem) int __rxe_drop_ref(struct rxe_pool_entry *elem) { struct rxe_pool *pool = elem->pool; - unsigned long flags; int ret; - read_lock_irqsave(&pool->pool_lock, flags); + read_lock_bh(&pool->pool_lock); ret = __rxe_drop_ref_locked(elem); - read_unlock_irqrestore(&pool->pool_lock, flags); + read_unlock_bh(&pool->pool_lock); return ret; } @@ -560,12 +553,11 @@ int __rxe_fini_ref_locked(struct rxe_pool_entry *elem) int __rxe_fini_ref(struct rxe_pool_entry *elem) { struct rxe_pool *pool = elem->pool; - unsigned long flags; int ret; - read_lock_irqsave(&pool->pool_lock, flags); + read_lock_bh(&pool->pool_lock); ret = __rxe_fini(elem); - read_unlock_irqrestore(&pool->pool_lock, flags); + read_unlock_bh(&pool->pool_lock); if (!ret) { if (pool->cleanup) diff --git a/drivers/infiniband/sw/rxe/rxe_queue.c b/drivers/infiniband/sw/rxe/rxe_queue.c index 6e6e023c1b45..a1b283dd2d4c 100644 --- a/drivers/infiniband/sw/rxe/rxe_queue.c +++ b/drivers/infiniband/sw/rxe/rxe_queue.c @@ -151,7 +151,6 @@ int rxe_queue_resize(struct rxe_queue *q, unsigned int *num_elem_p, struct rxe_queue *new_q; unsigned int num_elem = *num_elem_p; int err; - unsigned long flags = 0, flags1; new_q = rxe_queue_init(q->rxe, &num_elem, elem_size, q->type); if (!new_q) @@ -165,17 +164,17 @@ int rxe_queue_resize(struct rxe_queue *q, unsigned int *num_elem_p, goto err1; } - spin_lock_irqsave(consumer_lock, flags1); + spin_lock_bh(consumer_lock); if 
(producer_lock) { - spin_lock_irqsave(producer_lock, flags); + spin_lock_bh(producer_lock); err = resize_finish(q, new_q, num_elem); - spin_unlock_irqrestore(producer_lock, flags); + spin_unlock_bh(producer_lock); } else { err = resize_finish(q, new_q, num_elem); } - spin_unlock_irqrestore(consumer_lock, flags1); + spin_unlock_bh(consumer_lock); rxe_queue_cleanup(new_q); /* new/old dep on err */ if (err) diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 891cf98c74a0..7bc1ec8a5aa6 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -110,7 +110,6 @@ void rnr_nak_timer(struct timer_list *t) static struct rxe_send_wqe *req_next_wqe(struct rxe_qp *qp) { struct rxe_send_wqe *wqe; - unsigned long flags; struct rxe_queue *q = qp->sq.queue; unsigned int index = qp->req.wqe_index; unsigned int cons; @@ -124,25 +123,23 @@ static struct rxe_send_wqe *req_next_wqe(struct rxe_qp *qp) /* check to see if we are drained; * state_lock used by requester and completer */ - spin_lock_irqsave(&qp->state_lock, flags); + spin_lock_bh(&qp->state_lock); do { if (qp->req.state != QP_STATE_DRAIN) { /* comp just finished */ - spin_unlock_irqrestore(&qp->state_lock, - flags); + spin_unlock_bh(&qp->state_lock); break; } if (wqe && ((index != cons) || (wqe->state != wqe_state_posted))) { /* comp not done yet */ - spin_unlock_irqrestore(&qp->state_lock, - flags); + spin_unlock_bh(&qp->state_lock); break; } qp->req.state = QP_STATE_DRAINED; - spin_unlock_irqrestore(&qp->state_lock, flags); + spin_unlock_bh(&qp->state_lock); if (qp->ibqp.event_handler) { struct ib_event ev; diff --git a/drivers/infiniband/sw/rxe/rxe_task.c b/drivers/infiniband/sw/rxe/rxe_task.c index 6951fdcb31bf..0c4db5bb17d7 100644 --- a/drivers/infiniband/sw/rxe/rxe_task.c +++ b/drivers/infiniband/sw/rxe/rxe_task.c @@ -32,25 +32,24 @@ void rxe_do_task(struct tasklet_struct *t) { int cont; int ret; - unsigned long flags; struct rxe_task *task = from_tasklet(task, t, tasklet); - spin_lock_irqsave(&task->state_lock, flags); + spin_lock_bh(&task->state_lock); switch (task->state) { case TASK_STATE_START: task->state = TASK_STATE_BUSY; - spin_unlock_irqrestore(&task->state_lock, flags); + spin_unlock_bh(&task->state_lock); break; case TASK_STATE_BUSY: task->state = TASK_STATE_ARMED; fallthrough; case TASK_STATE_ARMED: - spin_unlock_irqrestore(&task->state_lock, flags); + spin_unlock_bh(&task->state_lock); return; default: - spin_unlock_irqrestore(&task->state_lock, flags); + spin_unlock_bh(&task->state_lock); pr_warn("%s failed with bad state %d\n", __func__, task->state); return; } @@ -59,7 +58,7 @@ void rxe_do_task(struct tasklet_struct *t) cont = 0; ret = task->func(task->arg); - spin_lock_irqsave(&task->state_lock, flags); + spin_lock_bh(&task->state_lock); switch (task->state) { case TASK_STATE_BUSY: if (ret) @@ -81,7 +80,7 @@ void rxe_do_task(struct tasklet_struct *t) pr_warn("%s failed with bad state %d\n", __func__, task->state); } - spin_unlock_irqrestore(&task->state_lock, flags); + spin_unlock_bh(&task->state_lock); } while (cont); task->ret = ret; @@ -106,7 +105,6 @@ int rxe_init_task(void *obj, struct rxe_task *task, void rxe_cleanup_task(struct rxe_task *task) { - unsigned long flags; bool idle; /* @@ -116,9 +114,9 @@ void rxe_cleanup_task(struct rxe_task *task) task->destroyed = true; do { - spin_lock_irqsave(&task->state_lock, flags); + spin_lock_bh(&task->state_lock); idle = (task->state == TASK_STATE_START); - spin_unlock_irqrestore(&task->state_lock, 
flags); + spin_unlock_bh(&task->state_lock); } while (!idle); tasklet_kill(&task->tasklet); diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 1b5084fd10ab..2b0ba33cff31 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -381,10 +381,9 @@ static int rxe_post_srq_recv(struct ib_srq *ibsrq, const struct ib_recv_wr *wr, const struct ib_recv_wr **bad_wr) { int err = 0; - unsigned long flags; struct rxe_srq *srq = to_rsrq(ibsrq); - spin_lock_irqsave(&srq->rq.producer_lock, flags); + spin_lock_bh(&srq->rq.producer_lock); while (wr) { err = post_one_recv(&srq->rq, wr); @@ -393,7 +392,7 @@ static int rxe_post_srq_recv(struct ib_srq *ibsrq, const struct ib_recv_wr *wr, wr = wr->next; } - spin_unlock_irqrestore(&srq->rq.producer_lock, flags); + spin_unlock_bh(&srq->rq.producer_lock); if (err) *bad_wr = wr; @@ -627,19 +626,18 @@ static int post_one_send(struct rxe_qp *qp, const struct ib_send_wr *ibwr, int err; struct rxe_sq *sq = &qp->sq; struct rxe_send_wqe *send_wqe; - unsigned long flags; int full; err = validate_send_wr(qp, ibwr, mask, length); if (err) return err; - spin_lock_irqsave(&qp->sq.sq_lock, flags); + spin_lock_bh(&qp->sq.sq_lock); full = queue_full(sq->queue, QUEUE_TYPE_TO_DRIVER); if (unlikely(full)) { - spin_unlock_irqrestore(&qp->sq.sq_lock, flags); + spin_unlock_bh(&qp->sq.sq_lock); return -ENOMEM; } @@ -648,7 +646,7 @@ static int post_one_send(struct rxe_qp *qp, const struct ib_send_wr *ibwr, queue_advance_producer(sq->queue, QUEUE_TYPE_TO_DRIVER); - spin_unlock_irqrestore(&qp->sq.sq_lock, flags); + spin_unlock_bh(&qp->sq.sq_lock); return 0; } @@ -728,7 +726,6 @@ static int rxe_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr, int err = 0; struct rxe_qp *qp = to_rqp(ibqp); struct rxe_rq *rq = &qp->rq; - unsigned long flags; if (unlikely((qp_state(qp) < IB_QPS_INIT) || !qp->valid)) { *bad_wr = wr; @@ -742,7 +739,7 @@ static int rxe_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr, goto err1; } - spin_lock_irqsave(&rq->producer_lock, flags); + spin_lock_bh(&rq->producer_lock); while (wr) { err = post_one_recv(rq, wr); @@ -753,7 +750,7 @@ static int rxe_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr, wr = wr->next; } - spin_unlock_irqrestore(&rq->producer_lock, flags); + spin_unlock_bh(&rq->producer_lock); if (qp->resp.state == QP_STATE_ERROR) rxe_run_task(&qp->resp.task, 1); @@ -831,9 +828,8 @@ static int rxe_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc) int i; struct rxe_cq *cq = to_rcq(ibcq); struct rxe_cqe *cqe; - unsigned long flags; - spin_lock_irqsave(&cq->cq_lock, flags); + spin_lock_bh(&cq->cq_lock); for (i = 0; i < num_entries; i++) { cqe = queue_head(cq->queue, QUEUE_TYPE_FROM_DRIVER); if (!cqe) @@ -842,7 +838,7 @@ static int rxe_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc) memcpy(wc++, &cqe->ibwc, sizeof(*wc)); queue_advance_consumer(cq->queue, QUEUE_TYPE_FROM_DRIVER); } - spin_unlock_irqrestore(&cq->cq_lock, flags); + spin_unlock_bh(&cq->cq_lock); return i; } @@ -860,11 +856,10 @@ static int rxe_peek_cq(struct ib_cq *ibcq, int wc_cnt) static int rxe_req_notify_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags) { struct rxe_cq *cq = to_rcq(ibcq); - unsigned long irq_flags; int ret = 0; int empty; - spin_lock_irqsave(&cq->cq_lock, irq_flags); + spin_lock_bh(&cq->cq_lock); if (cq->notify != IB_CQ_NEXT_COMP) cq->notify = flags & IB_CQ_SOLICITED_MASK; @@ -873,7 +868,7 @@ static int rxe_req_notify_cq(struct ib_cq 
*ibcq, enum ib_cq_notify_flags flags) if ((flags & IB_CQ_REPORT_MISSED_EVENTS) && !empty) ret = 1; - spin_unlock_irqrestore(&cq->cq_lock, irq_flags); + spin_unlock_bh(&cq->cq_lock); return ret; }
From patchwork Wed Oct 20 22:05:45 2021
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12573451
X-Patchwork-Delegate: jgg@ziepe.ca
Received: from ubunto-21.tx.rr.com (2603-8081-140c-1a00-8d65-dc0b-4dcc-2f9b.res6.spectrum.com.
[2603:8081:140c:1a00:8d65:dc0b:4dcc:2f9b]) by smtp.gmail.com with ESMTPSA id v13sm725050otn.41.2021.10.20.15.07.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 20 Oct 2021 15:07:27 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next 2/7] RDMA/rxe: Cleanup rxe_pool_entry Date: Wed, 20 Oct 2021 17:05:45 -0500 Message-Id: <20211020220549.36145-3-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211020220549.36145-1-rpearsonhpe@gmail.com> References: <20211020220549.36145-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Currently three different names are used to describe rxe pool elements. They are referred to as entries, elems or pelems. This patch chooses one 'elem' and changes the other ones. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_cq.c | 4 +- drivers/infiniband/sw/rxe/rxe_loc.h | 10 ++-- drivers/infiniband/sw/rxe/rxe_mr.c | 6 +-- drivers/infiniband/sw/rxe/rxe_mw.c | 4 +- drivers/infiniband/sw/rxe/rxe_pool.c | 70 +++++++++++++-------------- drivers/infiniband/sw/rxe/rxe_pool.h | 38 +++++++-------- drivers/infiniband/sw/rxe/rxe_qp.c | 6 +-- drivers/infiniband/sw/rxe/rxe_srq.c | 6 +-- drivers/infiniband/sw/rxe/rxe_verbs.c | 2 +- drivers/infiniband/sw/rxe/rxe_verbs.h | 22 ++++----- 10 files changed, 84 insertions(+), 84 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c index dda510e4d904..79682a3b1357 100644 --- a/drivers/infiniband/sw/rxe/rxe_cq.c +++ b/drivers/infiniband/sw/rxe/rxe_cq.c @@ -139,9 +139,9 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited) return 0; } -void rxe_cq_cleanup(struct rxe_pool_entry *arg) +void rxe_cq_cleanup(struct rxe_pool_elem *arg) { - struct rxe_cq *cq = container_of(arg, typeof(*cq), pelem); + struct rxe_cq *cq = container_of(arg, typeof(*cq), elem); /* TODO get rid of this */ spin_lock_bh(&cq->cq_lock); diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index a689ee8386b8..2d073dfd99a1 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -35,7 +35,7 @@ int rxe_cq_resize_queue(struct rxe_cq *cq, int new_cqe, int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited); -void rxe_cq_cleanup(struct rxe_pool_entry *arg); +void rxe_cq_cleanup(struct rxe_pool_elem *arg); /* rxe_mcast.c */ int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp, @@ -80,7 +80,7 @@ int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey); int rxe_reg_fast_mr(struct rxe_qp *qp, struct rxe_send_wqe *wqe); int rxe_mr_set_page(struct ib_mr *ibmr, u64 addr); int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata); -void rxe_mr_cleanup(struct rxe_pool_entry *arg); +void rxe_mr_cleanup(struct rxe_pool_elem *arg); /* rxe_mw.c */ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata); @@ -88,7 +88,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw); int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe); int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey); struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey); -void rxe_mw_cleanup(struct rxe_pool_entry *arg); +void rxe_mw_cleanup(struct rxe_pool_elem *arg); /* rxe_net.c */ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av, @@ -121,7 +121,7 @@ void rxe_qp_error(struct rxe_qp *qp); void rxe_qp_destroy(struct rxe_qp *qp); -void 
rxe_qp_cleanup(struct rxe_pool_entry *arg); +void rxe_qp_cleanup(struct rxe_pool_elem *arg); static inline int qp_num(struct rxe_qp *qp) { @@ -177,7 +177,7 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq, int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq, struct ib_srq_attr *attr, enum ib_srq_attr_mask mask, struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata); -void rxe_srq_cleanup(struct rxe_pool_entry *arg); +void rxe_srq_cleanup(struct rxe_pool_elem *arg); void rxe_dealloc(struct ib_device *ib_dev); diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 6c50c8562fd8..63a36b7f2aa5 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -50,7 +50,7 @@ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length) static void rxe_mr_init(int access, struct rxe_mr *mr) { - u32 lkey = mr->pelem.index << 8 | rxe_get_next_key(-1); + u32 lkey = mr->elem.index << 8 | rxe_get_next_key(-1); u32 rkey = (access & IB_ACCESS_REMOTE) ? lkey : 0; /* set ibmr->l/rkey and also copy into private l/rkey @@ -704,9 +704,9 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) return 0; } -void rxe_mr_cleanup(struct rxe_pool_entry *arg) +void rxe_mr_cleanup(struct rxe_pool_elem *arg) { - struct rxe_mr *mr = container_of(arg, typeof(*mr), pelem); + struct rxe_mr *mr = container_of(arg, typeof(*mr), elem); ib_umem_release(mr->umem); diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c index 7c264599b3d4..cd690dad9d39 100644 --- a/drivers/infiniband/sw/rxe/rxe_mw.c +++ b/drivers/infiniband/sw/rxe/rxe_mw.c @@ -20,7 +20,7 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata) return ret; } - mw->rkey = ibmw->rkey = (mw->pelem.index << 8) | rxe_get_next_key(-1); + mw->rkey = ibmw->rkey = (mw->elem.index << 8) | rxe_get_next_key(-1); mw->state = (mw->ibmw.type == IB_MW_TYPE_2) ? 
RXE_MW_STATE_FREE : RXE_MW_STATE_VALID; spin_lock_init(&mw->lock); @@ -330,7 +330,7 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey) return mw; } -void rxe_mw_cleanup(struct rxe_pool_entry *elem) +void rxe_mw_cleanup(struct rxe_pool_elem *elem) { /* nothing to do currently */ } diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index 58f826ab3bc6..24ebd1b663c3 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -12,19 +12,19 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { [RXE_TYPE_UC] = { .name = "rxe-uc", .size = sizeof(struct rxe_ucontext), - .elem_offset = offsetof(struct rxe_ucontext, pelem), + .elem_offset = offsetof(struct rxe_ucontext, elem), .flags = RXE_POOL_NO_ALLOC, }, [RXE_TYPE_PD] = { .name = "rxe-pd", .size = sizeof(struct rxe_pd), - .elem_offset = offsetof(struct rxe_pd, pelem), + .elem_offset = offsetof(struct rxe_pd, elem), .flags = RXE_POOL_NO_ALLOC, }, [RXE_TYPE_AH] = { .name = "rxe-ah", .size = sizeof(struct rxe_ah), - .elem_offset = offsetof(struct rxe_ah, pelem), + .elem_offset = offsetof(struct rxe_ah, elem), .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, .min_index = RXE_MIN_AH_INDEX, .max_index = RXE_MAX_AH_INDEX, @@ -32,7 +32,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { [RXE_TYPE_SRQ] = { .name = "rxe-srq", .size = sizeof(struct rxe_srq), - .elem_offset = offsetof(struct rxe_srq, pelem), + .elem_offset = offsetof(struct rxe_srq, elem), .cleanup = rxe_srq_cleanup, .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, .min_index = RXE_MIN_SRQ_INDEX, @@ -41,7 +41,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { [RXE_TYPE_QP] = { .name = "rxe-qp", .size = sizeof(struct rxe_qp), - .elem_offset = offsetof(struct rxe_qp, pelem), + .elem_offset = offsetof(struct rxe_qp, elem), .cleanup = rxe_qp_cleanup, .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, .min_index = RXE_MIN_QP_INDEX, @@ -50,14 +50,14 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { [RXE_TYPE_CQ] = { .name = "rxe-cq", .size = sizeof(struct rxe_cq), - .elem_offset = offsetof(struct rxe_cq, pelem), + .elem_offset = offsetof(struct rxe_cq, elem), .flags = RXE_POOL_NO_ALLOC, .cleanup = rxe_cq_cleanup, }, [RXE_TYPE_MR] = { .name = "rxe-mr", .size = sizeof(struct rxe_mr), - .elem_offset = offsetof(struct rxe_mr, pelem), + .elem_offset = offsetof(struct rxe_mr, elem), .cleanup = rxe_mr_cleanup, .flags = RXE_POOL_INDEX, .max_index = RXE_MAX_MR_INDEX, @@ -66,7 +66,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { [RXE_TYPE_MW] = { .name = "rxe-mw", .size = sizeof(struct rxe_mw), - .elem_offset = offsetof(struct rxe_mw, pelem), + .elem_offset = offsetof(struct rxe_mw, elem), .cleanup = rxe_mw_cleanup, .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, .max_index = RXE_MAX_MW_INDEX, @@ -75,7 +75,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { [RXE_TYPE_MC_GRP] = { .name = "rxe-mc_grp", .size = sizeof(struct rxe_mc_grp), - .elem_offset = offsetof(struct rxe_mc_grp, pelem), + .elem_offset = offsetof(struct rxe_mc_grp, elem), .flags = RXE_POOL_KEY, .key_offset = offsetof(struct rxe_mc_grp, mgid), .key_size = sizeof(union ib_gid), @@ -180,15 +180,15 @@ static u32 alloc_index(struct rxe_pool *pool) return index + pool->index.min_index; } -static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_entry *new) +static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_elem *new) { struct 
rb_node **link = &pool->index.tree.rb_node; struct rb_node *parent = NULL; - struct rxe_pool_entry *elem; + struct rxe_pool_elem *elem; while (*link) { parent = *link; - elem = rb_entry(parent, struct rxe_pool_entry, index_node); + elem = rb_entry(parent, struct rxe_pool_elem, index_node); /* this can happen if memory was recycled and/or the * old object was not deleted from the pool index @@ -211,16 +211,16 @@ static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_entry *new) return 0; } -static int rxe_insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new) +static int rxe_insert_key(struct rxe_pool *pool, struct rxe_pool_elem *new) { struct rb_node **link = &pool->key.tree.rb_node; struct rb_node *parent = NULL; - struct rxe_pool_entry *elem; + struct rxe_pool_elem *elem; int cmp; while (*link) { parent = *link; - elem = rb_entry(parent, struct rxe_pool_entry, key_node); + elem = rb_entry(parent, struct rxe_pool_elem, key_node); cmp = memcmp((u8 *)elem + pool->key.key_offset, (u8 *)new + pool->key.key_offset, @@ -243,7 +243,7 @@ static int rxe_insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new) return 0; } -static int rxe_add_index(struct rxe_pool_entry *elem) +static int rxe_add_index(struct rxe_pool_elem *elem) { struct rxe_pool *pool = elem->pool; int err; @@ -257,7 +257,7 @@ static int rxe_add_index(struct rxe_pool_entry *elem) return err; } -static void rxe_drop_index(struct rxe_pool_entry *elem) +static void rxe_drop_index(struct rxe_pool_elem *elem) { struct rxe_pool *pool = elem->pool; @@ -267,7 +267,7 @@ static void rxe_drop_index(struct rxe_pool_entry *elem) static void *__rxe_alloc_locked(struct rxe_pool *pool) { - struct rxe_pool_entry *elem; + struct rxe_pool_elem *elem; void *obj; int err; @@ -278,7 +278,7 @@ static void *__rxe_alloc_locked(struct rxe_pool *pool) if (!obj) goto out_cnt; - elem = (struct rxe_pool_entry *)((u8 *)obj + pool->elem_offset); + elem = (struct rxe_pool_elem *)((u8 *)obj + pool->elem_offset); elem->pool = pool; elem->obj = obj; @@ -300,14 +300,14 @@ static void *__rxe_alloc_locked(struct rxe_pool *pool) void *rxe_alloc_locked(struct rxe_pool *pool) { - struct rxe_pool_entry *elem; + struct rxe_pool_elem *elem; void *obj; obj = __rxe_alloc_locked(pool); if (!obj) return NULL; - elem = (struct rxe_pool_entry *)(obj + pool->elem_offset); + elem = (struct rxe_pool_elem *)(obj + pool->elem_offset); refcount_set(&elem->refcnt, 1); return obj; @@ -315,7 +315,7 @@ void *rxe_alloc_locked(struct rxe_pool *pool) void *rxe_alloc_with_key_locked(struct rxe_pool *pool, void *key) { - struct rxe_pool_entry *elem; + struct rxe_pool_elem *elem; void *obj; int err; @@ -323,7 +323,7 @@ void *rxe_alloc_with_key_locked(struct rxe_pool *pool, void *key) if (!obj) return NULL; - elem = (struct rxe_pool_entry *)((u8 *)obj + pool->elem_offset); + elem = (struct rxe_pool_elem *)((u8 *)obj + pool->elem_offset); memcpy((u8 *)elem + pool->key.key_offset, key, pool->key.key_size); err = rxe_insert_key(pool, elem); if (err) { @@ -362,7 +362,7 @@ void *rxe_alloc_with_key(struct rxe_pool *pool, void *key) return obj; } -int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem) +int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) { int err; @@ -393,13 +393,13 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem) void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index) { struct rb_node *node; - struct rxe_pool_entry *elem; + struct rxe_pool_elem *elem; void *obj = NULL; node = 
pool->index.tree.rb_node; while (node) { - elem = rb_entry(node, struct rxe_pool_entry, index_node); + elem = rb_entry(node, struct rxe_pool_elem, index_node); if (elem->index > index) node = node->rb_left; @@ -429,14 +429,14 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key) { struct rb_node *node; - struct rxe_pool_entry *elem; + struct rxe_pool_elem *elem; void *obj = NULL; int cmp; node = pool->key.tree.rb_node; while (node) { - elem = rb_entry(node, struct rxe_pool_entry, key_node); + elem = rb_entry(node, struct rxe_pool_elem, key_node); cmp = memcmp((u8 *)elem + pool->key.key_offset, key, pool->key.key_size); @@ -466,7 +466,7 @@ void *rxe_pool_get_key(struct rxe_pool *pool, void *key) return obj; } -int __rxe_add_ref_locked(struct rxe_pool_entry *elem) +int __rxe_add_ref_locked(struct rxe_pool_elem *elem) { int done; @@ -477,7 +477,7 @@ int __rxe_add_ref_locked(struct rxe_pool_entry *elem) return -EINVAL; } -int __rxe_add_ref(struct rxe_pool_entry *elem) +int __rxe_add_ref(struct rxe_pool_elem *elem) { struct rxe_pool *pool = elem->pool; int ret; @@ -489,7 +489,7 @@ int __rxe_add_ref(struct rxe_pool_entry *elem) return ret; } -int __rxe_drop_ref_locked(struct rxe_pool_entry *elem) +int __rxe_drop_ref_locked(struct rxe_pool_elem *elem) { int done; @@ -500,7 +500,7 @@ int __rxe_drop_ref_locked(struct rxe_pool_entry *elem) return -EINVAL; } -int __rxe_drop_ref(struct rxe_pool_entry *elem) +int __rxe_drop_ref(struct rxe_pool_elem *elem) { struct rxe_pool *pool = elem->pool; int ret; @@ -512,7 +512,7 @@ int __rxe_drop_ref(struct rxe_pool_entry *elem) return ret; } -static int __rxe_fini(struct rxe_pool_entry *elem) +static int __rxe_fini(struct rxe_pool_elem *elem) { struct rxe_pool *pool = elem->pool; int done; @@ -533,7 +533,7 @@ static int __rxe_fini(struct rxe_pool_entry *elem) /* can only be used by pools that have a cleanup * routine that can run while holding a spinlock */ -int __rxe_fini_ref_locked(struct rxe_pool_entry *elem) +int __rxe_fini_ref_locked(struct rxe_pool_elem *elem) { struct rxe_pool *pool = elem->pool; int ret; @@ -550,7 +550,7 @@ int __rxe_fini_ref_locked(struct rxe_pool_entry *elem) return ret; } -int __rxe_fini_ref(struct rxe_pool_entry *elem) +int __rxe_fini_ref(struct rxe_pool_elem *elem) { struct rxe_pool *pool = elem->pool; int ret; diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index f04df69c52ba..3e78c275c7c5 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -31,13 +31,13 @@ enum rxe_elem_type { RXE_NUM_TYPES, /* keep me last */ }; -struct rxe_pool_entry; +struct rxe_pool_elem; struct rxe_type_info { const char *name; size_t size; size_t elem_offset; - void (*cleanup)(struct rxe_pool_entry *obj); + void (*cleanup)(struct rxe_pool_elem *obj); enum rxe_pool_flags flags; u32 max_index; u32 min_index; @@ -45,7 +45,7 @@ struct rxe_type_info { size_t key_size; }; -struct rxe_pool_entry { +struct rxe_pool_elem { struct rxe_pool *pool; void *obj; refcount_t refcnt; @@ -63,7 +63,7 @@ struct rxe_pool { struct rxe_dev *rxe; const char *name; rwlock_t pool_lock; /* protects pool add/del/search */ - void (*cleanup)(struct rxe_pool_entry *obj); + void (*cleanup)(struct rxe_pool_elem *obj); enum rxe_pool_flags flags; enum rxe_elem_type type; @@ -110,9 +110,9 @@ void *rxe_alloc_with_key_locked(struct rxe_pool *pool, void *key); void *rxe_alloc_with_key(struct rxe_pool *pool, void *key); /* connect already 
allocated object to pool */ -int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_entry *elem); +int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem); -#define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->pelem) +#define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->elem) /* lookup an indexed object from index holding and not holding the pool_lock. * takes a reference on object @@ -129,32 +129,32 @@ void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key); void *rxe_pool_get_key(struct rxe_pool *pool, void *key); /* take a reference on an object */ -int __rxe_add_ref_locked(struct rxe_pool_entry *elem); +int __rxe_add_ref_locked(struct rxe_pool_elem *elem); -#define rxe_add_ref_locked(obj) __rxe_add_ref_locked(&(obj)->pelem) +#define rxe_add_ref_locked(obj) __rxe_add_ref_locked(&(obj)->elem) -int __rxe_add_ref(struct rxe_pool_entry *elem); +int __rxe_add_ref(struct rxe_pool_elem *elem); -#define rxe_add_ref(obj) __rxe_add_ref(&(obj)->pelem) +#define rxe_add_ref(obj) __rxe_add_ref(&(obj)->elem) /* drop a reference on an object */ -int __rxe_drop_ref_locked(struct rxe_pool_entry *elem); +int __rxe_drop_ref_locked(struct rxe_pool_elem *elem); -#define rxe_drop_ref_locked(obj) __rxe_drop_ref_locked(&(obj)->pelem) +#define rxe_drop_ref_locked(obj) __rxe_drop_ref_locked(&(obj)->elem) -int __rxe_drop_ref(struct rxe_pool_entry *elem); +int __rxe_drop_ref(struct rxe_pool_elem *elem); -#define rxe_drop_ref(obj) __rxe_drop_ref(&(obj)->pelem) +#define rxe_drop_ref(obj) __rxe_drop_ref(&(obj)->elem) /* drop last reference on an object */ -int __rxe_fini_ref_locked(struct rxe_pool_entry *elem); +int __rxe_fini_ref_locked(struct rxe_pool_elem *elem); -#define rxe_fini_ref_locked(obj) __rxe_fini_ref_locked(&(obj)->pelem) +#define rxe_fini_ref_locked(obj) __rxe_fini_ref_locked(&(obj)->elem) -int __rxe_fini_ref(struct rxe_pool_entry *elem); +int __rxe_fini_ref(struct rxe_pool_elem *elem); -#define rxe_fini_ref(obj) __rxe_fini_ref(&(obj)->pelem) +#define rxe_fini_ref(obj) __rxe_fini_ref(&(obj)->elem) -#define rxe_read_ref(obj) refcount_read(&(obj)->pelem.refcnt) +#define rxe_read_ref(obj) refcount_read(&(obj)->elem.refcnt) #endif /* RXE_POOL_H */ diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c index 23b4ffe23c4f..f1b89585d6e0 100644 --- a/drivers/infiniband/sw/rxe/rxe_qp.c +++ b/drivers/infiniband/sw/rxe/rxe_qp.c @@ -163,7 +163,7 @@ static void rxe_qp_init_misc(struct rxe_dev *rxe, struct rxe_qp *qp, qp->attr.path_mtu = 1; qp->mtu = ib_mtu_enum_to_int(qp->attr.path_mtu); - qpn = qp->pelem.index; + qpn = qp->elem.index; port = &rxe->port; switch (init->qp_type) { @@ -825,9 +825,9 @@ static void rxe_qp_do_cleanup(struct work_struct *work) } /* called when the last reference to the qp is dropped */ -void rxe_qp_cleanup(struct rxe_pool_entry *arg) +void rxe_qp_cleanup(struct rxe_pool_elem *arg) { - struct rxe_qp *qp = container_of(arg, typeof(*qp), pelem); + struct rxe_qp *qp = container_of(arg, typeof(*qp), elem); rxe_qp_destroy(qp); execute_in_process_context(rxe_qp_do_cleanup, &qp->cleanup_work); diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c index bb00643a2929..66273342da9f 100644 --- a/drivers/infiniband/sw/rxe/rxe_srq.c +++ b/drivers/infiniband/sw/rxe/rxe_srq.c @@ -83,7 +83,7 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq, srq->ibsrq.event_handler = init->event_handler; srq->ibsrq.srq_context = init->srq_context; srq->limit = init->attr.srq_limit; - 
srq->srq_num = srq->pelem.index; + srq->srq_num = srq->elem.index; srq->rq.max_wr = init->attr.max_wr; srq->rq.max_sge = init->attr.max_sge; @@ -155,9 +155,9 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq, return err; } -void rxe_srq_cleanup(struct rxe_pool_entry *arg) +void rxe_srq_cleanup(struct rxe_pool_elem *arg) { - struct rxe_srq *srq = container_of(arg, typeof(*srq), pelem); + struct rxe_srq *srq = container_of(arg, typeof(*srq), elem); if (srq->rq.queue) rxe_queue_cleanup(srq->rq.queue); diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 2b0ba33cff31..eea89873215d 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -180,7 +180,7 @@ static int rxe_create_ah(struct ib_ah *ibah, return err; /* create index > 0 */ - ah->ah_num = ah->pelem.index; + ah->ah_num = ah->elem.index; if (uresp) { /* only if new user provider */ diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 0cfbef7a36c9..52e8752d2983 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -35,17 +35,17 @@ static inline int psn_compare(u32 psn_a, u32 psn_b) struct rxe_ucontext { struct ib_ucontext ibuc; - struct rxe_pool_entry pelem; + struct rxe_pool_elem elem; }; struct rxe_pd { struct ib_pd ibpd; - struct rxe_pool_entry pelem; + struct rxe_pool_elem elem; }; struct rxe_ah { struct ib_ah ibah; - struct rxe_pool_entry pelem; + struct rxe_pool_elem elem; struct rxe_av av; bool is_user; int ah_num; @@ -60,7 +60,7 @@ struct rxe_cqe { struct rxe_cq { struct ib_cq ibcq; - struct rxe_pool_entry pelem; + struct rxe_pool_elem elem; struct rxe_queue *queue; spinlock_t cq_lock; u8 notify; @@ -95,7 +95,7 @@ struct rxe_rq { struct rxe_srq { struct ib_srq ibsrq; - struct rxe_pool_entry pelem; + struct rxe_pool_elem elem; struct rxe_pd *pd; struct rxe_rq rq; u32 srq_num; @@ -208,7 +208,7 @@ struct rxe_resp_info { struct rxe_qp { struct ib_qp ibqp; - struct rxe_pool_entry pelem; + struct rxe_pool_elem elem; struct ib_qp_attr attr; unsigned int valid; unsigned int mtu; @@ -308,7 +308,7 @@ static inline int rkey_is_mw(u32 rkey) } struct rxe_mr { - struct rxe_pool_entry pelem; + struct rxe_pool_elem elem; struct ib_mr ibmr; struct ib_umem *umem; @@ -341,7 +341,7 @@ enum rxe_mw_state { struct rxe_mw { struct ib_mw ibmw; - struct rxe_pool_entry pelem; + struct rxe_pool_elem elem; spinlock_t lock; enum rxe_mw_state state; struct rxe_qp *qp; /* Type 2 only */ @@ -353,7 +353,7 @@ struct rxe_mw { }; struct rxe_mc_grp { - struct rxe_pool_entry pelem; + struct rxe_pool_elem elem; spinlock_t mcg_lock; /* guard group */ struct rxe_dev *rxe; struct list_head qp_list; @@ -364,7 +364,7 @@ struct rxe_mc_grp { }; struct rxe_mc_elem { - struct rxe_pool_entry pelem; + struct rxe_pool_elem elem; struct list_head qp_list; struct list_head grp_list; struct rxe_qp *qp; @@ -484,6 +484,6 @@ static inline struct rxe_pd *rxe_mw_pd(struct rxe_mw *mw) int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name); -void rxe_mc_cleanup(struct rxe_pool_entry *arg); +void rxe_mc_cleanup(struct rxe_pool_elem *arg); #endif /* RXE_VERBS_H */ From patchwork Wed Oct 20 22:05:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12573455 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
aws-us-west-2-korg-lkml-1.web.codeaurora.org
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 3/7] RDMA/rxe: Add xarray support to rxe_pool.c
Date: Wed, 20 Oct 2021 17:05:46 -0500
Message-Id: <20211020220549.36145-4-rpearsonhpe@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211020220549.36145-1-rpearsonhpe@gmail.com>
References: <20211020220549.36145-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Currently the rxe driver uses red-black trees to add indices and keys to the rxe object pool.
Linux xarrays provide a better way to implement the same functionality for indices but not keys. This patch adds a second alternative to adding indices based on cyclic allocating xarrays. The AH pool is modified to hold either xarrays or red-black trees. The code is tested for both options. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_pool.c | 100 ++++++++++++++++++++++----- drivers/infiniband/sw/rxe/rxe_pool.h | 9 +++ 2 files changed, 92 insertions(+), 17 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index 24ebd1b663c3..ba5c600fa9e8 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -25,7 +25,8 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { .name = "rxe-ah", .size = sizeof(struct rxe_ah), .elem_offset = offsetof(struct rxe_ah, elem), - .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, + //.flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, + .flags = RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC, .min_index = RXE_MIN_AH_INDEX, .max_index = RXE_MAX_AH_INDEX, }, @@ -128,15 +129,20 @@ int rxe_pool_init( pool->elem_size = ALIGN(info->size, RXE_POOL_ALIGN); pool->elem_offset = info->elem_offset; pool->flags = info->flags; - pool->index.tree = RB_ROOT; - pool->key.tree = RB_ROOT; pool->cleanup = info->cleanup; atomic_set(&pool->num_elem, 0); rwlock_init(&pool->pool_lock); + if (info->flags & RXE_POOL_XARRAY) { + xa_init_flags(&pool->xarray.xa, XA_FLAGS_ALLOC); + pool->xarray.limit.max = info->max_index; + pool->xarray.limit.min = info->min_index; + } + if (info->flags & RXE_POOL_INDEX) { + pool->index.tree = RB_ROOT; err = rxe_pool_init_index(pool, info->max_index, info->min_index); if (err) @@ -144,6 +150,7 @@ int rxe_pool_init( } if (info->flags & RXE_POOL_KEY) { + pool->key.tree = RB_ROOT; pool->key.key_offset = info->key_offset; pool->key.key_size = info->key_size; } @@ -158,7 +165,8 @@ void rxe_pool_cleanup(struct rxe_pool *pool) pr_warn("%s pool destroyed with unfree'd elem\n", pool->name); - kfree(pool->index.table); + if (pool->flags & RXE_POOL_INDEX) + kfree(pool->index.table); } /* should never fail because there are at least as many indices as @@ -272,28 +280,35 @@ static void *__rxe_alloc_locked(struct rxe_pool *pool) int err; if (atomic_inc_return(&pool->num_elem) > pool->max_elem) - goto out_cnt; + goto err; obj = kzalloc(pool->elem_size, GFP_ATOMIC); if (!obj) - goto out_cnt; + goto err; elem = (struct rxe_pool_elem *)((u8 *)obj + pool->elem_offset); elem->pool = pool; elem->obj = obj; + if (pool->flags & RXE_POOL_XARRAY) { + err = xa_alloc_cyclic_bh(&pool->xarray.xa, &elem->index, elem, + pool->xarray.limit, + &pool->xarray.next, GFP_KERNEL); + if (err) + goto err; + } + if (pool->flags & RXE_POOL_INDEX) { err = rxe_add_index(elem); - if (err) { - kfree(obj); - goto out_cnt; - } + if (err) + goto err; } return obj; -out_cnt: +err: + kfree(obj); atomic_dec(&pool->num_elem); return NULL; } @@ -368,15 +383,23 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) write_lock_bh(&pool->pool_lock); if (atomic_inc_return(&pool->num_elem) > pool->max_elem) - goto out_cnt; + goto err; elem->pool = pool; elem->obj = (u8 *)elem - pool->elem_offset; + if (pool->flags & RXE_POOL_XARRAY) { + err = xa_alloc_cyclic_bh(&pool->xarray.xa, &elem->index, elem, + pool->xarray.limit, + &pool->xarray.next, GFP_KERNEL); + if (err) + goto err; + } + if (pool->flags & RXE_POOL_INDEX) { err = rxe_add_index(elem); if (err) - goto out_cnt; + goto err; } 
refcount_set(&elem->refcnt, 1); @@ -384,13 +407,13 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) return 0; -out_cnt: +err: atomic_dec(&pool->num_elem); write_unlock_bh(&pool->pool_lock); return -EINVAL; } -void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index) +static void *__rxe_get_index_locked(struct rxe_pool *pool, u32 index) { struct rb_node *node; struct rxe_pool_elem *elem; @@ -415,17 +438,58 @@ void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index) return obj; } -void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) +static void *__rxe_get_index(struct rxe_pool *pool, u32 index) +{ + void *obj; + + read_lock_bh(&pool->pool_lock); + obj = __rxe_get_index_locked(pool, index); + read_unlock_bh(&pool->pool_lock); + + return obj; +} + +static void *__rxe_get_xarray_locked(struct rxe_pool *pool, u32 index) +{ + struct rxe_pool_elem *elem; + void *obj = NULL; + + elem = xa_load(&pool->xarray.xa, index); + if (elem && refcount_inc_not_zero(&elem->refcnt)) + obj = elem->obj; + + return obj; +} + +static void *__rxe_get_xarray(struct rxe_pool *pool, u32 index) { void *obj; read_lock_bh(&pool->pool_lock); - obj = rxe_pool_get_index_locked(pool, index); + obj = __rxe_get_xarray_locked(pool, index); read_unlock_bh(&pool->pool_lock); return obj; } +void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index) +{ + if (pool->flags & RXE_POOL_XARRAY) + return __rxe_get_xarray_locked(pool, index); + if (pool->flags & RXE_POOL_INDEX) + return __rxe_get_index_locked(pool, index); + return NULL; +} + +void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) +{ + if (pool->flags & RXE_POOL_XARRAY) + return __rxe_get_xarray(pool, index); + if (pool->flags & RXE_POOL_INDEX) + return __rxe_get_index(pool, index); + return NULL; +} + void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key) { struct rb_node *node; @@ -519,6 +583,8 @@ static int __rxe_fini(struct rxe_pool_elem *elem) done = refcount_dec_if_one(&elem->refcnt); if (done) { + if (pool->flags & RXE_POOL_XARRAY) + xa_erase(&pool->xarray.xa, elem->index); if (pool->flags & RXE_POOL_INDEX) rxe_drop_index(elem); if (pool->flags & RXE_POOL_KEY) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index 3e78c275c7c5..f9c4f09cdcc9 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -8,6 +8,7 @@ #define RXE_POOL_H #include +#include #define RXE_POOL_ALIGN (16) #define RXE_POOL_CACHE_FLAGS (0) @@ -15,6 +16,7 @@ enum rxe_pool_flags { RXE_POOL_INDEX = BIT(1), RXE_POOL_KEY = BIT(2), + RXE_POOL_XARRAY = BIT(3), RXE_POOL_NO_ALLOC = BIT(4), }; @@ -72,6 +74,13 @@ struct rxe_pool { size_t elem_size; size_t elem_offset; + /* only used if xarray */ + struct { + struct xarray xa; + struct xa_limit limit; + u32 next; + } xarray; + /* only used if indexed */ struct { struct rb_root tree; From patchwork Wed Oct 20 22:05:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12573453 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 41B54C4332F for ; Wed, 20 Oct 2021 22:07:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 219AD604AC for ; Wed, 20 
Oct 2021 22:07:32 +0000 (UTC) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next 4/7] RDMA/rxe: Replace pool_lock by xa_lock Date: Wed, 20 Oct 2021 17:05:47 -0500 Message-Id: <20211020220549.36145-5-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211020220549.36145-1-rpearsonhpe@gmail.com> References: <20211020220549.36145-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org
In rxe_pool.c, xa_alloc_bh(), xa_erase_bh() and their variants already take the xarray spinlock internally, i.e. they expand to spin_lock_bh(); __xa_alloc()/__xa_erase(); spin_unlock_bh(). Holding pool_lock around them therefore double locks. Replace pool_lock by xa_lock, use xa_lock in all the places that pool_lock previously protected, and drop the redundant inner locking. This is a performance improvement.
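[Editor's note: for readers less familiar with the xarray locking API, here is a minimal sketch of the two patterns the message contrasts. It is not the driver code itself; demo_pool and the demo_* functions are made-up names, and GFP_ATOMIC is used only to keep the example safe under a spinlock (the driver passes GFP_KERNEL).]

#include <linux/spinlock.h>
#include <linux/xarray.h>

struct demo_pool {
	spinlock_t lock;		/* stand-in for the old pool_lock */
	struct xarray xa;
	struct xa_limit limit;
	u32 next;
};

static void demo_pool_init(struct demo_pool *p, u32 min, u32 max)
{
	spin_lock_init(&p->lock);
	xa_init_flags(&p->xa, XA_FLAGS_ALLOC);	/* needed for xa_alloc_*() */
	p->limit.min = min;
	p->limit.max = max;
	p->next = 0;
}

/* Before: pool_lock plus the lock hidden inside xa_alloc_cyclic_bh(),
 * which does xa_lock_bh(); __xa_alloc_cyclic(); xa_unlock_bh(); so every
 * allocation acquires two bh-disabling locks.
 */
static int demo_alloc_double_locked(struct demo_pool *p, void *obj, u32 *id)
{
	int err;

	spin_lock_bh(&p->lock);					/* lock #1 */
	err = xa_alloc_cyclic_bh(&p->xa, id, obj, p->limit,	/* lock #2 */
				 &p->next, GFP_ATOMIC);
	spin_unlock_bh(&p->lock);
	return err;
}

/* After: the xarray's own spinlock doubles as the pool lock, so the
 * locked helper __xa_alloc_cyclic() runs under a single xa_lock_bh().
 */
static int demo_alloc_single_locked(struct demo_pool *p, void *obj, u32 *id)
{
	int err;

	xa_lock_bh(&p->xa);
	err = __xa_alloc_cyclic(&p->xa, id, obj, p->limit,
				&p->next, GFP_ATOMIC);
	xa_unlock_bh(&p->xa);
	return err;
}

Lookup follows the same idea: __rxe_get_xarray() in this series does xa_load() plus refcount_inc_not_zero() under the same xa_lock_bh().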
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_pool.c | 54 ++++++++++++++-------------- 1 file changed, 26 insertions(+), 28 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index ba5c600fa9e8..1b7269dd6d9e 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -133,8 +133,6 @@ int rxe_pool_init( atomic_set(&pool->num_elem, 0); - rwlock_init(&pool->pool_lock); - if (info->flags & RXE_POOL_XARRAY) { xa_init_flags(&pool->xarray.xa, XA_FLAGS_ALLOC); pool->xarray.limit.max = info->max_index; @@ -292,9 +290,9 @@ static void *__rxe_alloc_locked(struct rxe_pool *pool) elem->obj = obj; if (pool->flags & RXE_POOL_XARRAY) { - err = xa_alloc_cyclic_bh(&pool->xarray.xa, &elem->index, elem, - pool->xarray.limit, - &pool->xarray.next, GFP_KERNEL); + err = __xa_alloc_cyclic(&pool->xarray.xa, &elem->index, elem, + pool->xarray.limit, + &pool->xarray.next, GFP_KERNEL); if (err) goto err; } @@ -359,9 +357,9 @@ void *rxe_alloc(struct rxe_pool *pool) { void *obj; - write_lock_bh(&pool->pool_lock); + xa_lock_bh(&pool->xarray.xa); obj = rxe_alloc_locked(pool); - write_unlock_bh(&pool->pool_lock); + xa_unlock_bh(&pool->xarray.xa); return obj; } @@ -370,9 +368,9 @@ void *rxe_alloc_with_key(struct rxe_pool *pool, void *key) { void *obj; - write_lock_bh(&pool->pool_lock); + xa_lock_bh(&pool->xarray.xa); obj = rxe_alloc_with_key_locked(pool, key); - write_unlock_bh(&pool->pool_lock); + xa_unlock_bh(&pool->xarray.xa); return obj; } @@ -381,7 +379,7 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) { int err; - write_lock_bh(&pool->pool_lock); + xa_lock_bh(&pool->xarray.xa); if (atomic_inc_return(&pool->num_elem) > pool->max_elem) goto err; @@ -389,9 +387,9 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) elem->obj = (u8 *)elem - pool->elem_offset; if (pool->flags & RXE_POOL_XARRAY) { - err = xa_alloc_cyclic_bh(&pool->xarray.xa, &elem->index, elem, - pool->xarray.limit, - &pool->xarray.next, GFP_KERNEL); + err = __xa_alloc_cyclic(&pool->xarray.xa, &elem->index, elem, + pool->xarray.limit, + &pool->xarray.next, GFP_KERNEL); if (err) goto err; } @@ -403,13 +401,13 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) } refcount_set(&elem->refcnt, 1); - write_unlock_bh(&pool->pool_lock); + xa_unlock_bh(&pool->xarray.xa); return 0; err: atomic_dec(&pool->num_elem); - write_unlock_bh(&pool->pool_lock); + xa_unlock_bh(&pool->xarray.xa); return -EINVAL; } @@ -442,9 +440,9 @@ static void *__rxe_get_index(struct rxe_pool *pool, u32 index) { void *obj; - read_lock_bh(&pool->pool_lock); + xa_lock_bh(&pool->xarray.xa); obj = __rxe_get_index_locked(pool, index); - read_unlock_bh(&pool->pool_lock); + xa_unlock_bh(&pool->xarray.xa); return obj; } @@ -465,9 +463,9 @@ static void *__rxe_get_xarray(struct rxe_pool *pool, u32 index) { void *obj; - read_lock_bh(&pool->pool_lock); + xa_lock_bh(&pool->xarray.xa); obj = __rxe_get_xarray_locked(pool, index); - read_unlock_bh(&pool->pool_lock); + xa_unlock_bh(&pool->xarray.xa); return obj; } @@ -523,9 +521,9 @@ void *rxe_pool_get_key(struct rxe_pool *pool, void *key) { void *obj; - read_lock_bh(&pool->pool_lock); + xa_lock_bh(&pool->xarray.xa); obj = rxe_pool_get_key_locked(pool, key); - read_unlock_bh(&pool->pool_lock); + xa_unlock_bh(&pool->xarray.xa); return obj; } @@ -546,9 +544,9 @@ int __rxe_add_ref(struct rxe_pool_elem *elem) struct rxe_pool *pool = elem->pool; int ret; - read_lock_bh(&pool->pool_lock); + 
xa_lock_bh(&pool->xarray.xa); ret = __rxe_add_ref_locked(elem); - read_unlock_bh(&pool->pool_lock); + xa_unlock_bh(&pool->xarray.xa); return ret; } @@ -569,9 +567,9 @@ int __rxe_drop_ref(struct rxe_pool_elem *elem) struct rxe_pool *pool = elem->pool; int ret; - read_lock_bh(&pool->pool_lock); + xa_lock_bh(&pool->xarray.xa); ret = __rxe_drop_ref_locked(elem); - read_unlock_bh(&pool->pool_lock); + xa_unlock_bh(&pool->xarray.xa); return ret; } @@ -584,7 +582,7 @@ static int __rxe_fini(struct rxe_pool_elem *elem) done = refcount_dec_if_one(&elem->refcnt); if (done) { if (pool->flags & RXE_POOL_XARRAY) - xa_erase(&pool->xarray.xa, elem->index); + __xa_erase(&pool->xarray.xa, elem->index); if (pool->flags & RXE_POOL_INDEX) rxe_drop_index(elem); if (pool->flags & RXE_POOL_KEY) @@ -621,9 +619,9 @@ int __rxe_fini_ref(struct rxe_pool_elem *elem) struct rxe_pool *pool = elem->pool; int ret; - read_lock_bh(&pool->pool_lock); + xa_lock_bh(&pool->xarray.xa); ret = __rxe_fini(elem); - read_unlock_bh(&pool->pool_lock); + xa_unlock_bh(&pool->xarray.xa); if (!ret) { if (pool->cleanup) From patchwork Wed Oct 20 22:05:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12573457 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D4F80C433EF for ; Wed, 20 Oct 2021 22:07:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B1B1660551 for ; Wed, 20 Oct 2021 22:07:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230466AbhJTWJq (ORCPT ); Wed, 20 Oct 2021 18:09:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44256 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229842AbhJTWJp (ORCPT ); Wed, 20 Oct 2021 18:09:45 -0400 Received: from mail-ot1-x333.google.com (mail-ot1-x333.google.com [IPv6:2607:f8b0:4864:20::333]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A46FAC06161C for ; Wed, 20 Oct 2021 15:07:30 -0700 (PDT) Received: by mail-ot1-x333.google.com with SMTP id e59-20020a9d01c1000000b00552c91a99f7so8759028ote.6 for ; Wed, 20 Oct 2021 15:07:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=UPersv+1bSjKGYzblCBC5jYPMG1NT6+omf3MmqVlrak=; b=fQWDxNSQxAtrw2O8p6IXZSNk47sPNuS3D9uKF2h1pvb/bB4PC86je+lZyUuc2cP4dd 9v1dIUyg2CE9dWj6AiDzMuqH4PJz6u3Tj6h69opQ8Bo0iby37k4kwgZdRE9stYiOSg6g aMsFeeNH/6r0Q2NfSUG0BdQAygDvN05QMD2TWEwqsh/2G41Set40yfbU39b5lFHvvlZa cXeK4cvc8YJelNdcpAGzq3fk0WXPsbd29C1lBO/zub8JgimHMgs7+zRMpSKPeDEr8OyY NQdHp/gxe6nTETcEN4kAAEooU9fblhP790EI0+iOiu0yJTsABNkNFYeQkPZDRgCaDyd5 zLOg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=UPersv+1bSjKGYzblCBC5jYPMG1NT6+omf3MmqVlrak=; b=i64CpPWtM1pqaufcAScEU3PtlOQCOPHtfB9R5FFB4rsrdPxZXGKudOxCGCa8IFNqjs jBAHI5B3GbarUnROQxqvKEsMk+sLDLqKTTN5D7H9ELskcLNCaLEGovhQrzsZ1KroJTnW kIZCd3Emd20niHje+C5LuT1zZ6IH/y07WIw0mNNkyFqOc17BybBB6ix0DSCddXsW6zUI 
N26dzlm4PuARXYYOudTN2Fs9LAAdhByLeRlxwgGvxlir8OklAHhitIFMnvecsKlkKlnR 90GqgxMMN1iMJVWV0LZjVTYjmbLLva2Ns1KgjF8h+DV6/zYEALZsx8nCudlVH863ONir ahAQ== X-Gm-Message-State: AOAM531ELwCwDM7FRiZ/Ui63xoZO1QMAbXgGQPs7no81TPoR+p4gn2Dj JgBR88Hm58xsglZCRenDHmg/DCWEP7M= X-Google-Smtp-Source: ABdhPJzEOq1UHNqS/qgBVtfLlX540dycaJGrDK1iX90w/8WMGr46iWWKZz3/WComUCdIbnsG0RiRbA== X-Received: by 2002:a9d:5f85:: with SMTP id g5mr1512744oti.139.1634767650096; Wed, 20 Oct 2021 15:07:30 -0700 (PDT) Received: from ubunto-21.tx.rr.com (2603-8081-140c-1a00-8d65-dc0b-4dcc-2f9b.res6.spectrum.com. [2603:8081:140c:1a00:8d65:dc0b:4dcc:2f9b]) by smtp.gmail.com with ESMTPSA id v13sm725050otn.41.2021.10.20.15.07.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 20 Oct 2021 15:07:29 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next 5/7] RDMA/rxe: Convert remaining pools to xarrays Date: Wed, 20 Oct 2021 17:05:48 -0500 Message-Id: <20211020220549.36145-6-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211020220549.36145-1-rpearsonhpe@gmail.com> References: <20211020220549.36145-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org This patch converts the remaining pools with RXE_POOL_INDEX set to RXE_POOL_XARRAY. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_pool.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index 1b7269dd6d9e..364449c284a3 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -25,7 +25,6 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { .name = "rxe-ah", .size = sizeof(struct rxe_ah), .elem_offset = offsetof(struct rxe_ah, elem), - //.flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, .flags = RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC, .min_index = RXE_MIN_AH_INDEX, .max_index = RXE_MAX_AH_INDEX, @@ -35,7 +34,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { .size = sizeof(struct rxe_srq), .elem_offset = offsetof(struct rxe_srq, elem), .cleanup = rxe_srq_cleanup, - .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, + .flags = RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC, .min_index = RXE_MIN_SRQ_INDEX, .max_index = RXE_MAX_SRQ_INDEX, }, @@ -44,7 +43,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { .size = sizeof(struct rxe_qp), .elem_offset = offsetof(struct rxe_qp, elem), .cleanup = rxe_qp_cleanup, - .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, + .flags = RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC, .min_index = RXE_MIN_QP_INDEX, .max_index = RXE_MAX_QP_INDEX, }, @@ -60,7 +59,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { .size = sizeof(struct rxe_mr), .elem_offset = offsetof(struct rxe_mr, elem), .cleanup = rxe_mr_cleanup, - .flags = RXE_POOL_INDEX, + .flags = RXE_POOL_XARRAY, .max_index = RXE_MAX_MR_INDEX, .min_index = RXE_MIN_MR_INDEX, }, @@ -69,7 +68,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { .size = sizeof(struct rxe_mw), .elem_offset = offsetof(struct rxe_mw, elem), .cleanup = rxe_mw_cleanup, - .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, + .flags = RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC, .max_index = RXE_MAX_MW_INDEX, .min_index = RXE_MIN_MW_INDEX, }, From patchwork Wed Oct 20 22:05:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12573461 X-Patchwork-Delegate: jgg@ziepe.ca Received: from ubunto-21.tx.rr.com (2603-8081-140c-1a00-8d65-dc0b-4dcc-2f9b.res6.spectrum.com. 
[2603:8081:140c:1a00:8d65:dc0b:4dcc:2f9b]) by smtp.gmail.com with ESMTPSA id v13sm725050otn.41.2021.10.20.15.07.30 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 20 Oct 2021 15:07:30 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next 6/7] RDMA/rxe: Remove old index code from rxe_pool.c Date: Wed, 20 Oct 2021 17:05:49 -0500 Message-Id: <20211020220549.36145-7-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211020220549.36145-1-rpearsonhpe@gmail.com> References: <20211020220549.36145-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Remove all red-black tree based index code from rxe_pool.c. Change some functions from int to void as errors are no longer returned. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe.c | 86 ++------------ drivers/infiniband/sw/rxe/rxe_pool.c | 171 +-------------------------- drivers/infiniband/sw/rxe/rxe_pool.h | 14 +-- 3 files changed, 15 insertions(+), 256 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index 4298a1d20ad5..804c5630ed55 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -115,90 +115,37 @@ static void rxe_init_ports(struct rxe_dev *rxe) } /* init pools of managed objects */ -static int rxe_init_pools(struct rxe_dev *rxe) +static void rxe_init_pools(struct rxe_dev *rxe) { - int err; - - err = rxe_pool_init(rxe, &rxe->uc_pool, RXE_TYPE_UC, + rxe_pool_init(rxe, &rxe->uc_pool, RXE_TYPE_UC, rxe->max_ucontext); - if (err) - goto err1; - - err = rxe_pool_init(rxe, &rxe->pd_pool, RXE_TYPE_PD, + rxe_pool_init(rxe, &rxe->pd_pool, RXE_TYPE_PD, rxe->attr.max_pd); - if (err) - goto err2; - - err = rxe_pool_init(rxe, &rxe->ah_pool, RXE_TYPE_AH, + rxe_pool_init(rxe, &rxe->ah_pool, RXE_TYPE_AH, rxe->attr.max_ah); - if (err) - goto err3; - - err = rxe_pool_init(rxe, &rxe->srq_pool, RXE_TYPE_SRQ, + rxe_pool_init(rxe, &rxe->srq_pool, RXE_TYPE_SRQ, rxe->attr.max_srq); - if (err) - goto err4; - - err = rxe_pool_init(rxe, &rxe->qp_pool, RXE_TYPE_QP, + rxe_pool_init(rxe, &rxe->qp_pool, RXE_TYPE_QP, rxe->attr.max_qp); - if (err) - goto err5; - - err = rxe_pool_init(rxe, &rxe->cq_pool, RXE_TYPE_CQ, + rxe_pool_init(rxe, &rxe->cq_pool, RXE_TYPE_CQ, rxe->attr.max_cq); - if (err) - goto err6; - - err = rxe_pool_init(rxe, &rxe->mr_pool, RXE_TYPE_MR, + rxe_pool_init(rxe, &rxe->mr_pool, RXE_TYPE_MR, rxe->attr.max_mr); - if (err) - goto err7; - - err = rxe_pool_init(rxe, &rxe->mw_pool, RXE_TYPE_MW, + rxe_pool_init(rxe, &rxe->mw_pool, RXE_TYPE_MW, rxe->attr.max_mw); - if (err) - goto err8; - - err = rxe_pool_init(rxe, &rxe->mc_grp_pool, RXE_TYPE_MC_GRP, + rxe_pool_init(rxe, &rxe->mc_grp_pool, RXE_TYPE_MC_GRP, rxe->attr.max_mcast_grp); - if (err) - goto err9; - - return 0; - -err9: - rxe_pool_cleanup(&rxe->mw_pool); -err8: - rxe_pool_cleanup(&rxe->mr_pool); -err7: - rxe_pool_cleanup(&rxe->cq_pool); -err6: - rxe_pool_cleanup(&rxe->qp_pool); -err5: - rxe_pool_cleanup(&rxe->srq_pool); -err4: - rxe_pool_cleanup(&rxe->ah_pool); -err3: - rxe_pool_cleanup(&rxe->pd_pool); -err2: - rxe_pool_cleanup(&rxe->uc_pool); -err1: - return err; } /* initialize rxe device state */ -static int rxe_init(struct rxe_dev *rxe) +static void rxe_init(struct rxe_dev *rxe) { - int err; - /* init default device parameters */ rxe_init_device_param(rxe); rxe_init_ports(rxe); - err = rxe_init_pools(rxe); - if (err) - return err; + 
rxe_init_pools(rxe); /* init pending mmap list */ spin_lock_init(&rxe->mmap_offset_lock); @@ -206,8 +153,6 @@ static int rxe_init(struct rxe_dev *rxe) INIT_LIST_HEAD(&rxe->pending_mmaps); mutex_init(&rxe->usdev_lock); - - return 0; } void rxe_set_mtu(struct rxe_dev *rxe, unsigned int ndev_mtu) @@ -229,12 +174,7 @@ void rxe_set_mtu(struct rxe_dev *rxe, unsigned int ndev_mtu) */ int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name) { - int err; - - err = rxe_init(rxe); - if (err) - return err; - + rxe_init(rxe); rxe_set_mtu(rxe, mtu); return rxe_register_device(rxe, ibdev_name); diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index 364449c284a3..6e51483c0494 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -82,42 +82,13 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { }, }; -static int rxe_pool_init_index(struct rxe_pool *pool, u32 max, u32 min) -{ - int err = 0; - size_t size; - - if ((max - min + 1) < pool->max_elem) { - pr_warn("not enough indices for max_elem\n"); - err = -EINVAL; - goto out; - } - - pool->index.max_index = max; - pool->index.min_index = min; - - size = BITS_TO_LONGS(max - min + 1) * sizeof(long); - pool->index.table = kmalloc(size, GFP_KERNEL); - if (!pool->index.table) { - err = -ENOMEM; - goto out; - } - - pool->index.table_size = size; - bitmap_zero(pool->index.table, max - min + 1); - -out: - return err; -} - -int rxe_pool_init( +void rxe_pool_init( struct rxe_dev *rxe, struct rxe_pool *pool, enum rxe_elem_type type, unsigned int max_elem) { const struct rxe_type_info *info = &rxe_type_info[type]; - int err = 0; memset(pool, 0, sizeof(*pool)); @@ -138,22 +109,11 @@ int rxe_pool_init( pool->xarray.limit.min = info->min_index; } - if (info->flags & RXE_POOL_INDEX) { - pool->index.tree = RB_ROOT; - err = rxe_pool_init_index(pool, info->max_index, - info->min_index); - if (err) - goto out; - } - if (info->flags & RXE_POOL_KEY) { pool->key.tree = RB_ROOT; pool->key.key_offset = info->key_offset; pool->key.key_size = info->key_size; } - -out: - return err; } void rxe_pool_cleanup(struct rxe_pool *pool) @@ -161,59 +121,6 @@ void rxe_pool_cleanup(struct rxe_pool *pool) if (atomic_read(&pool->num_elem) > 0) pr_warn("%s pool destroyed with unfree'd elem\n", pool->name); - - if (pool->flags & RXE_POOL_INDEX) - kfree(pool->index.table); -} - -/* should never fail because there are at least as many indices as - * max objects - */ -static u32 alloc_index(struct rxe_pool *pool) -{ - u32 index; - u32 range = pool->index.max_index - pool->index.min_index + 1; - - index = find_next_zero_bit(pool->index.table, range, - pool->index.last); - if (index >= range) - index = find_first_zero_bit(pool->index.table, range); - - WARN_ON_ONCE(index >= range); - set_bit(index, pool->index.table); - pool->index.last = index; - return index + pool->index.min_index; -} - -static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_elem *new) -{ - struct rb_node **link = &pool->index.tree.rb_node; - struct rb_node *parent = NULL; - struct rxe_pool_elem *elem; - - while (*link) { - parent = *link; - elem = rb_entry(parent, struct rxe_pool_elem, index_node); - - /* this can happen if memory was recycled and/or the - * old object was not deleted from the pool index - */ - if (unlikely(elem == new || elem->index == new->index)) { - pr_warn("%s#%d: already in pool\n", pool->name, - new->index); - return -EINVAL; - } - - if (elem->index > new->index) - link = 
&(*link)->rb_left; - else - link = &(*link)->rb_right; - } - - rb_link_node(&new->index_node, parent, link); - rb_insert_color(&new->index_node, &pool->index.tree); - - return 0; } static int rxe_insert_key(struct rxe_pool *pool, struct rxe_pool_elem *new) @@ -248,28 +155,6 @@ static int rxe_insert_key(struct rxe_pool *pool, struct rxe_pool_elem *new) return 0; } -static int rxe_add_index(struct rxe_pool_elem *elem) -{ - struct rxe_pool *pool = elem->pool; - int err; - - elem->index = alloc_index(pool); - err = rxe_insert_index(pool, elem); - if (err) - clear_bit(elem->index - pool->index.min_index, - pool->index.table); - - return err; -} - -static void rxe_drop_index(struct rxe_pool_elem *elem) -{ - struct rxe_pool *pool = elem->pool; - - clear_bit(elem->index - pool->index.min_index, pool->index.table); - rb_erase(&elem->index_node, &pool->index.tree); -} - static void *__rxe_alloc_locked(struct rxe_pool *pool) { struct rxe_pool_elem *elem; @@ -296,12 +181,6 @@ static void *__rxe_alloc_locked(struct rxe_pool *pool) goto err; } - if (pool->flags & RXE_POOL_INDEX) { - err = rxe_add_index(elem); - if (err) - goto err; - } - return obj; err: @@ -393,12 +272,6 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) goto err; } - if (pool->flags & RXE_POOL_INDEX) { - err = rxe_add_index(elem); - if (err) - goto err; - } - refcount_set(&elem->refcnt, 1); xa_unlock_bh(&pool->xarray.xa); @@ -410,42 +283,6 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) return -EINVAL; } -static void *__rxe_get_index_locked(struct rxe_pool *pool, u32 index) -{ - struct rb_node *node; - struct rxe_pool_elem *elem; - void *obj = NULL; - - node = pool->index.tree.rb_node; - - while (node) { - elem = rb_entry(node, struct rxe_pool_elem, index_node); - - if (elem->index > index) - node = node->rb_left; - else if (elem->index < index) - node = node->rb_right; - else - break; - } - - if (node && refcount_inc_not_zero(&elem->refcnt)) - obj = elem->obj; - - return obj; -} - -static void *__rxe_get_index(struct rxe_pool *pool, u32 index) -{ - void *obj; - - xa_lock_bh(&pool->xarray.xa); - obj = __rxe_get_index_locked(pool, index); - xa_unlock_bh(&pool->xarray.xa); - - return obj; -} - static void *__rxe_get_xarray_locked(struct rxe_pool *pool, u32 index) { struct rxe_pool_elem *elem; @@ -473,8 +310,6 @@ void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index) { if (pool->flags & RXE_POOL_XARRAY) return __rxe_get_xarray_locked(pool, index); - if (pool->flags & RXE_POOL_INDEX) - return __rxe_get_index_locked(pool, index); return NULL; } @@ -482,8 +317,6 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) { if (pool->flags & RXE_POOL_XARRAY) return __rxe_get_xarray(pool, index); - if (pool->flags & RXE_POOL_INDEX) - return __rxe_get_index(pool, index); return NULL; } @@ -582,8 +415,6 @@ static int __rxe_fini(struct rxe_pool_elem *elem) if (done) { if (pool->flags & RXE_POOL_XARRAY) __xa_erase(&pool->xarray.xa, elem->index); - if (pool->flags & RXE_POOL_INDEX) - rxe_drop_index(elem); if (pool->flags & RXE_POOL_KEY) rb_erase(&elem->key_node, &pool->key.tree); atomic_dec(&pool->num_elem); diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index f9c4f09cdcc9..191e5aea454f 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -14,7 +14,6 @@ #define RXE_POOL_CACHE_FLAGS (0) enum rxe_pool_flags { - RXE_POOL_INDEX = BIT(1), RXE_POOL_KEY = BIT(2), RXE_POOL_XARRAY = BIT(3), RXE_POOL_NO_ALLOC 
= BIT(4), @@ -57,7 +56,6 @@ struct rxe_pool_elem { struct rb_node key_node; /* only used if indexed */ - struct rb_node index_node; u32 index; }; @@ -81,16 +79,6 @@ struct rxe_pool { u32 next; } xarray; - /* only used if indexed */ - struct { - struct rb_root tree; - unsigned long *table; - size_t table_size; - u32 last; - u32 max_index; - u32 min_index; - } index; - /* only used if keyed */ struct { struct rb_root tree; @@ -103,7 +91,7 @@ struct rxe_pool { * number of elements. gets parameters from rxe_type_info * pool elements will be allocated out of a slab cache */ -int rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, +void rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, enum rxe_elem_type type, u32 max_elem); /* free resources from object pool */ From patchwork Wed Oct 20 22:05:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12573459 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 15ADCC433F5 for ; Wed, 20 Oct 2021 22:07:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F0390611CB for ; Wed, 20 Oct 2021 22:07:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230433AbhJTWJs (ORCPT ); Wed, 20 Oct 2021 18:09:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44266 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231183AbhJTWJq (ORCPT ); Wed, 20 Oct 2021 18:09:46 -0400 Received: from mail-ot1-x32b.google.com (mail-ot1-x32b.google.com [IPv6:2607:f8b0:4864:20::32b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 127F5C061749 for ; Wed, 20 Oct 2021 15:07:32 -0700 (PDT) Received: by mail-ot1-x32b.google.com with SMTP id e59-20020a9d01c1000000b00552c91a99f7so8759079ote.6 for ; Wed, 20 Oct 2021 15:07:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=mUZlyYFvpB1zhxSg6eTAb3KaGoV30W/+ippY3r9hWpA=; b=eVvVWyDhcKgHeRePEfZKP+gaIjKKqTKIDQyurjbnt1jVAOLSZAGVInqndHH5W9tsSQ XXCvYfObG0QPgrWjyrmsEgYynVrlFIeEUQ8d587cW5yman6140FomHr5xqkYd4oTfXhP ZqWoMIG+VCXTWqS/rWFvQltfv1qoDXJpcfbpMmr/s3KAyhZEdKy+B0OTvt84V5GvoY7+ OH00i9h2J+Hk3YTtcTlxTD2Zk0cUO+2QVHnRTFg1aXtOjB2sFBf6tDyYZvIEZ7xetxhy yqXy9XU2dg6NhswchuUVmtzshUTQNHo8DIAkaOXOAK47jYruZTUnPAOyhLu/rWkJEvEF 5p7Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=mUZlyYFvpB1zhxSg6eTAb3KaGoV30W/+ippY3r9hWpA=; b=AzY6ZpnZ/qFUYsNkJQgfrTtgRmm+1HbWRkCAJsn4Ir2EJU5cgn5LjZrnxF+56dyMXT OveX++q81kEXZdKbKHp9V9gJQuqFAPi6BV1l9qYADs8+FNHPKA+U3Lsz0yhWrKqeoo6U 16cK8K4SxPpBdTPtypWn0tMnebSdBhEwnxlQ7dOhhyAiP3AG8kM83F7SI27Jm8qqXjUq ux0mv1bNQJrjg5pxchLLRKpnyjYZqEVjLfXeI2pGD8acD2Jn6S2iopsj5y17aSMdcFSf BDGkbLfOD2DcGPhuXl+yQoKRNmW5aEuIHpSQ0rlofYVCJzROpPJw//3fLVFCHGsZ1De8 I9OQ== X-Gm-Message-State: AOAM532nWdsBKbNHmdPIPADCI137fZmYOENm5b7vNnyrbBS+3n1uDKNu tdeQQ50ePGVmn1kRe60uIMNkp0dL/BY= X-Google-Smtp-Source: 
ABdhPJzXuelvpQZTBu0NVQ4CemBcHoS5PcVFAKJ/R74y0ernWCNxk5gxGjYdbvqQ7rBnNH5wtcCNRw== X-Received: by 2002:a9d:8ab:: with SMTP id 40mr295084otf.109.1634767651509; Wed, 20 Oct 2021 15:07:31 -0700 (PDT) Received: from ubunto-21.tx.rr.com (2603-8081-140c-1a00-8d65-dc0b-4dcc-2f9b.res6.spectrum.com. [2603:8081:140c:1a00:8d65:dc0b:4dcc:2f9b]) by smtp.gmail.com with ESMTPSA id v13sm725050otn.41.2021.10.20.15.07.30 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 20 Oct 2021 15:07:31 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next 7/7] RDMA/rxe: Rename XARRAY as INDEX Date: Wed, 20 Oct 2021 17:05:50 -0500 Message-Id: <20211020220549.36145-8-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211020220549.36145-1-rpearsonhpe@gmail.com> References: <20211020220549.36145-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Rename RXE_POOL_XARRAY as RXE_POOL_INDEX and change several function names .._index_... from .._xarray_.. which completes the process of replacing red-black trees by xarrays. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_pool.c | 38 +++++++++------------------- drivers/infiniband/sw/rxe/rxe_pool.h | 4 +-- 2 files changed, 14 insertions(+), 28 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index 6e51483c0494..6367cf68d19d 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -25,7 +25,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { .name = "rxe-ah", .size = sizeof(struct rxe_ah), .elem_offset = offsetof(struct rxe_ah, elem), - .flags = RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC, + .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, .min_index = RXE_MIN_AH_INDEX, .max_index = RXE_MAX_AH_INDEX, }, @@ -34,7 +34,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { .size = sizeof(struct rxe_srq), .elem_offset = offsetof(struct rxe_srq, elem), .cleanup = rxe_srq_cleanup, - .flags = RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC, + .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, .min_index = RXE_MIN_SRQ_INDEX, .max_index = RXE_MAX_SRQ_INDEX, }, @@ -43,7 +43,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { .size = sizeof(struct rxe_qp), .elem_offset = offsetof(struct rxe_qp, elem), .cleanup = rxe_qp_cleanup, - .flags = RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC, + .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, .min_index = RXE_MIN_QP_INDEX, .max_index = RXE_MAX_QP_INDEX, }, @@ -59,7 +59,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { .size = sizeof(struct rxe_mr), .elem_offset = offsetof(struct rxe_mr, elem), .cleanup = rxe_mr_cleanup, - .flags = RXE_POOL_XARRAY, + .flags = RXE_POOL_INDEX, .max_index = RXE_MAX_MR_INDEX, .min_index = RXE_MIN_MR_INDEX, }, @@ -68,7 +68,7 @@ static const struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = { .size = sizeof(struct rxe_mw), .elem_offset = offsetof(struct rxe_mw, elem), .cleanup = rxe_mw_cleanup, - .flags = RXE_POOL_XARRAY | RXE_POOL_NO_ALLOC, + .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC, .max_index = RXE_MAX_MW_INDEX, .min_index = RXE_MIN_MW_INDEX, }, @@ -103,7 +103,7 @@ void rxe_pool_init( atomic_set(&pool->num_elem, 0); - if (info->flags & RXE_POOL_XARRAY) { + if (info->flags & RXE_POOL_INDEX) { xa_init_flags(&pool->xarray.xa, XA_FLAGS_ALLOC); pool->xarray.limit.max = info->max_index; 
pool->xarray.limit.min = info->min_index; @@ -173,7 +173,7 @@ static void *__rxe_alloc_locked(struct rxe_pool *pool) elem->pool = pool; elem->obj = obj; - if (pool->flags & RXE_POOL_XARRAY) { + if (pool->flags & RXE_POOL_INDEX) { err = __xa_alloc_cyclic(&pool->xarray.xa, &elem->index, elem, pool->xarray.limit, &pool->xarray.next, GFP_KERNEL); @@ -264,7 +264,7 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) elem->pool = pool; elem->obj = (u8 *)elem - pool->elem_offset; - if (pool->flags & RXE_POOL_XARRAY) { + if (pool->flags & RXE_POOL_INDEX) { err = __xa_alloc_cyclic(&pool->xarray.xa, &elem->index, elem, pool->xarray.limit, &pool->xarray.next, GFP_KERNEL); @@ -283,7 +283,7 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) return -EINVAL; } -static void *__rxe_get_xarray_locked(struct rxe_pool *pool, u32 index) +void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index) { struct rxe_pool_elem *elem; void *obj = NULL; @@ -295,31 +295,17 @@ static void *__rxe_get_xarray_locked(struct rxe_pool *pool, u32 index) return obj; } -static void *__rxe_get_xarray(struct rxe_pool *pool, u32 index) +void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) { void *obj; xa_lock_bh(&pool->xarray.xa); - obj = __rxe_get_xarray_locked(pool, index); + obj = rxe_pool_get_index_locked(pool, index); xa_unlock_bh(&pool->xarray.xa); return obj; } -void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index) -{ - if (pool->flags & RXE_POOL_XARRAY) - return __rxe_get_xarray_locked(pool, index); - return NULL; -} - -void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) -{ - if (pool->flags & RXE_POOL_XARRAY) - return __rxe_get_xarray(pool, index); - return NULL; -} - void *rxe_pool_get_key_locked(struct rxe_pool *pool, void *key) { struct rb_node *node; @@ -413,7 +399,7 @@ static int __rxe_fini(struct rxe_pool_elem *elem) done = refcount_dec_if_one(&elem->refcnt); if (done) { - if (pool->flags & RXE_POOL_XARRAY) + if (pool->flags & RXE_POOL_INDEX) __xa_erase(&pool->xarray.xa, elem->index); if (pool->flags & RXE_POOL_KEY) rb_erase(&elem->key_node, &pool->key.tree); diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index 191e5aea454f..95a6b1e5232f 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -14,8 +14,8 @@ #define RXE_POOL_CACHE_FLAGS (0) enum rxe_pool_flags { + RXE_POOL_INDEX = BIT(1), RXE_POOL_KEY = BIT(2), - RXE_POOL_XARRAY = BIT(3), RXE_POOL_NO_ALLOC = BIT(4), }; @@ -72,7 +72,7 @@ struct rxe_pool { size_t elem_size; size_t elem_offset; - /* only used if xarray */ + /* only used if indexed */ struct { struct xarray xa; struct xa_limit limit;