From patchwork Fri Mar 4 00:07:57 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12768321
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v11 01/13] RDMA/rxe: Fix ref error in rxe_av.c
Date: Thu, 3 Mar 2022 18:07:57 -0600
Message-Id: <20220304000808.225811-2-rpearsonhpe@gmail.com>
In-Reply-To: <20220304000808.225811-1-rpearsonhpe@gmail.com>
References: <20220304000808.225811-1-rpearsonhpe@gmail.com>

The commit referenced below can take a reference to the AH that is never
dropped. This only happens in the UD request path. This patch optionally
passes that AH back to the caller so that it can hold the reference while
the AV is being accessed and then drop it. Code to do this is added to
rxe_req.c. The AV is also passed to rxe_prepare() in rxe_net.c as an
optimization.

Fixes: e2fe06c90806 ("RDMA/rxe: Lookup kernel AH from ah index in UD WQEs")
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_av.c   | 19 +++++++++-
 drivers/infiniband/sw/rxe/rxe_loc.h  |  5 ++-
 drivers/infiniband/sw/rxe/rxe_net.c  | 17 +++++----
 drivers/infiniband/sw/rxe/rxe_req.c  | 55 +++++++++++++++++-----------
 drivers/infiniband/sw/rxe/rxe_resp.c |  2 +-
 5 files changed, 63 insertions(+), 35 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_av.c b/drivers/infiniband/sw/rxe/rxe_av.c
index 38c7b6fb39d7..360a567159fe 100644
--- a/drivers/infiniband/sw/rxe/rxe_av.c
+++ b/drivers/infiniband/sw/rxe/rxe_av.c
@@ -99,11 +99,14 @@ void rxe_av_fill_ip_info(struct rxe_av *av, struct rdma_ah_attr *attr)
         av->network_type = type;
 }
 
-struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt)
+struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt, struct rxe_ah **ahp)
 {
         struct rxe_ah *ah;
         u32 ah_num;
 
+        if (ahp)
+                *ahp = NULL;
+
         if (!pkt || !pkt->qp)
                 return NULL;
 
@@ -117,10 +120,22 @@ struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt)
         if (ah_num) {
                 /* only new user provider or kernel client */
                 ah = rxe_pool_get_index(&pkt->rxe->ah_pool, ah_num);
-                if (!ah || ah->ah_num != ah_num || rxe_ah_pd(ah) != pkt->qp->pd) {
+                if (!ah) {
                         pr_warn("Unable to find AH matching ah_num\n");
                         return NULL;
                 }
+
+                if (rxe_ah_pd(ah) != pkt->qp->pd) {
+                        pr_warn("PDs don't match for AH and QP\n");
+                        rxe_drop_ref(ah);
+                        return NULL;
+                }
+
+                if (ahp)
+                        *ahp = ah;
+                else
+                        rxe_drop_ref(ah);
+
                 return &ah->av;
         }
 
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 409efeecd581..2ffbe3390668 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -19,7 +19,7 @@ void rxe_av_to_attr(struct rxe_av *av, struct rdma_ah_attr *attr);
 
 void rxe_av_fill_ip_info(struct rxe_av *av, struct rdma_ah_attr *attr);
 
-struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt);
+struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt, struct rxe_ah **ahp);
 
 /* rxe_cq.c */
 int rxe_cq_chk_attr(struct rxe_dev *rxe, struct rxe_cq *cq,
@@ -94,7 +94,8 @@ void rxe_mw_cleanup(struct rxe_pool_elem *arg);
 /* rxe_net.c */
 struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
                                 int paylen, struct rxe_pkt_info *pkt);
-int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb);
+int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt,
+                struct sk_buff *skb);
 int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
                     struct sk_buff *skb);
 const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num);
diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index a8cfa7160478..b06f22ffc5a8 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -271,13 +271,13 @@ static void prepare_ipv6_hdr(struct dst_entry *dst, struct sk_buff *skb,
         ip6h->payload_len = htons(skb->len - sizeof(*ip6h));
 }
 
-static int prepare4(struct rxe_pkt_info *pkt, struct sk_buff *skb)
+static int prepare4(struct rxe_av *av, struct rxe_pkt_info *pkt,
+                    struct sk_buff *skb)
 {
         struct rxe_qp *qp = pkt->qp;
         struct dst_entry *dst;
         bool xnet = false;
         __be16 df = htons(IP_DF);
-        struct rxe_av *av = rxe_get_av(pkt);
         struct in_addr *saddr = &av->sgid_addr._sockaddr_in.sin_addr;
         struct in_addr *daddr = &av->dgid_addr._sockaddr_in.sin_addr;
 
@@ -297,11 +297,11 @@ static int prepare4(struct rxe_pkt_info *pkt, struct sk_buff *skb)
         return 0;
 }
 
-static int prepare6(struct rxe_pkt_info *pkt, struct sk_buff *skb)
+static int prepare6(struct rxe_av *av, struct rxe_pkt_info *pkt,
+                    struct sk_buff *skb)
 {
         struct rxe_qp *qp = pkt->qp;
         struct dst_entry *dst;
-        struct rxe_av *av = rxe_get_av(pkt);
         struct in6_addr *saddr = &av->sgid_addr._sockaddr_in6.sin6_addr;
         struct in6_addr *daddr = &av->dgid_addr._sockaddr_in6.sin6_addr;
 
@@ -322,16 +322,17 @@ static int prepare6(struct rxe_pkt_info *pkt, struct sk_buff *skb)
         return 0;
 }
 
-int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb)
+int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt,
+                struct sk_buff *skb)
 {
         int err = 0;
 
         if (skb->protocol == htons(ETH_P_IP))
-                err = prepare4(pkt, skb);
+                err = prepare4(av, pkt, skb);
         else if (skb->protocol == htons(ETH_P_IPV6))
-                err = prepare6(pkt, skb);
+                err = prepare6(av, pkt, skb);
 
-        if (ether_addr_equal(skb->dev->dev_addr, rxe_get_av(pkt)->dmac))
+        if (ether_addr_equal(skb->dev->dev_addr, av->dmac))
                 pkt->mask |= RXE_LOOPBACK_MASK;
 
         return err;
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 5eb89052dd66..f44535f82bea 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -358,6 +358,7 @@ static inline int get_mtu(struct rxe_qp *qp)
 }
 
 static struct sk_buff *init_req_packet(struct rxe_qp *qp,
+                                       struct rxe_av *av,
                                        struct rxe_send_wqe *wqe,
                                        int opcode, int payload,
                                        struct rxe_pkt_info *pkt)
@@ -365,7 +366,6 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
         struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
         struct sk_buff *skb;
         struct rxe_send_wr *ibwr = &wqe->wr;
-        struct rxe_av *av;
         int pad = (-payload) & 0x3;
         int paylen;
         int solicited;
@@ -374,21 +374,9 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
 
         /* length from start of bth to end of icrc */
         paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
-
-        /* pkt->hdr, port_num and mask are initialized in ifc layer */
-        pkt->rxe = rxe;
-        pkt->opcode = opcode;
-        pkt->qp = qp;
-        pkt->psn = qp->req.psn;
-        pkt->mask = rxe_opcode[opcode].mask;
-        pkt->paylen = paylen;
-        pkt->wqe = wqe;
+        pkt->paylen = paylen;
 
         /* init skb */
-        av = rxe_get_av(pkt);
-        if (!av)
-                return NULL;
-
         skb = rxe_init_packet(rxe, av, paylen, pkt);
         if (unlikely(!skb))
                 return NULL;
@@ -447,13 +435,13 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
         return skb;
 }
 
-static int finish_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
-                         struct rxe_pkt_info *pkt, struct sk_buff *skb,
-                         int paylen)
+static int finish_packet(struct rxe_qp *qp, struct rxe_av *av,
+                         struct rxe_send_wqe *wqe, struct rxe_pkt_info *pkt,
+                         struct sk_buff *skb, int paylen)
 {
         int err;
 
-        err = rxe_prepare(pkt, skb);
+        err = rxe_prepare(av, pkt, skb);
         if (err)
                 return err;
 
@@ -608,6 +596,7 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 int rxe_requester(void *arg)
 {
         struct rxe_qp *qp = (struct rxe_qp *)arg;
+        struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
         struct rxe_pkt_info pkt;
         struct sk_buff *skb;
         struct rxe_send_wqe *wqe;
@@ -619,6 +608,8 @@ int rxe_requester(void *arg)
         struct rxe_send_wqe rollback_wqe;
         u32 rollback_psn;
         struct rxe_queue *q = qp->sq.queue;
+        struct rxe_ah *ah;
+        struct rxe_av *av;
 
         rxe_add_ref(qp);
 
@@ -705,14 +696,28 @@ int rxe_requester(void *arg)
                 payload = mtu;
         }
 
-        skb = init_req_packet(qp, wqe, opcode, payload, &pkt);
+        pkt.rxe = rxe;
+        pkt.opcode = opcode;
+        pkt.qp = qp;
+        pkt.psn = qp->req.psn;
+        pkt.mask = rxe_opcode[opcode].mask;
+        pkt.wqe = wqe;
+
+        av = rxe_get_av(&pkt, &ah);
+        if (unlikely(!av)) {
+                pr_err("qp#%d Failed no address vector\n", qp_num(qp));
+                wqe->status = IB_WC_LOC_QP_OP_ERR;
+                goto err_drop_ah;
+        }
+
+        skb = init_req_packet(qp, av, wqe, opcode, payload, &pkt);
         if (unlikely(!skb)) {
                 pr_err("qp#%d Failed allocating skb\n", qp_num(qp));
                 wqe->status = IB_WC_LOC_QP_OP_ERR;
-                goto err;
+                goto err_drop_ah;
         }
 
-        ret = finish_packet(qp, wqe, &pkt, skb, payload);
+        ret = finish_packet(qp, av, wqe, &pkt, skb, payload);
         if (unlikely(ret)) {
                 pr_debug("qp#%d Error during finish packet\n", qp_num(qp));
                 if (ret == -EFAULT)
@@ -720,9 +725,12 @@ int rxe_requester(void *arg)
                 else
                         wqe->status = IB_WC_LOC_QP_OP_ERR;
                 kfree_skb(skb);
-                goto err;
+                goto err_drop_ah;
         }
 
+        if (ah)
+                rxe_drop_ref(ah);
+
         /*
          * To prevent a race on wqe access between requester and completer,
          * wqe members state and psn need to be set before calling
@@ -751,6 +759,9 @@ int rxe_requester(void *arg)
 
         goto next_wqe;
 
+err_drop_ah:
+        if (ah)
+                rxe_drop_ref(ah);
 err:
         wqe->state = wqe_state_error;
         __rxe_do_task(&qp->comp.task);
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index c369d78fc8e8..b5ebe853748a 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -633,7 +633,7 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
         if (ack->mask & RXE_ATMACK_MASK)
                 atmack_set_orig(ack, qp->resp.atomic_orig);
 
-        err = rxe_prepare(ack, skb);
+        err = rxe_prepare(&qp->pri_av, ack, skb);
         if (err) {
                 kfree_skb(skb);
                 return NULL;
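The new contract is easiest to see in caller form. Below is a minimal
sketch (not part of the patch; the helper name process_ud_wqe() is
invented for illustration, the real caller is rxe_requester() above):

/* Illustrative only: how a caller uses rxe_get_av() after this patch */
static int process_ud_wqe(struct rxe_pkt_info *pkt)
{
        struct rxe_ah *ah;      /* set to the AH (with a ref held) or NULL */
        struct rxe_av *av;

        av = rxe_get_av(pkt, &ah);      /* takes a ref on the AH if used */
        if (!av)
                return -EINVAL;

        /* ... build and transmit the packet; av stays valid because
         * the AH reference is held across the access ...
         */

        if (ah)
                rxe_drop_ref(ah);       /* drop the ref rxe_get_av() took */
        return 0;
}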
From patchwork Fri Mar 4 00:07:58 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12768322
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v11 02/13] RDMA/rxe: Replace mr by rkey in responder resources
Date: Thu, 3 Mar 2022 18:07:58 -0600
Message-Id: <20220304000808.225811-3-rpearsonhpe@gmail.com>
In-Reply-To: <20220304000808.225811-1-rpearsonhpe@gmail.com>
References: <20220304000808.225811-1-rpearsonhpe@gmail.com>

Currently rxe saves a copy of the MR in responder resources for RDMA
reads. Since the responder resources are never freed, only overwritten
when more are needed, this MR's reference may not be dropped until the
QP is destroyed. This patch stores the rkey instead of the MR and, on
subsequent packets of a multi-packet read reply message, looks up the
MR from the rkey for each packet. This makes it possible for a user to
deregister an MR or unbind a MW on the fly and get correct behaviour.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_qp.c    |  10 +--
 drivers/infiniband/sw/rxe/rxe_resp.c  | 123 ++++++++++++++++++--------
 drivers/infiniband/sw/rxe/rxe_verbs.h |   1 -
 3 files changed, 87 insertions(+), 47 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 5f270cbf18c6..26d461a8d71c 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -135,12 +135,8 @@ static void free_rd_atomic_resources(struct rxe_qp *qp)
 
 void free_rd_atomic_resource(struct rxe_qp *qp, struct resp_res *res)
 {
-        if (res->type == RXE_ATOMIC_MASK) {
+        if (res->type == RXE_ATOMIC_MASK)
                 kfree_skb(res->atomic.skb);
-        } else if (res->type == RXE_READ_MASK) {
-                if (res->read.mr)
-                        rxe_drop_ref(res->read.mr);
-        }
 
         res->type = 0;
 }
@@ -825,10 +821,8 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
         if (qp->pd)
                 rxe_drop_ref(qp->pd);
 
-        if (qp->resp.mr) {
+        if (qp->resp.mr)
                 rxe_drop_ref(qp->resp.mr);
-                qp->resp.mr = NULL;
-        }
 
         if (qp_type(qp) == IB_QPT_RC)
                 sk_dst_reset(qp->sk->sk);
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index b5ebe853748a..b1ec003f0bb8 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -642,6 +642,78 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
         return skb;
 }
 
+static struct resp_res *rxe_prepare_read_res(struct rxe_qp *qp,
+                                             struct rxe_pkt_info *pkt)
+{
+        struct resp_res *res;
+        u32 pkts;
+
+        res = &qp->resp.resources[qp->resp.res_head];
+        rxe_advance_resp_resource(qp);
+        free_rd_atomic_resource(qp, res);
+
+        res->type = RXE_READ_MASK;
+        res->replay = 0;
+        res->read.va = qp->resp.va + qp->resp.offset;
+        res->read.va_org = qp->resp.va + qp->resp.offset;
+        res->read.resid = qp->resp.resid;
+        res->read.length = qp->resp.resid;
+        res->read.rkey = qp->resp.rkey;
+
+        pkts = max_t(u32, (reth_len(pkt) + qp->mtu - 1)/qp->mtu, 1);
+        res->first_psn = pkt->psn;
+        res->cur_psn = pkt->psn;
+        res->last_psn = (pkt->psn + pkts - 1) & BTH_PSN_MASK;
+
+        res->state = rdatm_res_state_new;
+
+        return res;
+}
+
+/**
+ * rxe_recheck_mr - revalidate MR from rkey and get a reference
+ * @qp: the qp
+ * @rkey: the rkey
+ *
+ * This code allows the MR to be invalidated or deregistered, or
+ * the MW, if one was used, to be invalidated or deallocated.
+ * It is assumed that the access permissions, if originally good,
+ * are still OK and that the mappings are unchanged.
+ *
+ * Return: mr on success else NULL
+ */
+static struct rxe_mr *rxe_recheck_mr(struct rxe_qp *qp, u32 rkey)
+{
+        struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+        struct rxe_mr *mr;
+        struct rxe_mw *mw;
+
+        if (rkey_is_mw(rkey)) {
+                mw = rxe_pool_get_index(&rxe->mw_pool, rkey >> 8);
+                if (!mw || mw->rkey != rkey)
+                        return NULL;
+
+                if (mw->state != RXE_MW_STATE_VALID) {
+                        rxe_drop_ref(mw);
+                        return NULL;
+                }
+
+                mr = mw->mr;
+                rxe_drop_ref(mw);
+        } else {
+                mr = rxe_pool_get_index(&rxe->mr_pool, rkey >> 8);
+                if (!mr || mr->rkey != rkey)
+                        return NULL;
+        }
+
+        if (mr->state != RXE_MR_STATE_VALID) {
+                rxe_drop_ref(mr);
+                return NULL;
+        }
+
+        return mr;
+}
+
 /* RDMA read response. If res is not NULL, then we have a current RDMA request
  * being processed or replayed.
  */
@@ -656,53 +728,26 @@ static enum resp_states read_reply(struct rxe_qp *qp,
         int opcode;
         int err;
         struct resp_res *res = qp->resp.res;
+        struct rxe_mr *mr;
 
         if (!res) {
-                /* This is the first time we process that request. Get a
-                 * resource
-                 */
-                res = &qp->resp.resources[qp->resp.res_head];
-
-                free_rd_atomic_resource(qp, res);
-                rxe_advance_resp_resource(qp);
-
-                res->type = RXE_READ_MASK;
-                res->replay = 0;
-
-                res->read.va = qp->resp.va +
-                               qp->resp.offset;
-                res->read.va_org = qp->resp.va +
-                                   qp->resp.offset;
-
-                res->first_psn = req_pkt->psn;
-
-                if (reth_len(req_pkt)) {
-                        res->last_psn = (req_pkt->psn +
-                                         (reth_len(req_pkt) + mtu - 1) /
-                                         mtu - 1) & BTH_PSN_MASK;
-                } else {
-                        res->last_psn = res->first_psn;
-                }
-                res->cur_psn = req_pkt->psn;
-
-                res->read.resid = qp->resp.resid;
-                res->read.length = qp->resp.resid;
-                res->read.rkey = qp->resp.rkey;
-
-                /* note res inherits the reference to mr from qp */
-                res->read.mr = qp->resp.mr;
-                qp->resp.mr = NULL;
-
-                qp->resp.res = res;
-                res->state = rdatm_res_state_new;
+                res = rxe_prepare_read_res(qp, req_pkt);
+                qp->resp.res = res;
         }
 
         if (res->state == rdatm_res_state_new) {
+                mr = qp->resp.mr;
+                qp->resp.mr = NULL;
+
                 if (res->read.resid <= mtu)
                         opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_ONLY;
                 else
                         opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST;
         } else {
+                mr = rxe_recheck_mr(qp, res->read.rkey);
+                if (!mr)
+                        return RESPST_ERR_RKEY_VIOLATION;
+
                 if (res->read.resid > mtu)
                         opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE;
                 else
@@ -718,10 +763,12 @@ static enum resp_states read_reply(struct rxe_qp *qp,
         if (!skb)
                 return RESPST_ERR_RNR;
 
-        err = rxe_mr_copy(res->read.mr, res->read.va, payload_addr(&ack_pkt),
+        err = rxe_mr_copy(mr, res->read.va, payload_addr(&ack_pkt),
                           payload, RXE_FROM_MR_OBJ);
         if (err)
                 pr_err("Failed copying memory\n");
+        if (mr)
+                rxe_drop_ref(mr);
 
         if (bth_pad(&ack_pkt)) {
                 u8 *pad = payload_addr(&ack_pkt) + payload;
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 6b15251ff67a..e7eff1ca75e9 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -157,7 +157,6 @@ struct resp_res {
                         struct sk_buff *skb;
                 } atomic;
                 struct {
-                        struct rxe_mr *mr;
                         u64 va_org;
                         u32 rkey;
                         u32 length;
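The per-packet flow that replaces the cached MR pointer can be sketched
as follows (illustrative only; the helper name is invented, the real
logic lives in read_reply() above):

/* Illustrative only: per-packet MR revalidation as introduced above */
static enum resp_states send_next_read_reply(struct rxe_qp *qp,
                                             struct resp_res *res)
{
        struct rxe_mr *mr;

        /* look the MR up again from the saved rkey instead of holding
         * an MR pointer (and its reference) across the whole reply
         */
        mr = rxe_recheck_mr(qp, res->read.rkey);
        if (!mr)
                return RESPST_ERR_RKEY_VIOLATION;

        /* ... copy the next payload chunk out of mr and send it ... */

        rxe_drop_ref(mr);       /* the reference only spans this packet */
        return RESPST_DONE;
}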
From patchwork Fri Mar 4 00:07:59 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12768327
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v11 03/13] RDMA/rxe: Reverse the sense of RXE_POOL_NO_ALLOC
Date: Thu, 3 Mar 2022 18:07:59 -0600
Message-Id: <20220304000808.225811-4-rpearsonhpe@gmail.com>
In-Reply-To: <20220304000808.225811-1-rpearsonhpe@gmail.com>
References: <20220304000808.225811-1-rpearsonhpe@gmail.com>

There is only one remaining object type that allocates its own memory:
the MR. So the sense of RXE_POOL_NO_ALLOC is reversed and the flag is
renamed to RXE_POOL_ALLOC. Checks are added to rxe_alloc() and
rxe_add_to_pool() to make sure the correct call is used for each
setting of this flag.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 21 ++++++++++++---------
 drivers/infiniband/sw/rxe/rxe_pool.h |  2 +-
 2 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 16056b918ace..239c24544ff2 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -21,19 +21,17 @@ static const struct rxe_type_info {
                 .name = "rxe-uc",
                 .size = sizeof(struct rxe_ucontext),
                 .elem_offset = offsetof(struct rxe_ucontext, elem),
-                .flags = RXE_POOL_NO_ALLOC,
         },
         [RXE_TYPE_PD] = {
                 .name = "rxe-pd",
                 .size = sizeof(struct rxe_pd),
                 .elem_offset = offsetof(struct rxe_pd, elem),
-                .flags = RXE_POOL_NO_ALLOC,
         },
         [RXE_TYPE_AH] = {
                 .name = "rxe-ah",
                 .size = sizeof(struct rxe_ah),
                 .elem_offset = offsetof(struct rxe_ah, elem),
-                .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
+                .flags = RXE_POOL_INDEX,
                 .min_index = RXE_MIN_AH_INDEX,
                 .max_index = RXE_MAX_AH_INDEX,
         },
@@ -41,7 +39,7 @@ static const struct rxe_type_info {
                 .name = "rxe-srq",
                 .size = sizeof(struct rxe_srq),
                 .elem_offset = offsetof(struct rxe_srq, elem),
-                .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
+                .flags = RXE_POOL_INDEX,
                 .min_index = RXE_MIN_SRQ_INDEX,
                 .max_index = RXE_MAX_SRQ_INDEX,
         },
@@ -50,7 +48,7 @@ static const struct rxe_type_info {
                 .size = sizeof(struct rxe_qp),
                 .elem_offset = offsetof(struct rxe_qp, elem),
                 .cleanup = rxe_qp_cleanup,
-                .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
+                .flags = RXE_POOL_INDEX,
                 .min_index = RXE_MIN_QP_INDEX,
                 .max_index = RXE_MAX_QP_INDEX,
         },
@@ -58,7 +56,6 @@ static const struct rxe_type_info {
                 .name = "rxe-cq",
                 .size = sizeof(struct rxe_cq),
                 .elem_offset = offsetof(struct rxe_cq, elem),
-                .flags = RXE_POOL_NO_ALLOC,
                 .cleanup = rxe_cq_cleanup,
         },
         [RXE_TYPE_MR] = {
@@ -66,7 +63,7 @@ static const struct rxe_type_info {
                 .size = sizeof(struct rxe_mr),
                 .elem_offset = offsetof(struct rxe_mr, elem),
                 .cleanup = rxe_mr_cleanup,
-                .flags = RXE_POOL_INDEX,
+                .flags = RXE_POOL_INDEX | RXE_POOL_ALLOC,
                 .min_index = RXE_MIN_MR_INDEX,
                 .max_index = RXE_MAX_MR_INDEX,
         },
@@ -75,7 +72,7 @@ static const struct rxe_type_info {
                 .size = sizeof(struct rxe_mw),
                 .elem_offset = offsetof(struct rxe_mw, elem),
                 .cleanup = rxe_mw_cleanup,
-                .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
+                .flags = RXE_POOL_INDEX,
                 .min_index = RXE_MIN_MW_INDEX,
                 .max_index = RXE_MAX_MW_INDEX,
         },
@@ -264,6 +261,9 @@ void *rxe_alloc(struct rxe_pool *pool)
         struct rxe_pool_elem *elem;
         void *obj;
 
+        if (WARN_ON(!(pool->flags & RXE_POOL_ALLOC)))
+                return NULL;
+
         if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
                 goto out_cnt;
 
@@ -286,6 +286,9 @@ void *rxe_alloc(struct rxe_pool *pool)
 
 int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 {
+        if (WARN_ON(pool->flags & RXE_POOL_ALLOC))
+                return -EINVAL;
+
         if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
                 goto out_cnt;
 
@@ -310,7 +313,7 @@ void rxe_elem_release(struct kref *kref)
         if (pool->cleanup)
                 pool->cleanup(elem);
 
-        if (!(pool->flags & RXE_POOL_NO_ALLOC)) {
+        if (pool->flags & RXE_POOL_ALLOC) {
                 obj = elem->obj;
                 kfree(obj);
         }
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 8fc95c6b7b9b..44b944c8c360 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -9,7 +9,7 @@
 enum rxe_pool_flags {
         RXE_POOL_INDEX = BIT(1),
-        RXE_POOL_NO_ALLOC = BIT(4),
+        RXE_POOL_ALLOC = BIT(2),
 };
 
 enum rxe_elem_type {
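The effect of the flag reversal on users of the pool API can be
summarized in two calls (illustrative snippet, not from the patch;
rxe, mr and qp are assumed to exist as in the driver):

/* the MR pool carries RXE_POOL_ALLOC: rxe_alloc() kzallocs the object */
struct rxe_mr *mr = rxe_alloc(&rxe->mr_pool);

/* the QP pool does not: the object memory comes from ib_core and is
 * only connected to the pool, so rxe_add_to_pool() must be used
 */
int err = rxe_add_to_pool(&rxe->qp_pool, qp);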
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12768324 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id ABA1AC433FE for ; Fri, 4 Mar 2022 00:08:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233429AbiCDAJd (ORCPT ); Thu, 3 Mar 2022 19:09:33 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59734 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232763AbiCDAJc (ORCPT ); Thu, 3 Mar 2022 19:09:32 -0500 Received: from mail-ot1-x332.google.com (mail-ot1-x332.google.com [IPv6:2607:f8b0:4864:20::332]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3D0EA4739D for ; Thu, 3 Mar 2022 16:08:45 -0800 (PST) Received: by mail-ot1-x332.google.com with SMTP id k22-20020a9d4b96000000b005ad5211bd5aso6043232otf.8 for ; Thu, 03 Mar 2022 16:08:45 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ogoeYaZbfYCtUhjokZrOpx/fxsyeSs3uXBu3OTPLTd0=; b=J+Cg4gFTjxgC2PLDdYtXfuITuAl0UlQWTN09s6ENz9MKMl/X0ZFcWVnJfpLG+udUkJ ODF2krQRarDIJr+LXyTyKYQv1llIEHAr+KMoLLLR8DXg4moRCpFjYdvSGT1E7M8GWl7F AWGOaZ066aGhyBr91HJvW3BmcfQ4RapvS1X6APiJjSg9tv898erFBOAovPXInGjsiBKj EgnNOJXu3ZBVbJsEB7MQDfk0eqKc8f6cScRQ+/Th2H/vWEPSTEEy6Cfws5XgEfBExSMJ fzh2KRwKFeJKbXYX/8PYFWZN4I9Okoh+30NhWCbOBugDgLwXVML4t5SJ3c0rKsm9lx8d KJ2w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ogoeYaZbfYCtUhjokZrOpx/fxsyeSs3uXBu3OTPLTd0=; b=7Mz3opMAtRrma0+GVzcPUHLtuwSM4s/ljBq1M6SoQdZ6+o9zWaP7lbx1hlTY7VNh63 okiozmhPKWfDcLFvT1PTPx5f6ZewoOlYgOnv7USFH4vAVLIoMzCB7J4azwDzI8g71yg0 6OkecONOuHD1l7YUp4HkWUO/Ht4v3x0yVgnHDEGHCIZHAub3ZAR+HxKzHhZrvD66EtEY Wu7tIYVud1Pf7dWAd4Kd8RmNNCie7mAaHN944lsFg6KumPJUF9uX0e8/0NU/Vs1wmCAi 3R0vpVGnPc6XdAp5l+ke+yJ0nDEB6VeLoThhISs7TfyR1EVHVnvRuyR185JWb/qgCt/I b82Q== X-Gm-Message-State: AOAM5322xEghG/k8n43h3ZlC9afjj259YH6hY7kGoVd5jabd1h3X0wlp hyFfM/GrRCzrpqMe37t60sM= X-Google-Smtp-Source: ABdhPJy/OzDJoq9uh25bo3lVKakxsiCT2SDuk34BqljR3OLB7oRuWGYhQrKap+yz3nO3cr6zOBxcpg== X-Received: by 2002:a9d:6e01:0:b0:5af:5d9d:4039 with SMTP id e1-20020a9d6e01000000b005af5d9d4039mr19857342otr.280.1646352524479; Thu, 03 Mar 2022 16:08:44 -0800 (PST) Received: from ubuntu-21.tx.rr.com (2603-8081-140c-1a00-4a4e-f957-5a09-0218.res6.spectrum.com. 
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v11 04/13] RDMA/rxe: Delete _locked() APIs for pool objects
Date: Thu, 3 Mar 2022 18:08:00 -0600
Message-Id: <20220304000808.225811-5-rpearsonhpe@gmail.com>
In-Reply-To: <20220304000808.225811-1-rpearsonhpe@gmail.com>
References: <20220304000808.225811-1-rpearsonhpe@gmail.com>

Since caller-managed locks for indexed objects are no longer used,
these APIs are deleted.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 67 ++++------------------------
 drivers/infiniband/sw/rxe/rxe_pool.h | 24 ++--------
 2 files changed, 12 insertions(+), 79 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 239c24544ff2..2e3543dde000 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -189,17 +189,6 @@ static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_elem *new)
         return 0;
 }
 
-int __rxe_add_index_locked(struct rxe_pool_elem *elem)
-{
-        struct rxe_pool *pool = elem->pool;
-        int err;
-
-        elem->index = alloc_index(pool);
-        err = rxe_insert_index(pool, elem);
-
-        return err;
-}
-
 int __rxe_add_index(struct rxe_pool_elem *elem)
 {
         struct rxe_pool *pool = elem->pool;
@@ -207,55 +196,24 @@ int __rxe_add_index(struct rxe_pool_elem *elem)
         int err;
 
         write_lock_irqsave(&pool->pool_lock, flags);
-        err = __rxe_add_index_locked(elem);
+        elem->index = alloc_index(pool);
+        err = rxe_insert_index(pool, elem);
         write_unlock_irqrestore(&pool->pool_lock, flags);
 
         return err;
 }
 
-void __rxe_drop_index_locked(struct rxe_pool_elem *elem)
-{
-        struct rxe_pool *pool = elem->pool;
-
-        clear_bit(elem->index - pool->index.min_index, pool->index.table);
-        rb_erase(&elem->index_node, &pool->index.tree);
-}
-
 void __rxe_drop_index(struct rxe_pool_elem *elem)
 {
         struct rxe_pool *pool = elem->pool;
         unsigned long flags;
 
         write_lock_irqsave(&pool->pool_lock, flags);
-        __rxe_drop_index_locked(elem);
+        clear_bit(elem->index - pool->index.min_index, pool->index.table);
+        rb_erase(&elem->index_node, &pool->index.tree);
         write_unlock_irqrestore(&pool->pool_lock, flags);
 }
 
-void *rxe_alloc_locked(struct rxe_pool *pool)
-{
-        struct rxe_pool_elem *elem;
-        void *obj;
-
-        if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
-                goto out_cnt;
-
-        obj = kzalloc(pool->elem_size, GFP_ATOMIC);
-        if (!obj)
-                goto out_cnt;
-
-        elem = (struct rxe_pool_elem *)((u8 *)obj + pool->elem_offset);
-
-        elem->pool = pool;
-        elem->obj = obj;
-        kref_init(&elem->ref_cnt);
-
-        return obj;
-
-out_cnt:
-        atomic_dec(&pool->num_elem);
-        return NULL;
-}
-
 void *rxe_alloc(struct rxe_pool *pool)
 {
         struct rxe_pool_elem *elem;
@@ -321,12 +279,14 @@ void rxe_elem_release(struct kref *kref)
         atomic_dec(&pool->num_elem);
 }
 
-void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index)
+void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 {
-        struct rb_node *node;
         struct rxe_pool_elem *elem;
+        struct rb_node *node;
+        unsigned long flags;
         void *obj;
 
+        read_lock_irqsave(&pool->pool_lock, flags);
         node = pool->index.tree.rb_node;
 
         while (node) {
@@ -346,17 +306,6 @@ void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index)
         } else {
                 obj = NULL;
         }
-
-        return obj;
-}
-
-void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
-{
-        unsigned long flags;
-        void *obj;
-
-        read_lock_irqsave(&pool->pool_lock, flags);
-        obj = rxe_pool_get_index_locked(pool, index);
         read_unlock_irqrestore(&pool->pool_lock, flags);
 
         return obj;
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 44b944c8c360..7fec5d96d695 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -68,9 +68,7 @@ int rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool,
 /* free resources from object pool */
 void rxe_pool_cleanup(struct rxe_pool *pool);
 
-/* allocate an object from pool holding and not holding the pool lock */
-void *rxe_alloc_locked(struct rxe_pool *pool);
-
+/* allocate an object from pool */
 void *rxe_alloc(struct rxe_pool *pool);
 
 /* connect already allocated object to pool */
@@ -79,32 +77,18 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem);
 #define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->elem)
 
 /* assign an index to an indexed object and insert object into
- * pool's rb tree holding and not holding the pool_lock
+ * pool's rb tree
  */
-int __rxe_add_index_locked(struct rxe_pool_elem *elem);
-
-#define rxe_add_index_locked(obj) __rxe_add_index_locked(&(obj)->elem)
-
 int __rxe_add_index(struct rxe_pool_elem *elem);
 
 #define rxe_add_index(obj) __rxe_add_index(&(obj)->elem)
 
-/* drop an index and remove object from rb tree
- * holding and not holding the pool_lock
- */
-void __rxe_drop_index_locked(struct rxe_pool_elem *elem);
-
-#define rxe_drop_index_locked(obj) __rxe_drop_index_locked(&(obj)->elem)
-
+/* drop an index and remove object from rb tree */
 void __rxe_drop_index(struct rxe_pool_elem *elem);
 
 #define rxe_drop_index(obj) __rxe_drop_index(&(obj)->elem)
 
-/* lookup an indexed object from index holding and not holding the pool_lock.
- * takes a reference on object
- */
-void *rxe_pool_get_index_locked(struct rxe_pool *pool, u32 index);
-
+/* lookup an indexed object from index. takes a reference on object */
 void *rxe_pool_get_index(struct rxe_pool *pool, u32 index);
 
 /* cleanup an object when all references are dropped */
From patchwork Fri Mar 4 00:08:01 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12768323
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v11 05/13] RDMA/rxe: Replace obj by elem in declaration
Date: Thu, 3 Mar 2022 18:08:01 -0600
Message-Id: <20220304000808.225811-6-rpearsonhpe@gmail.com>
In-Reply-To: <20220304000808.225811-1-rpearsonhpe@gmail.com>
References: <20220304000808.225811-1-rpearsonhpe@gmail.com>

Fix a harmless naming inconsistency by replacing obj with elem in the
cleanup callback declarations. This has no functional effect, but the
old name is confusing.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 2 +-
 drivers/infiniband/sw/rxe/rxe_pool.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 2e3543dde000..3b50fd3d9d70 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -12,7 +12,7 @@ static const struct rxe_type_info {
         const char *name;
         size_t size;
         size_t elem_offset;
-        void (*cleanup)(struct rxe_pool_elem *obj);
+        void (*cleanup)(struct rxe_pool_elem *elem);
         enum rxe_pool_flags flags;
         u32 min_index;
         u32 max_index;
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 7fec5d96d695..a8582ad85b1e 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -39,7 +39,7 @@ struct rxe_pool {
         struct rxe_dev *rxe;
         const char *name;
         rwlock_t pool_lock; /* protects pool add/del/search */
-        void (*cleanup)(struct rxe_pool_elem *obj);
+        void (*cleanup)(struct rxe_pool_elem *elem);
         enum rxe_pool_flags flags;
         enum rxe_elem_type type;
From patchwork Fri Mar 4 00:08:02 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12768325
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v11 06/13] RDMA/rxe: Move max_elem into rxe_type_info
Date: Thu, 3 Mar 2022 18:08:02 -0600
Message-Id: <20220304000808.225811-7-rpearsonhpe@gmail.com>
In-Reply-To: <20220304000808.225811-1-rpearsonhpe@gmail.com>
References: <20220304000808.225811-1-rpearsonhpe@gmail.com>

Move the maximum number of elements from a parameter of rxe_pool_init()
to a member of the rxe_type_info array.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe.c      | 24 ++++++++----------------
 drivers/infiniband/sw/rxe/rxe_pool.c | 14 +++++++++++---
 drivers/infiniband/sw/rxe/rxe_pool.h |  2 +-
 3 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
index fce3994d8f7a..dc1f9dd70966 100644
--- a/drivers/infiniband/sw/rxe/rxe.c
+++ b/drivers/infiniband/sw/rxe/rxe.c
@@ -118,43 +118,35 @@ static int rxe_init_pools(struct rxe_dev *rxe)
 {
         int err;
 
-        err = rxe_pool_init(rxe, &rxe->uc_pool, RXE_TYPE_UC,
-                            rxe->max_ucontext);
+        err = rxe_pool_init(rxe, &rxe->uc_pool, RXE_TYPE_UC);
         if (err)
                 goto err1;
 
-        err = rxe_pool_init(rxe, &rxe->pd_pool, RXE_TYPE_PD,
-                            rxe->attr.max_pd);
+        err = rxe_pool_init(rxe, &rxe->pd_pool, RXE_TYPE_PD);
         if (err)
                 goto err2;
 
-        err = rxe_pool_init(rxe, &rxe->ah_pool, RXE_TYPE_AH,
-                            rxe->attr.max_ah);
+        err = rxe_pool_init(rxe, &rxe->ah_pool, RXE_TYPE_AH);
         if (err)
                 goto err3;
 
-        err = rxe_pool_init(rxe, &rxe->srq_pool, RXE_TYPE_SRQ,
-                            rxe->attr.max_srq);
+        err = rxe_pool_init(rxe, &rxe->srq_pool, RXE_TYPE_SRQ);
         if (err)
                 goto err4;
 
-        err = rxe_pool_init(rxe, &rxe->qp_pool, RXE_TYPE_QP,
-                            rxe->attr.max_qp);
+        err = rxe_pool_init(rxe, &rxe->qp_pool, RXE_TYPE_QP);
         if (err)
                 goto err5;
 
-        err = rxe_pool_init(rxe, &rxe->cq_pool, RXE_TYPE_CQ,
-                            rxe->attr.max_cq);
+        err = rxe_pool_init(rxe, &rxe->cq_pool, RXE_TYPE_CQ);
         if (err)
                 goto err6;
 
-        err = rxe_pool_init(rxe, &rxe->mr_pool, RXE_TYPE_MR,
-                            rxe->attr.max_mr);
+        err = rxe_pool_init(rxe, &rxe->mr_pool, RXE_TYPE_MR);
         if (err)
                 goto err7;
 
-        err = rxe_pool_init(rxe, &rxe->mw_pool, RXE_TYPE_MW,
-                            rxe->attr.max_mw);
+        err = rxe_pool_init(rxe, &rxe->mw_pool, RXE_TYPE_MW);
         if (err)
                 goto err8;
 
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 3b50fd3d9d70..bc3ae64adba8 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -16,16 +16,19 @@ static const struct rxe_type_info {
         enum rxe_pool_flags flags;
         u32 min_index;
         u32 max_index;
+        u32 max_elem;
 } rxe_type_info[RXE_NUM_TYPES] = {
         [RXE_TYPE_UC] = {
                 .name = "rxe-uc",
                 .size = sizeof(struct rxe_ucontext),
                 .elem_offset = offsetof(struct rxe_ucontext, elem),
+                .max_elem = UINT_MAX,
         },
         [RXE_TYPE_PD] = {
                 .name = "rxe-pd",
                 .size = sizeof(struct rxe_pd),
                 .elem_offset = offsetof(struct rxe_pd, elem),
+                .max_elem = UINT_MAX,
         },
         [RXE_TYPE_AH] = {
                 .name = "rxe-ah",
@@ -34,6 +37,7 @@ static const struct rxe_type_info {
                 .flags = RXE_POOL_INDEX,
                 .min_index = RXE_MIN_AH_INDEX,
                 .max_index = RXE_MAX_AH_INDEX,
+                .max_elem = RXE_MAX_AH_INDEX - RXE_MIN_AH_INDEX + 1,
         },
         [RXE_TYPE_SRQ] = {
                 .name = "rxe-srq",
@@ -42,6 +46,7 @@ static const struct rxe_type_info {
                 .flags = RXE_POOL_INDEX,
                 .min_index = RXE_MIN_SRQ_INDEX,
                 .max_index = RXE_MAX_SRQ_INDEX,
+                .max_elem = RXE_MAX_SRQ_INDEX - RXE_MIN_SRQ_INDEX + 1,
         },
         [RXE_TYPE_QP] = {
                 .name = "rxe-qp",
@@ -51,12 +56,14 @@ static const struct rxe_type_info {
                 .flags = RXE_POOL_INDEX,
                 .min_index = RXE_MIN_QP_INDEX,
                 .max_index = RXE_MAX_QP_INDEX,
+                .max_elem = RXE_MAX_QP_INDEX - RXE_MIN_QP_INDEX + 1,
         },
         [RXE_TYPE_CQ] = {
                 .name = "rxe-cq",
                 .size = sizeof(struct rxe_cq),
                 .elem_offset = offsetof(struct rxe_cq, elem),
                 .cleanup = rxe_cq_cleanup,
+                .max_elem = UINT_MAX,
         },
         [RXE_TYPE_MR] = {
                 .name = "rxe-mr",
@@ -66,6 +73,7 @@ static const struct rxe_type_info {
                 .flags = RXE_POOL_INDEX | RXE_POOL_ALLOC,
                 .min_index = RXE_MIN_MR_INDEX,
                 .max_index = RXE_MAX_MR_INDEX,
+                .max_elem = RXE_MAX_MR_INDEX - RXE_MIN_MR_INDEX + 1,
         },
         [RXE_TYPE_MW] = {
                 .name = "rxe-mw",
@@ -75,6 +83,7 @@ static const struct rxe_type_info {
                 .flags = RXE_POOL_INDEX,
                 .min_index = RXE_MIN_MW_INDEX,
                 .max_index = RXE_MAX_MW_INDEX,
+                .max_elem = RXE_MAX_MW_INDEX - RXE_MIN_MW_INDEX + 1,
         },
 };
 
@@ -104,8 +113,7 @@ static int rxe_pool_init_index(struct rxe_pool *pool, u32 max, u32 min)
 int rxe_pool_init(
         struct rxe_dev *rxe,
         struct rxe_pool *pool,
-        enum rxe_elem_type type,
-        unsigned int max_elem)
+        enum rxe_elem_type type)
 {
         const struct rxe_type_info *info = &rxe_type_info[type];
         int err = 0;
@@ -115,7 +123,7 @@ int rxe_pool_init(
         pool->rxe = rxe;
         pool->name = info->name;
         pool->type = type;
-        pool->max_elem = max_elem;
+        pool->max_elem = info->max_elem;
         pool->elem_size = ALIGN(info->size, RXE_POOL_ALIGN);
         pool->elem_offset = info->elem_offset;
         pool->flags = info->flags;
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index a8582ad85b1e..5f34d232d7f4 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -63,7 +63,7 @@ struct rxe_pool {
  * pool elements will be allocated out of a slab cache
  */
 int rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool,
-                  enum rxe_elem_type type, u32 max_elem);
+                  enum rxe_elem_type type);
 
 /* free resources from object pool */
 void rxe_pool_cleanup(struct rxe_pool *pool);
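The change to callers is mechanical, for example (taken from the rxe.c
hunk above):

/* before: the element limit is a caller-supplied parameter */
err = rxe_pool_init(rxe, &rxe->mr_pool, RXE_TYPE_MR, rxe->attr.max_mr);

/* after: the limit comes from rxe_type_info[RXE_TYPE_MR].max_elem */
err = rxe_pool_init(rxe, &rxe->mr_pool, RXE_TYPE_MR);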
7nJ/kuZUmHkgGRvfpubBsMq34oB2VK6y8r/Lx6TpG62Kqk52QVb9++Dmgwf76IRXkSGw BRdbxxiqSpjTZdVFPGy+O8TxiSxHHKe3uVh7UEE8zS6cQQC2cDttse5ILRgkDoxtt1K4 PU7KGO1x4IPVqgijk+K14ZOouz2W/R7rxOsDZSzsF+TFM8iQFvt3Jkdy3+uDrxBC2slC T8HQ== X-Gm-Message-State: AOAM532jZTx62uDkiRmOLmhay/VXUovCs3vHGpKzBppYQkNHQapQ3vFK oZy0Gslh9rDeDJjTZN7N4YM= X-Google-Smtp-Source: ABdhPJwbJ/i5uFHzbS1N3EYndBJFqOaoKuA7DMxTG5YDuFneTzVNbCfJdTMAEhVPqCMMr5dEepn66Q== X-Received: by 2002:a05:6870:45aa:b0:d4:5d51:b0ac with SMTP id y42-20020a05687045aa00b000d45d51b0acmr6102739oao.59.1646352526537; Thu, 03 Mar 2022 16:08:46 -0800 (PST) Received: from ubuntu-21.tx.rr.com (2603-8081-140c-1a00-4a4e-f957-5a09-0218.res6.spectrum.com. [2603:8081:140c:1a00:4a4e:f957:5a09:218]) by smtp.googlemail.com with ESMTPSA id n23-20020a9d7417000000b005afc3371166sm1646469otk.81.2022.03.03.16.08.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 03 Mar 2022 16:08:46 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v11 07/13] RDMA/rxe: Shorten pool names in rxe_pool.c Date: Thu, 3 Mar 2022 18:08:03 -0600 Message-Id: <20220304000808.225811-8-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220304000808.225811-1-rpearsonhpe@gmail.com> References: <20220304000808.225811-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Replace pool names like "rxe-xx" with "xx". Just reduces clutter. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_pool.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index bc3ae64adba8..c50baeb10bd2 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -19,19 +19,19 @@ static const struct rxe_type_info { u32 max_elem; } rxe_type_info[RXE_NUM_TYPES] = { [RXE_TYPE_UC] = { - .name = "rxe-uc", + .name = "uc", .size = sizeof(struct rxe_ucontext), .elem_offset = offsetof(struct rxe_ucontext, elem), .max_elem = UINT_MAX, }, [RXE_TYPE_PD] = { - .name = "rxe-pd", + .name = "pd", .size = sizeof(struct rxe_pd), .elem_offset = offsetof(struct rxe_pd, elem), .max_elem = UINT_MAX, }, [RXE_TYPE_AH] = { - .name = "rxe-ah", + .name = "ah", .size = sizeof(struct rxe_ah), .elem_offset = offsetof(struct rxe_ah, elem), .flags = RXE_POOL_INDEX, @@ -40,7 +40,7 @@ static const struct rxe_type_info { .max_elem = RXE_MAX_AH_INDEX - RXE_MIN_AH_INDEX + 1, }, [RXE_TYPE_SRQ] = { - .name = "rxe-srq", + .name = "srq", .size = sizeof(struct rxe_srq), .elem_offset = offsetof(struct rxe_srq, elem), .flags = RXE_POOL_INDEX, @@ -49,7 +49,7 @@ static const struct rxe_type_info { .max_elem = RXE_MAX_SRQ_INDEX - RXE_MIN_SRQ_INDEX + 1, }, [RXE_TYPE_QP] = { - .name = "rxe-qp", + .name = "qp", .size = sizeof(struct rxe_qp), .elem_offset = offsetof(struct rxe_qp, elem), .cleanup = rxe_qp_cleanup, @@ -59,14 +59,14 @@ static const struct rxe_type_info { .max_elem = RXE_MAX_QP_INDEX - RXE_MIN_QP_INDEX + 1, }, [RXE_TYPE_CQ] = { - .name = "rxe-cq", + .name = "cq", .size = sizeof(struct rxe_cq), .elem_offset = offsetof(struct rxe_cq, elem), .cleanup = rxe_cq_cleanup, .max_elem = UINT_MAX, }, [RXE_TYPE_MR] = { - .name = "rxe-mr", + .name = "mr", .size = sizeof(struct rxe_mr), .elem_offset = offsetof(struct rxe_mr, elem), .cleanup = rxe_mr_cleanup, @@ -76,7 +76,7 @@ static const struct rxe_type_info { .max_elem = RXE_MAX_MR_INDEX - RXE_MIN_MR_INDEX + 
1, }, [RXE_TYPE_MW] = { - .name = "rxe-mw", + .name = "mw", .size = sizeof(struct rxe_mw), .elem_offset = offsetof(struct rxe_mw, elem), .cleanup = rxe_mw_cleanup,

From patchwork Fri Mar 4 00:08:04 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v11 08/13] RDMA/rxe: Replace red-black trees by xarrays
Date: Thu, 3 Mar 2022 18:08:04 -0600
Message-Id: <20220304000808.225811-9-rpearsonhpe@gmail.com>

Currently the rxe driver uses red-black trees to add indices to the rxe object pools. Linux xarrays provide a better way to implement the same functionality for indices. This patch replaces the red-black trees with xarrays for pool objects. Since xarrays already have a spinlock, use it in place of the pool rwlock. Make sure that all changes to the xarray (index) and kref (ref count) occur atomically.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe.c | 80 ++------- drivers/infiniband/sw/rxe/rxe_mr.c | 1 - drivers/infiniband/sw/rxe/rxe_mw.c | 8 - drivers/infiniband/sw/rxe/rxe_pool.c | 236 +++++++++----------------- drivers/infiniband/sw/rxe/rxe_pool.h | 43 ++--- drivers/infiniband/sw/rxe/rxe_verbs.c | 12 -- 6 files changed, 103 insertions(+), 277 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index dc1f9dd70966..2dae7538a2ea 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -114,75 +114,26 @@ static void rxe_init_ports(struct rxe_dev *rxe) } /* init pools of managed objects */ -static int rxe_init_pools(struct rxe_dev *rxe) +static void rxe_init_pools(struct rxe_dev *rxe) { - int err; - - err = rxe_pool_init(rxe, &rxe->uc_pool, RXE_TYPE_UC); - if (err) - goto err1; - - err = rxe_pool_init(rxe, &rxe->pd_pool, RXE_TYPE_PD); - if (err) - goto err2; - - err = rxe_pool_init(rxe, &rxe->ah_pool, RXE_TYPE_AH); - if (err) - goto err3; - - err = rxe_pool_init(rxe, &rxe->srq_pool, RXE_TYPE_SRQ); - if (err) - goto err4; - - err = rxe_pool_init(rxe, &rxe->qp_pool, RXE_TYPE_QP); - if (err) - goto err5; - - err = rxe_pool_init(rxe, &rxe->cq_pool, RXE_TYPE_CQ); - if (err) - goto err6; - - err = rxe_pool_init(rxe, &rxe->mr_pool, RXE_TYPE_MR); - if (err) - goto err7; - - err = rxe_pool_init(rxe, &rxe->mw_pool, RXE_TYPE_MW); - if (err) - goto err8; - - return 0; - -err8: - rxe_pool_cleanup(&rxe->mr_pool); -err7: - rxe_pool_cleanup(&rxe->cq_pool); -err6: - rxe_pool_cleanup(&rxe->qp_pool); -err5: - rxe_pool_cleanup(&rxe->srq_pool); -err4: - rxe_pool_cleanup(&rxe->ah_pool); -err3: - rxe_pool_cleanup(&rxe->pd_pool); -err2: - rxe_pool_cleanup(&rxe->uc_pool); -err1: - return err; + rxe_pool_init(rxe, &rxe->uc_pool, RXE_TYPE_UC); + rxe_pool_init(rxe, &rxe->pd_pool, RXE_TYPE_PD); + rxe_pool_init(rxe, &rxe->ah_pool, RXE_TYPE_AH); + rxe_pool_init(rxe, &rxe->srq_pool, RXE_TYPE_SRQ); + rxe_pool_init(rxe, &rxe->qp_pool, RXE_TYPE_QP); + rxe_pool_init(rxe, &rxe->cq_pool, RXE_TYPE_CQ); + rxe_pool_init(rxe, &rxe->mr_pool, RXE_TYPE_MR); + rxe_pool_init(rxe, &rxe->mw_pool, RXE_TYPE_MW); } /* initialize rxe device state */ -static int rxe_init(struct rxe_dev *rxe) +static void rxe_init(struct rxe_dev *rxe) { - int err; - /* init default device parameters */ rxe_init_device_param(rxe);
rxe_init_ports(rxe); - - err = rxe_init_pools(rxe); - if (err) - return err; + rxe_init_pools(rxe); /* init pending mmap list */ spin_lock_init(&rxe->mmap_offset_lock); @@ -194,8 +145,6 @@ static int rxe_init(struct rxe_dev *rxe) rxe->mcg_tree = RB_ROOT; mutex_init(&rxe->usdev_lock); - - return 0; } void rxe_set_mtu(struct rxe_dev *rxe, unsigned int ndev_mtu) @@ -217,12 +166,7 @@ void rxe_set_mtu(struct rxe_dev *rxe, unsigned int ndev_mtu) */ int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name) { - int err; - - err = rxe_init(rxe); - if (err) - return err; - + rxe_init(rxe); rxe_set_mtu(rxe, mtu); return rxe_register_device(rxe, ibdev_name); diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 453ef3c9d535..35628b8a00b4 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -691,7 +691,6 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) mr->state = RXE_MR_STATE_INVALID; rxe_drop_ref(mr_pd(mr)); - rxe_drop_index(mr); rxe_drop_ref(mr); return 0; diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c index 32dd8c0b8b9e..7df36c40eec2 100644 --- a/drivers/infiniband/sw/rxe/rxe_mw.c +++ b/drivers/infiniband/sw/rxe/rxe_mw.c @@ -20,7 +20,6 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata) return ret; } - rxe_add_index(mw); mw->rkey = ibmw->rkey = (mw->elem.index << 8) | rxe_get_next_key(-1); mw->state = (mw->ibmw.type == IB_MW_TYPE_2) ? RXE_MW_STATE_FREE : RXE_MW_STATE_VALID; @@ -329,10 +328,3 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey) return mw; } - -void rxe_mw_cleanup(struct rxe_pool_elem *elem) -{ - struct rxe_mw *mw = container_of(elem, typeof(*mw), elem); - - rxe_drop_index(mw); -} diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index c50baeb10bd2..b0dfeb08a470 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -22,19 +22,22 @@ static const struct rxe_type_info { .name = "uc", .size = sizeof(struct rxe_ucontext), .elem_offset = offsetof(struct rxe_ucontext, elem), + .min_index = 1, + .max_index = UINT_MAX, .max_elem = UINT_MAX, }, [RXE_TYPE_PD] = { .name = "pd", .size = sizeof(struct rxe_pd), .elem_offset = offsetof(struct rxe_pd, elem), + .min_index = 1, + .max_index = UINT_MAX, .max_elem = UINT_MAX, }, [RXE_TYPE_AH] = { .name = "ah", .size = sizeof(struct rxe_ah), .elem_offset = offsetof(struct rxe_ah, elem), - .flags = RXE_POOL_INDEX, .min_index = RXE_MIN_AH_INDEX, .max_index = RXE_MAX_AH_INDEX, .max_elem = RXE_MAX_AH_INDEX - RXE_MIN_AH_INDEX + 1, @@ -43,7 +46,6 @@ static const struct rxe_type_info { .name = "srq", .size = sizeof(struct rxe_srq), .elem_offset = offsetof(struct rxe_srq, elem), - .flags = RXE_POOL_INDEX, .min_index = RXE_MIN_SRQ_INDEX, .max_index = RXE_MAX_SRQ_INDEX, .max_elem = RXE_MAX_SRQ_INDEX - RXE_MIN_SRQ_INDEX + 1, @@ -53,7 +55,6 @@ static const struct rxe_type_info { .size = sizeof(struct rxe_qp), .elem_offset = offsetof(struct rxe_qp, elem), .cleanup = rxe_qp_cleanup, - .flags = RXE_POOL_INDEX, .min_index = RXE_MIN_QP_INDEX, .max_index = RXE_MAX_QP_INDEX, .max_elem = RXE_MAX_QP_INDEX - RXE_MIN_QP_INDEX + 1, @@ -63,6 +64,8 @@ static const struct rxe_type_info { .size = sizeof(struct rxe_cq), .elem_offset = offsetof(struct rxe_cq, elem), .cleanup = rxe_cq_cleanup, + .min_index = 1, + .max_index = UINT_MAX, .max_elem = UINT_MAX, }, [RXE_TYPE_MR] = { @@ -70,7 +73,7 @@ static const struct rxe_type_info { 
.size = sizeof(struct rxe_mr), .elem_offset = offsetof(struct rxe_mr, elem), .cleanup = rxe_mr_cleanup, - .flags = RXE_POOL_INDEX | RXE_POOL_ALLOC, + .flags = RXE_POOL_ALLOC, .min_index = RXE_MIN_MR_INDEX, .max_index = RXE_MAX_MR_INDEX, .max_elem = RXE_MAX_MR_INDEX - RXE_MIN_MR_INDEX + 1, @@ -79,44 +82,16 @@ static const struct rxe_type_info { .name = "mw", .size = sizeof(struct rxe_mw), .elem_offset = offsetof(struct rxe_mw, elem), - .cleanup = rxe_mw_cleanup, - .flags = RXE_POOL_INDEX, .min_index = RXE_MIN_MW_INDEX, .max_index = RXE_MAX_MW_INDEX, .max_elem = RXE_MAX_MW_INDEX - RXE_MIN_MW_INDEX + 1, }, }; -static int rxe_pool_init_index(struct rxe_pool *pool, u32 max, u32 min) -{ - int err = 0; - - if ((max - min + 1) < pool->max_elem) { - pr_warn("not enough indices for max_elem\n"); - err = -EINVAL; - goto out; - } - - pool->index.max_index = max; - pool->index.min_index = min; - - pool->index.table = bitmap_zalloc(max - min + 1, GFP_KERNEL); - if (!pool->index.table) { - err = -ENOMEM; - goto out; - } - -out: - return err; -} - -int rxe_pool_init( - struct rxe_dev *rxe, - struct rxe_pool *pool, - enum rxe_elem_type type) +void rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, + enum rxe_elem_type type) { const struct rxe_type_info *info = &rxe_type_info[type]; - int err = 0; memset(pool, 0, sizeof(*pool)); @@ -131,111 +106,52 @@ int rxe_pool_init( atomic_set(&pool->num_elem, 0); - rwlock_init(&pool->pool_lock); - - if (pool->flags & RXE_POOL_INDEX) { - pool->index.tree = RB_ROOT; - err = rxe_pool_init_index(pool, info->max_index, - info->min_index); - if (err) - goto out; - } - -out: - return err; + xa_init_flags(&pool->xa, XA_FLAGS_ALLOC); + pool->limit.min = info->min_index; + pool->limit.max = info->max_index; } void rxe_pool_cleanup(struct rxe_pool *pool) { - if (atomic_read(&pool->num_elem) > 0) - pr_warn("%s pool destroyed with unfree'd elem\n", - pool->name); - - if (pool->flags & RXE_POOL_INDEX) - bitmap_free(pool->index.table); -} - -static u32 alloc_index(struct rxe_pool *pool) -{ - u32 index; - u32 range = pool->index.max_index - pool->index.min_index + 1; - - index = find_next_zero_bit(pool->index.table, range, pool->index.last); - if (index >= range) - index = find_first_zero_bit(pool->index.table, range); - - WARN_ON_ONCE(index >= range); - set_bit(index, pool->index.table); - pool->index.last = index; - return index + pool->index.min_index; -} - -static int rxe_insert_index(struct rxe_pool *pool, struct rxe_pool_elem *new) -{ - struct rb_node **link = &pool->index.tree.rb_node; - struct rb_node *parent = NULL; struct rxe_pool_elem *elem; - - while (*link) { - parent = *link; - elem = rb_entry(parent, struct rxe_pool_elem, index_node); - - if (elem->index == new->index) { - pr_warn("element already exists!\n"); - return -EINVAL; + struct xarray *xa = &pool->xa; + unsigned long index = 0; + unsigned long max = ULONG_MAX; + unsigned int elem_count = 0; + unsigned int obj_count = 0; + + do { + elem = xa_find(xa, &index, max, XA_PRESENT); + if (elem) { + elem_count++; + xa_erase(xa, index); + if (pool->flags & RXE_POOL_ALLOC) { + kfree(elem->obj); + obj_count++; + } } + } while (elem); - if (elem->index > new->index) - link = &(*link)->rb_left; - else - link = &(*link)->rb_right; - } - - rb_link_node(&new->index_node, parent, link); - rb_insert_color(&new->index_node, &pool->index.tree); - - return 0; -} - -int __rxe_add_index(struct rxe_pool_elem *elem) -{ - struct rxe_pool *pool = elem->pool; - unsigned long flags; - int err; - - write_lock_irqsave(&pool->pool_lock, 
flags); - elem->index = alloc_index(pool); - err = rxe_insert_index(pool, elem); - write_unlock_irqrestore(&pool->pool_lock, flags); - - return err; -} - -void __rxe_drop_index(struct rxe_pool_elem *elem) -{ - struct rxe_pool *pool = elem->pool; - unsigned long flags; - - write_lock_irqsave(&pool->pool_lock, flags); - clear_bit(elem->index - pool->index.min_index, pool->index.table); - rb_erase(&elem->index_node, &pool->index.tree); - write_unlock_irqrestore(&pool->pool_lock, flags); + if (WARN_ON(elem_count || obj_count)) + pr_debug("Freed %d indices and %d objects from pool %s\n", + elem_count, obj_count, pool->name); } void *rxe_alloc(struct rxe_pool *pool) { struct rxe_pool_elem *elem; void *obj; + int err; if (WARN_ON(!(pool->flags & RXE_POOL_ALLOC))) return NULL; if (atomic_inc_return(&pool->num_elem) > pool->max_elem) - goto out_cnt; + goto err_cnt; obj = kzalloc(pool->elem_size, GFP_KERNEL); if (!obj) - goto out_cnt; + goto err_cnt; elem = (struct rxe_pool_elem *)((u8 *)obj + pool->elem_offset); @@ -243,78 +159,86 @@ void *rxe_alloc(struct rxe_pool *pool) elem->obj = obj; kref_init(&elem->ref_cnt); + err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit, + &pool->next, GFP_KERNEL); + if (err) + goto err_free; + return obj; -out_cnt: +err_free: + kfree(obj); +err_cnt: atomic_dec(&pool->num_elem); return NULL; } int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) { + int err; + if (WARN_ON(pool->flags & RXE_POOL_ALLOC)) return -EINVAL; if (atomic_inc_return(&pool->num_elem) > pool->max_elem) - goto out_cnt; + goto err_cnt; elem->pool = pool; elem->obj = (u8 *)elem - pool->elem_offset; kref_init(&elem->ref_cnt); + err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit, + &pool->next, GFP_KERNEL); + if (err) + goto err_cnt; + return 0; -out_cnt: +err_cnt: atomic_dec(&pool->num_elem); return -EINVAL; } -void rxe_elem_release(struct kref *kref) +void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) { - struct rxe_pool_elem *elem = - container_of(kref, struct rxe_pool_elem, ref_cnt); - struct rxe_pool *pool = elem->pool; + struct rxe_pool_elem *elem; + struct xarray *xa = &pool->xa; + unsigned long flags; void *obj; - if (pool->cleanup) - pool->cleanup(elem); - - if (pool->flags & RXE_POOL_ALLOC) { + xa_lock_irqsave(xa, flags); + elem = xa_load(xa, index); + if (elem && kref_get_unless_zero(&elem->ref_cnt)) obj = elem->obj; - kfree(obj); - } + else + obj = NULL; + xa_unlock_irqrestore(xa, flags); - atomic_dec(&pool->num_elem); + return obj; } -void *rxe_pool_get_index(struct rxe_pool *pool, u32 index) +static void rxe_elem_release(struct kref *kref) { - struct rxe_pool_elem *elem; - struct rb_node *node; - unsigned long flags; - void *obj; + struct rxe_pool_elem *elem = container_of(kref, typeof(*elem), ref_cnt); + struct rxe_pool *pool = elem->pool; - read_lock_irqsave(&pool->pool_lock, flags); - node = pool->index.tree.rb_node; + xa_erase(&pool->xa, elem->index); - while (node) { - elem = rb_entry(node, struct rxe_pool_elem, index_node); + if (pool->cleanup) + pool->cleanup(elem); - if (elem->index > index) - node = node->rb_left; - else if (elem->index < index) - node = node->rb_right; - else - break; - } + if (pool->flags & RXE_POOL_ALLOC) + kfree(elem->obj); - if (node) { - kref_get(&elem->ref_cnt); - obj = elem->obj; - } else { - obj = NULL; - } - read_unlock_irqrestore(&pool->pool_lock, flags); + atomic_dec(&pool->num_elem); +} - return obj; +int __rxe_get(struct rxe_pool_elem *elem) +{ + return kref_get_unless_zero(&elem->ref_cnt); +} 
+ +int __rxe_put(struct rxe_pool_elem *elem) +{ + return kref_put(&elem->ref_cnt, rxe_elem_release); } diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index 5f34d232d7f4..d1e05d384b2c 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -8,8 +8,7 @@ #define RXE_POOL_H enum rxe_pool_flags { - RXE_POOL_INDEX = BIT(1), - RXE_POOL_ALLOC = BIT(2), + RXE_POOL_ALLOC = BIT(1), }; enum rxe_elem_type { @@ -29,16 +28,12 @@ struct rxe_pool_elem { void *obj; struct kref ref_cnt; struct list_head list; - - /* only used if indexed */ - struct rb_node index_node; u32 index; }; struct rxe_pool { struct rxe_dev *rxe; const char *name; - rwlock_t pool_lock; /* protects pool add/del/search */ void (*cleanup)(struct rxe_pool_elem *elem); enum rxe_pool_flags flags; enum rxe_elem_type type; @@ -48,21 +43,16 @@ struct rxe_pool { size_t elem_size; size_t elem_offset; - /* only used if indexed */ - struct { - struct rb_root tree; - unsigned long *table; - u32 last; - u32 max_index; - u32 min_index; - } index; + struct xarray xa; + struct xa_limit limit; + u32 next; }; /* initialize a pool of objects with given limit on * number of elements. gets parameters from rxe_type_info * pool elements will be allocated out of a slab cache */ -int rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, +void rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool, enum rxe_elem_type type); /* free resources from object pool */ @@ -76,29 +66,18 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem); #define rxe_add_to_pool(pool, obj) __rxe_add_to_pool(pool, &(obj)->elem) -/* assign an index to an indexed object and insert object into - * pool's rb tree - */ -int __rxe_add_index(struct rxe_pool_elem *elem); - -#define rxe_add_index(obj) __rxe_add_index(&(obj)->elem) - -/* drop an index and remove object from rb tree */ -void __rxe_drop_index(struct rxe_pool_elem *elem); - -#define rxe_drop_index(obj) __rxe_drop_index(&(obj)->elem) - /* lookup an indexed object from index. 
takes a reference on object */ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index); -/* cleanup an object when all references are dropped */ -void rxe_elem_release(struct kref *kref); - /* take a reference on an object */ -#define rxe_add_ref(obj) kref_get(&(obj)->elem.ref_cnt) +int __rxe_get(struct rxe_pool_elem *elem); + +#define rxe_add_ref(obj) __rxe_get(&(obj)->elem) /* drop a reference on an object */ -#define rxe_drop_ref(obj) kref_put(&(obj)->elem.ref_cnt, rxe_elem_release) +int __rxe_put(struct rxe_pool_elem *elem); + +#define rxe_drop_ref(obj) __rxe_put(&(obj)->elem) #define rxe_read_ref(obj) kref_read(&(obj)->elem.ref_cnt) diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 80df9a8f71a1..f0c5715ac500 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -181,7 +181,6 @@ static int rxe_create_ah(struct ib_ah *ibah, return err; /* create index > 0 */ - rxe_add_index(ah); ah->ah_num = ah->elem.index; if (uresp) { @@ -189,7 +188,6 @@ static int rxe_create_ah(struct ib_ah *ibah, err = copy_to_user(&uresp->ah_num, &ah->ah_num, sizeof(uresp->ah_num)); if (err) { - rxe_drop_index(ah); rxe_drop_ref(ah); return -EFAULT; } @@ -230,7 +228,6 @@ static int rxe_destroy_ah(struct ib_ah *ibah, u32 flags) { struct rxe_ah *ah = to_rah(ibah); - rxe_drop_index(ah); rxe_drop_ref(ah); return 0; } @@ -438,7 +435,6 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, if (err) return err; - rxe_add_index(qp); err = rxe_qp_from_init(rxe, qp, pd, init, uresp, ibqp->pd, udata); if (err) goto qp_init; @@ -446,7 +442,6 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, return 0; qp_init: - rxe_drop_index(qp); rxe_drop_ref(qp); return err; } @@ -501,7 +496,6 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata) return ret; rxe_qp_destroy(qp); - rxe_drop_index(qp); rxe_drop_ref(qp); return 0; } @@ -908,7 +902,6 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access) if (!mr) return ERR_PTR(-ENOMEM); - rxe_add_index(mr); rxe_add_ref(pd); rxe_mr_init_dma(pd, access, mr); @@ -932,7 +925,6 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, goto err2; } - rxe_add_index(mr); rxe_add_ref(pd); @@ -944,7 +936,6 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, err3: rxe_drop_ref(pd); - rxe_drop_index(mr); rxe_drop_ref(mr); err2: return ERR_PTR(err); @@ -967,8 +958,6 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type, goto err1; } - rxe_add_index(mr); - rxe_add_ref(pd); err = rxe_mr_init_fast(pd, max_num_sg, mr); if (err) goto err2; @@ -979,7 +968,6 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type, err2: rxe_drop_ref(pd); - rxe_drop_index(mr); rxe_drop_ref(mr); err1: return ERR_PTR(err);
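For readers new to xarrays, the pattern this patch converts every pool to can be reduced to a short, self-contained sketch. The demo_* names below are invented for illustration and are not part of the driver; only the xarray and kref calls mirror what the patch does.

/*
 * Minimal sketch of the xarray + kref pool pattern (illustrative only).
 */
#include <linux/xarray.h>
#include <linux/kref.h>

struct demo_elem {
	struct kref ref;
	u32 index;
};

static DEFINE_XARRAY_ALLOC(demo_xa);	/* xarray with index allocation */
static u32 demo_next;			/* cursor for cyclic allocation */

static int demo_add(struct demo_elem *elem)
{
	kref_init(&elem->ref);
	/* allocate a free index in [1, 127] and publish the element */
	return xa_alloc_cyclic(&demo_xa, &elem->index, elem,
			       XA_LIMIT(1, 127), &demo_next, GFP_KERNEL);
}

static struct demo_elem *demo_lookup(u32 index)
{
	struct demo_elem *elem;
	unsigned long flags;

	/*
	 * The xarray spinlock replaces the old pool rwlock: taking the
	 * reference under it makes the lookup atomic with respect to the
	 * final kref_put() that erases the index.
	 */
	xa_lock_irqsave(&demo_xa, flags);
	elem = xa_load(&demo_xa, index);
	if (elem && !kref_get_unless_zero(&elem->ref))
		elem = NULL;
	xa_unlock_irqrestore(&demo_xa, flags);

	return elem;
}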
"EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236593AbiCDAJg (ORCPT ); Thu, 3 Mar 2022 19:09:36 -0500 Received: from mail-oo1-xc2f.google.com (mail-oo1-xc2f.google.com [IPv6:2607:f8b0:4864:20::c2f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B12234BBBF for ; Thu, 3 Mar 2022 16:08:48 -0800 (PST) Received: by mail-oo1-xc2f.google.com with SMTP id d134-20020a4a528c000000b00319244f4b04so7659599oob.8 for ; Thu, 03 Mar 2022 16:08:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=o2aUG4D0hPr8V3HpNmOrQ5G5I7N8SaRoZffpas4ceic=; b=hyjAWD8dxmYUZORMpxj7/N16Czcx4nWM7z8h3h29VHhwTv41YXNPUVuFMLGqticwWE FlL7Ui7rtBukeju1bE3o7Pvqt2b7DyL2rMxuLDkILcKZeozY26ovComZct80yxb4DYEA pLDeEg4mkdM9UXqG6dk9M1Ay+inW+2SdsAH7gt3tmZSRuMV6cDHvKlgJHPadK8+OFX1q 1jXQ3/KMDTdizWgvTkg1p6WJYqDkA7Dtgs/pA/+H7gp5nxwfn/pXlEQiVA8FW/tvmCN7 S0V2lTHWzz/LuW9enk0qDI9zxepCPtsLrAITXfKCAjUFTv2jIfUYSLDhLMU70+z/kjPJ S/Cg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=o2aUG4D0hPr8V3HpNmOrQ5G5I7N8SaRoZffpas4ceic=; b=W7En97+aDkV+xSKIdzAaNAplOsn4G6JaiWMGr0ygPPtoHmr63qfxXD4wgAlAZfQIso 6o3dkRpRNNnA4u8zEmElJWSBJI6/3SFapr1KHGCj7nFIPhkN6X7pst+g4omKCJONljk9 J/U6pa1EhyyWxC4xRyFZf27IJgqaYo00YQlPMgr8/UUnjMPIVpko0sH5MURy9RdYxD5o zVMYn8oionr/NHR0JJ/RA6Y2Ec0/mQCcLLkyFOHFYzBsM8ENxQEh7OJgp8PXZpYPx3fJ TEhk0Z3ptA3EoFHobl9vaLMqA9Q2YyEz/fKm0Q84MeZiNQBQ1jxscwSh2B0dgB3CmXon X/NA== X-Gm-Message-State: AOAM531/S03V14Q/eMctlxfuIxSuHVdNNdCbpcGZ6vX5PDM7jM3FHQ5L FdSZ+/ilmehmawO24tExasgkxAHJVUE= X-Google-Smtp-Source: ABdhPJwbH3L1y+3xy0vqYRLQrE657cLlCl6qORNCPzJDW0UTCmHXhf6fOfvh45xydo/lJ7xa9sHWYA== X-Received: by 2002:a05:6870:1147:b0:ce:c0c9:5ed with SMTP id 7-20020a056870114700b000cec0c905edmr6060614oag.63.1646352527964; Thu, 03 Mar 2022 16:08:47 -0800 (PST) Received: from ubuntu-21.tx.rr.com (2603-8081-140c-1a00-4a4e-f957-5a09-0218.res6.spectrum.com. [2603:8081:140c:1a00:4a4e:f957:5a09:218]) by smtp.googlemail.com with ESMTPSA id n23-20020a9d7417000000b005afc3371166sm1646469otk.81.2022.03.03.16.08.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 03 Mar 2022 16:08:47 -0800 (PST) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v11 09/13] RDMA/rxe: Use standard names for ref counting Date: Thu, 3 Mar 2022 18:08:05 -0600 Message-Id: <20220304000808.225811-10-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220304000808.225811-1-rpearsonhpe@gmail.com> References: <20220304000808.225811-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Rename rxe_add_ref() to rxe_get() and rxe_drop_ref() to rxe_put(). Significantly improves readability for new readers. 
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_av.c | 4 +-- drivers/infiniband/sw/rxe/rxe_comp.c | 8 +++--- drivers/infiniband/sw/rxe/rxe_mcast.c | 4 +-- drivers/infiniband/sw/rxe/rxe_mr.c | 14 +++++----- drivers/infiniband/sw/rxe/rxe_mw.c | 30 ++++++++++----------- drivers/infiniband/sw/rxe/rxe_net.c | 6 ++--- drivers/infiniband/sw/rxe/rxe_pool.h | 8 +++--- drivers/infiniband/sw/rxe/rxe_qp.c | 28 ++++++++++---------- drivers/infiniband/sw/rxe/rxe_recv.c | 8 +++--- drivers/infiniband/sw/rxe/rxe_req.c | 10 +++---- drivers/infiniband/sw/rxe/rxe_resp.c | 32 +++++++++++----------- drivers/infiniband/sw/rxe/rxe_verbs.c | 38 +++++++++++++-------------- 12 files changed, 94 insertions(+), 96 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_av.c b/drivers/infiniband/sw/rxe/rxe_av.c index 360a567159fe..3b05314ca739 100644 --- a/drivers/infiniband/sw/rxe/rxe_av.c +++ b/drivers/infiniband/sw/rxe/rxe_av.c @@ -127,14 +127,14 @@ struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt, struct rxe_ah **ahp) if (rxe_ah_pd(ah) != pkt->qp->pd) { pr_warn("PDs don't match for AH and QP\n"); - rxe_drop_ref(ah); + rxe_put(ah); return NULL; } if (ahp) *ahp = ah; else - rxe_drop_ref(ah); + rxe_put(ah); return &ah->av; } diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c index f363fe3fa414..138b3e7d3a5f 100644 --- a/drivers/infiniband/sw/rxe/rxe_comp.c +++ b/drivers/infiniband/sw/rxe/rxe_comp.c @@ -526,7 +526,7 @@ static void rxe_drain_resp_pkts(struct rxe_qp *qp, bool notify) struct rxe_queue *q = qp->sq.queue; while ((skb = skb_dequeue(&qp->resp_pkts))) { - rxe_drop_ref(qp); + rxe_put(qp); kfree_skb(skb); ib_device_put(qp->ibqp.device); } @@ -548,7 +548,7 @@ static void free_pkt(struct rxe_pkt_info *pkt) struct ib_device *dev = qp->ibqp.device; kfree_skb(skb); - rxe_drop_ref(qp); + rxe_put(qp); ib_device_put(dev); } @@ -562,7 +562,7 @@ int rxe_completer(void *arg) enum comp_state state; int ret = 0; - rxe_add_ref(qp); + rxe_get(qp); if (!qp->valid || qp->req.state == QP_STATE_ERROR || qp->req.state == QP_STATE_RESET) { @@ -761,7 +761,7 @@ int rxe_completer(void *arg) done: if (pkt) free_pkt(pkt); - rxe_drop_ref(qp); + rxe_put(qp); return ret; } diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c index c399a29b648b..ae8f11cb704a 100644 --- a/drivers/infiniband/sw/rxe/rxe_mcast.c +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c @@ -319,7 +319,7 @@ static int __rxe_init_mca(struct rxe_qp *qp, struct rxe_mcg *mcg, atomic_inc(&qp->mcg_num); - rxe_add_ref(qp); + rxe_get(qp); mca->qp = qp; list_add_tail(&mca->qp_list, &mcg->qp_list); @@ -389,7 +389,7 @@ static void __rxe_cleanup_mca(struct rxe_mca *mca, struct rxe_mcg *mcg) atomic_dec(&mcg->qp_num); atomic_dec(&mcg->rxe->mcg_attach); atomic_dec(&mca->qp->mcg_num); - rxe_drop_ref(mca->qp); + rxe_put(mca->qp); kfree(mca); } diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 35628b8a00b4..60a31b718774 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -459,7 +459,7 @@ int copy_data( if (offset >= sge->length) { if (mr) { - rxe_drop_ref(mr); + rxe_put(mr); mr = NULL; } sge++; @@ -504,13 +504,13 @@ int copy_data( dma->resid = resid; if (mr) - rxe_drop_ref(mr); + rxe_put(mr); return 0; err2: if (mr) - rxe_drop_ref(mr); + rxe_put(mr); err1: return err; } @@ -569,7 +569,7 @@ struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key, (type == RXE_LOOKUP_REMOTE && mr->rkey != key) || mr_pd(mr) != pd || 
(access && !(access & mr->access)) || mr->state != RXE_MR_STATE_VALID)) { - rxe_drop_ref(mr); + rxe_put(mr); mr = NULL; } @@ -613,7 +613,7 @@ int rxe_invalidate_mr(struct rxe_qp *qp, u32 rkey) ret = 0; err_drop_ref: - rxe_drop_ref(mr); + rxe_put(mr); err: return ret; } @@ -690,8 +690,8 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) } mr->state = RXE_MR_STATE_INVALID; - rxe_drop_ref(mr_pd(mr)); - rxe_drop_ref(mr); + rxe_put(mr_pd(mr)); + rxe_put(mr); return 0; } diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c index 7df36c40eec2..c86b2efd58f2 100644 --- a/drivers/infiniband/sw/rxe/rxe_mw.c +++ b/drivers/infiniband/sw/rxe/rxe_mw.c @@ -12,11 +12,11 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata) struct rxe_dev *rxe = to_rdev(ibmw->device); int ret; - rxe_add_ref(pd); + rxe_get(pd); ret = rxe_add_to_pool(&rxe->mw_pool, mw); if (ret) { - rxe_drop_ref(pd); + rxe_put(pd); return ret; } @@ -35,14 +35,14 @@ static void rxe_do_dealloc_mw(struct rxe_mw *mw) mw->mr = NULL; atomic_dec(&mr->num_mw); - rxe_drop_ref(mr); + rxe_put(mr); } if (mw->qp) { struct rxe_qp *qp = mw->qp; mw->qp = NULL; - rxe_drop_ref(qp); + rxe_put(qp); } mw->access = 0; @@ -60,8 +60,8 @@ int rxe_dealloc_mw(struct ib_mw *ibmw) rxe_do_dealloc_mw(mw); spin_unlock_bh(&mw->lock); - rxe_drop_ref(mw); - rxe_drop_ref(pd); + rxe_put(mw); + rxe_put(pd); return 0; } @@ -170,7 +170,7 @@ static void rxe_do_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe, mw->length = wqe->wr.wr.mw.length; if (mw->mr) { - rxe_drop_ref(mw->mr); + rxe_put(mw->mr); atomic_dec(&mw->mr->num_mw); mw->mr = NULL; } @@ -178,11 +178,11 @@ static void rxe_do_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe, if (mw->length) { mw->mr = mr; atomic_inc(&mr->num_mw); - rxe_add_ref(mr); + rxe_get(mr); } if (mw->ibmw.type == IB_MW_TYPE_2) { - rxe_add_ref(qp); + rxe_get(qp); mw->qp = qp; } } @@ -233,9 +233,9 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe) spin_unlock_bh(&mw->lock); err_drop_mr: if (mr) - rxe_drop_ref(mr); + rxe_put(mr); err_drop_mw: - rxe_drop_ref(mw); + rxe_put(mw); err: return ret; } @@ -260,13 +260,13 @@ static void rxe_do_invalidate_mw(struct rxe_mw *mw) /* valid type 2 MW will always have a QP pointer */ qp = mw->qp; mw->qp = NULL; - rxe_drop_ref(qp); + rxe_put(qp); /* valid type 2 MW will always have an MR pointer */ mr = mw->mr; mw->mr = NULL; atomic_dec(&mr->num_mw); - rxe_drop_ref(mr); + rxe_put(mr); mw->access = 0; mw->addr = 0; @@ -301,7 +301,7 @@ int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey) err_unlock: spin_unlock_bh(&mw->lock); err_drop_ref: - rxe_drop_ref(mw); + rxe_put(mw); err: return ret; } @@ -322,7 +322,7 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey) (mw->length == 0) || (access && !(access & mw->access)) || mw->state != RXE_MW_STATE_VALID)) { - rxe_drop_ref(mw); + rxe_put(mw); return NULL; } diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c index b06f22ffc5a8..c53f4529f098 100644 --- a/drivers/infiniband/sw/rxe/rxe_net.c +++ b/drivers/infiniband/sw/rxe/rxe_net.c @@ -348,7 +348,7 @@ static void rxe_skb_tx_dtor(struct sk_buff *skb) skb_out < RXE_INFLIGHT_SKBS_PER_QP_LOW)) rxe_run_task(&qp->req.task, 1); - rxe_drop_ref(qp); + rxe_put(qp); } static int rxe_send(struct sk_buff *skb, struct rxe_pkt_info *pkt) @@ -358,7 +358,7 @@ static int rxe_send(struct sk_buff *skb, struct rxe_pkt_info *pkt) skb->destructor = rxe_skb_tx_dtor; skb->sk = pkt->qp->sk->sk; - rxe_add_ref(pkt->qp); + 
rxe_get(pkt->qp); atomic_inc(&pkt->qp->skb_out); if (skb->protocol == htons(ETH_P_IP)) { @@ -368,7 +368,7 @@ static int rxe_send(struct sk_buff *skb, struct rxe_pkt_info *pkt) } else { pr_err("Unknown layer 3 protocol: %d\n", skb->protocol); atomic_dec(&pkt->qp->skb_out); - rxe_drop_ref(pkt->qp); + rxe_put(pkt->qp); kfree_skb(skb); return -EINVAL; } diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index d1e05d384b2c..24bcc786c1b3 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -69,16 +69,14 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem); /* lookup an indexed object from index. takes a reference on object */ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index); -/* take a reference on an object */ int __rxe_get(struct rxe_pool_elem *elem); -#define rxe_add_ref(obj) __rxe_get(&(obj)->elem) +#define rxe_get(obj) __rxe_get(&(obj)->elem) -/* drop a reference on an object */ int __rxe_put(struct rxe_pool_elem *elem); -#define rxe_drop_ref(obj) __rxe_put(&(obj)->elem) +#define rxe_put(obj) __rxe_put(&(obj)->elem) -#define rxe_read_ref(obj) kref_read(&(obj)->elem.ref_cnt) +#define rxe_read(obj) kref_read(&(obj)->elem.ref_cnt) #endif /* RXE_POOL_H */ diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c index 26d461a8d71c..62acf890af6c 100644 --- a/drivers/infiniband/sw/rxe/rxe_qp.c +++ b/drivers/infiniband/sw/rxe/rxe_qp.c @@ -323,11 +323,11 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd, struct rxe_cq *scq = to_rcq(init->send_cq); struct rxe_srq *srq = init->srq ? to_rsrq(init->srq) : NULL; - rxe_add_ref(pd); - rxe_add_ref(rcq); - rxe_add_ref(scq); + rxe_get(pd); + rxe_get(rcq); + rxe_get(scq); if (srq) - rxe_add_ref(srq); + rxe_get(srq); qp->pd = pd; qp->rcq = rcq; @@ -359,10 +359,10 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd, qp->srq = NULL; if (srq) - rxe_drop_ref(srq); - rxe_drop_ref(scq); - rxe_drop_ref(rcq); - rxe_drop_ref(pd); + rxe_put(srq); + rxe_put(scq); + rxe_put(rcq); + rxe_put(pd); return err; } @@ -521,7 +521,7 @@ static void rxe_qp_reset(struct rxe_qp *qp) qp->resp.sent_psn_nak = 0; if (qp->resp.mr) { - rxe_drop_ref(qp->resp.mr); + rxe_put(qp->resp.mr); qp->resp.mr = NULL; } @@ -809,20 +809,20 @@ static void rxe_qp_do_cleanup(struct work_struct *work) rxe_queue_cleanup(qp->sq.queue); if (qp->srq) - rxe_drop_ref(qp->srq); + rxe_put(qp->srq); if (qp->rq.queue) rxe_queue_cleanup(qp->rq.queue); if (qp->scq) - rxe_drop_ref(qp->scq); + rxe_put(qp->scq); if (qp->rcq) - rxe_drop_ref(qp->rcq); + rxe_put(qp->rcq); if (qp->pd) - rxe_drop_ref(qp->pd); + rxe_put(qp->pd); if (qp->resp.mr) - rxe_drop_ref(qp->resp.mr); + rxe_put(qp->resp.mr); if (qp_type(qp) == IB_QPT_RC) sk_dst_reset(qp->sk->sk); diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index 53924453abef..d09a8b68c962 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -217,7 +217,7 @@ static int hdr_check(struct rxe_pkt_info *pkt) return 0; err2: - rxe_drop_ref(qp); + rxe_put(qp); err1: return -EINVAL; } @@ -288,11 +288,11 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb) cpkt = SKB_TO_PKT(cskb); cpkt->qp = qp; - rxe_add_ref(qp); + rxe_get(qp); rxe_rcv_pkt(cpkt, cskb); } else { pkt->qp = qp; - rxe_add_ref(qp); + rxe_get(qp); rxe_rcv_pkt(pkt, skb); skb = NULL; /* mark consumed */ } @@ -397,7 +397,7 @@ void 
rxe_rcv(struct sk_buff *skb) drop: if (pkt->qp) - rxe_drop_ref(pkt->qp); + rxe_put(pkt->qp); kfree_skb(skb); ib_device_put(&rxe->ib_dev); diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index f44535f82bea..2bde9e767dc7 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -611,7 +611,7 @@ int rxe_requester(void *arg) struct rxe_ah *ah; struct rxe_av *av; - rxe_add_ref(qp); + rxe_get(qp); next_wqe: if (unlikely(!qp->valid || qp->req.state == QP_STATE_ERROR)) @@ -690,7 +690,7 @@ int rxe_requester(void *arg) wqe->state = wqe_state_done; wqe->status = IB_WC_SUCCESS; __rxe_do_task(&qp->comp.task); - rxe_drop_ref(qp); + rxe_put(qp); return 0; } payload = mtu; @@ -729,7 +729,7 @@ int rxe_requester(void *arg) } if (ah) - rxe_drop_ref(ah); + rxe_put(ah); /* * To prevent a race on wqe access between requester and completer, @@ -761,12 +761,12 @@ int rxe_requester(void *arg) err_drop_ah: if (ah) - rxe_drop_ref(ah); + rxe_put(ah); err: wqe->state = wqe_state_error; __rxe_do_task(&qp->comp.task); exit: - rxe_drop_ref(qp); + rxe_put(qp); return -EAGAIN; } diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index b1ec003f0bb8..16fc7ea1298d 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -99,7 +99,7 @@ static inline enum resp_states get_req(struct rxe_qp *qp, if (qp->resp.state == QP_STATE_ERROR) { while ((skb = skb_dequeue(&qp->req_pkts))) { - rxe_drop_ref(qp); + rxe_put(qp); kfree_skb(skb); ib_device_put(qp->ibqp.device); } @@ -464,8 +464,8 @@ static enum resp_states check_rkey(struct rxe_qp *qp, if (mw->access & IB_ZERO_BASED) qp->resp.offset = mw->addr; - rxe_drop_ref(mw); - rxe_add_ref(mr); + rxe_put(mw); + rxe_get(mr); } else { mr = lookup_mr(qp->pd, access, rkey, RXE_LOOKUP_REMOTE); if (!mr) { @@ -508,9 +508,9 @@ static enum resp_states check_rkey(struct rxe_qp *qp, err: if (mr) - rxe_drop_ref(mr); + rxe_put(mr); if (mw) - rxe_drop_ref(mw); + rxe_put(mw); return state; } @@ -694,12 +694,12 @@ static struct rxe_mr *rxe_recheck_mr(struct rxe_qp *qp, u32 rkey) return NULL; if (mw->state != RXE_MW_STATE_VALID) { - rxe_drop_ref(mw); + rxe_put(mw); return NULL; } mr = mw->mr; - rxe_drop_ref(mw); + rxe_put(mw); } else { mr = rxe_pool_get_index(&rxe->mr_pool, rkey >> 8); if (!mr || mr->rkey != rkey) @@ -707,7 +707,7 @@ static struct rxe_mr *rxe_recheck_mr(struct rxe_qp *qp, u32 rkey) } if (mr->state != RXE_MR_STATE_VALID) { - rxe_drop_ref(mr); + rxe_put(mr); return NULL; } @@ -768,7 +768,7 @@ static enum resp_states read_reply(struct rxe_qp *qp, if (err) pr_err("Failed copying memory\n"); if (mr) - rxe_drop_ref(mr); + rxe_put(mr); if (bth_pad(&ack_pkt)) { u8 *pad = payload_addr(&ack_pkt) + payload; @@ -1037,7 +1037,7 @@ static int send_atomic_ack(struct rxe_qp *qp, struct rxe_pkt_info *pkt, rc = rxe_xmit_packet(qp, &ack_pkt, skb); if (rc) { pr_err_ratelimited("Failed sending ack\n"); - rxe_drop_ref(qp); + rxe_put(qp); } out: return rc; @@ -1066,13 +1066,13 @@ static enum resp_states cleanup(struct rxe_qp *qp, if (pkt) { skb = skb_dequeue(&qp->req_pkts); - rxe_drop_ref(qp); + rxe_put(qp); kfree_skb(skb); ib_device_put(qp->ibqp.device); } if (qp->resp.mr) { - rxe_drop_ref(qp->resp.mr); + rxe_put(qp->resp.mr); qp->resp.mr = NULL; } @@ -1216,7 +1216,7 @@ static enum resp_states do_class_d1e_error(struct rxe_qp *qp) } if (qp->resp.mr) { - rxe_drop_ref(qp->resp.mr); + rxe_put(qp->resp.mr); qp->resp.mr = NULL; } @@ -1230,7 +1230,7 @@ static void 
rxe_drain_req_pkts(struct rxe_qp *qp, bool notify) struct rxe_queue *q = qp->rq.queue; while ((skb = skb_dequeue(&qp->req_pkts))) { - rxe_drop_ref(qp); + rxe_put(qp); kfree_skb(skb); ib_device_put(qp->ibqp.device); } @@ -1250,7 +1250,7 @@ int rxe_responder(void *arg) struct rxe_pkt_info *pkt = NULL; int ret = 0; - rxe_add_ref(qp); + rxe_get(qp); qp->resp.aeth_syndrome = AETH_ACK_UNLIMITED; @@ -1437,6 +1437,6 @@ int rxe_responder(void *arg) exit: ret = -EAGAIN; done: - rxe_drop_ref(qp); + rxe_put(qp); return ret; } diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index f0c5715ac500..67184b0281a0 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -115,7 +115,7 @@ static void rxe_dealloc_ucontext(struct ib_ucontext *ibuc) { struct rxe_ucontext *uc = to_ruc(ibuc); - rxe_drop_ref(uc); + rxe_put(uc); } static int rxe_port_immutable(struct ib_device *dev, u32 port_num, @@ -149,7 +149,7 @@ static int rxe_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata) { struct rxe_pd *pd = to_rpd(ibpd); - rxe_drop_ref(pd); + rxe_put(pd); return 0; } @@ -188,7 +188,7 @@ static int rxe_create_ah(struct ib_ah *ibah, err = copy_to_user(&uresp->ah_num, &ah->ah_num, sizeof(uresp->ah_num)); if (err) { - rxe_drop_ref(ah); + rxe_put(ah); return -EFAULT; } } else if (ah->is_user) { @@ -228,7 +228,7 @@ static int rxe_destroy_ah(struct ib_ah *ibah, u32 flags) { struct rxe_ah *ah = to_rah(ibah); - rxe_drop_ref(ah); + rxe_put(ah); return 0; } @@ -303,7 +303,7 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init, if (err) goto err1; - rxe_add_ref(pd); + rxe_get(pd); srq->pd = pd; err = rxe_srq_from_init(rxe, srq, init, udata, uresp); @@ -313,8 +313,8 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init, return 0; err2: - rxe_drop_ref(pd); - rxe_drop_ref(srq); + rxe_put(pd); + rxe_put(srq); err1: return err; } @@ -371,8 +371,8 @@ static int rxe_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata) if (srq->rq.queue) rxe_queue_cleanup(srq->rq.queue); - rxe_drop_ref(srq->pd); - rxe_drop_ref(srq); + rxe_put(srq->pd); + rxe_put(srq); return 0; } @@ -442,7 +442,7 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, return 0; qp_init: - rxe_drop_ref(qp); + rxe_put(qp); return err; } @@ -496,7 +496,7 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata) return ret; rxe_qp_destroy(qp); - rxe_drop_ref(qp); + rxe_put(qp); return 0; } @@ -809,7 +809,7 @@ static int rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata) rxe_cq_disable(cq); - rxe_drop_ref(cq); + rxe_put(cq); return 0; } @@ -902,7 +902,7 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access) if (!mr) return ERR_PTR(-ENOMEM); - rxe_add_ref(pd); + rxe_get(pd); rxe_mr_init_dma(pd, access, mr); return &mr->ibmr; @@ -926,7 +926,7 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, } - rxe_add_ref(pd); + rxe_get(pd); err = rxe_mr_init_user(pd, start, length, iova, access, mr); if (err) @@ -935,8 +935,8 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, return &mr->ibmr; err3: - rxe_drop_ref(pd); - rxe_drop_ref(mr); + rxe_put(pd); + rxe_put(mr); err2: return ERR_PTR(err); } @@ -958,7 +958,7 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type, goto err1; } - rxe_add_ref(pd); + rxe_get(pd); err = rxe_mr_init_fast(pd, max_num_sg, mr); if (err) @@ -967,8 +967,8 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum 
ib_mr_type mr_type, return &mr->ibmr; err2: - rxe_drop_ref(pd); - rxe_drop_ref(mr); + rxe_put(pd); + rxe_put(mr); err1: return ERR_PTR(err); }

From patchwork Fri Mar 4 00:08:06 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v11 10/13] RDMA/rxe: Stop lookup of partially built objects
Date: Thu, 3 Mar 2022 18:08:06 -0600
Message-Id: <20220304000808.225811-11-rpearsonhpe@gmail.com>

Currently the rdma_rxe driver has a security weakness: it assigns indices to objects that are only partially initialized, allowing external actors to gain access to them by sending packets which refer to their index (e.g. qpn, rkey, etc.), causing unpredictable results. This patch adds two new APIs, rxe_show(obj) and rxe_hide(obj), which enable or disable looking up pool objects from their indices using rxe_pool_get_index(). By default lookup of an object is disabled. These APIs are used for the object types which have indices: AH, SRQ, QP, MR, and MW. Lookup is enabled in the create verbs only after the object is fully initialized, and disabled as early as possible in the destroy verbs.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 2 ++ drivers/infiniband/sw/rxe/rxe_mw.c | 4 +++ drivers/infiniband/sw/rxe/rxe_pool.c | 40 +++++++++++++++++++++++++-- drivers/infiniband/sw/rxe/rxe_pool.h | 5 ++++ drivers/infiniband/sw/rxe/rxe_verbs.c | 12 ++++++++ 5 files changed, 60 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 60a31b718774..e4ad2cc12584 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -689,6 +689,8 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) return -EINVAL; } + rxe_hide(mr); + mr->state = RXE_MR_STATE_INVALID; rxe_put(mr_pd(mr)); rxe_put(mr); diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c index c86b2efd58f2..4edbed1410e9 100644 --- a/drivers/infiniband/sw/rxe/rxe_mw.c +++ b/drivers/infiniband/sw/rxe/rxe_mw.c @@ -25,6 +25,8 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata) RXE_MW_STATE_FREE : RXE_MW_STATE_VALID; spin_lock_init(&mw->lock); + rxe_show(mw); + return 0; } @@ -56,6 +58,8 @@ int rxe_dealloc_mw(struct ib_mw *ibmw) struct rxe_mw *mw = to_rmw(ibmw); struct rxe_pd *pd = to_rpd(ibmw->pd); + rxe_hide(mw); + spin_lock_bh(&mw->lock); rxe_do_dealloc_mw(mw); spin_unlock_bh(&mw->lock); diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index b0dfeb08a470..c0b70687a83e 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -159,7 +159,7 @@ void *rxe_alloc(struct rxe_pool *pool) elem->obj = obj; kref_init(&elem->ref_cnt); - err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit, + err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit, &pool->next, GFP_KERNEL); if (err) goto err_free; @@ -187,7 +187,7 @@ int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem) elem->obj = (u8 *)elem - pool->elem_offset; kref_init(&elem->ref_cnt); - err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit, + err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL,
pool->limit, &pool->next, GFP_KERNEL); if (err) goto err_cnt; @@ -221,8 +221,12 @@ static void rxe_elem_release(struct kref *kref) { struct rxe_pool_elem *elem = container_of(kref, typeof(*elem), ref_cnt); struct rxe_pool *pool = elem->pool; + struct xarray *xa = &pool->xa; + unsigned long flags; - xa_erase(&pool->xa, elem->index); + xa_lock_irqsave(xa, flags); + __xa_erase(&pool->xa, elem->index); + xa_unlock_irqrestore(xa, flags); if (pool->cleanup) pool->cleanup(elem); @@ -242,3 +246,33 @@ int __rxe_put(struct rxe_pool_elem *elem) { return kref_put(&elem->ref_cnt, rxe_elem_release); } + +int __rxe_show(struct rxe_pool_elem *elem) +{ + struct xarray *xa = &elem->pool->xa; + unsigned long flags; + void *ret; + + xa_lock_irqsave(xa, flags); + ret = __xa_store(&elem->pool->xa, elem->index, elem, GFP_KERNEL); + xa_unlock_irqrestore(xa, flags); + if (IS_ERR(ret)) + return PTR_ERR(ret); + else + return 0; +} + +int __rxe_hide(struct rxe_pool_elem *elem) +{ + struct xarray *xa = &elem->pool->xa; + unsigned long flags; + void *ret; + + xa_lock_irqsave(xa, flags); + ret = __xa_store(&elem->pool->xa, elem->index, NULL, GFP_KERNEL); + xa_unlock_irqrestore(xa, flags); + if (IS_ERR(ret)) + return PTR_ERR(ret); + else + return 0; +} diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index 24bcc786c1b3..c48d8f6f95f3 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -79,4 +79,9 @@ int __rxe_put(struct rxe_pool_elem *elem); #define rxe_read(obj) kref_read(&(obj)->elem.ref_cnt) +int __rxe_show(struct rxe_pool_elem *elem); +#define rxe_show(obj) __rxe_show(&(obj)->elem) +int __rxe_hide(struct rxe_pool_elem *elem); +#define rxe_hide(obj) __rxe_hide(&(obj)->elem) + #endif /* RXE_POOL_H */ diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 67184b0281a0..e010e8860492 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -197,6 +197,8 @@ static int rxe_create_ah(struct ib_ah *ibah, } rxe_init_av(init_attr->ah_attr, &ah->av); + rxe_show(ah); + return 0; } @@ -228,6 +230,7 @@ static int rxe_destroy_ah(struct ib_ah *ibah, u32 flags) { struct rxe_ah *ah = to_rah(ibah); + rxe_hide(ah); rxe_put(ah); return 0; } @@ -310,6 +313,7 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init, if (err) goto err2; + rxe_show(srq); return 0; err2: @@ -368,6 +372,7 @@ static int rxe_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata) { struct rxe_srq *srq = to_rsrq(ibsrq); + rxe_hide(srq); if (srq->rq.queue) rxe_queue_cleanup(srq->rq.queue); @@ -439,6 +444,7 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, if (err) goto qp_init; + rxe_show(qp); return 0; qp_init: @@ -491,6 +497,7 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata) struct rxe_qp *qp = to_rqp(ibqp); int ret; + rxe_hide(qp); ret = rxe_qp_chk_destroy(qp); if (ret) return ret; @@ -904,6 +911,7 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access) rxe_get(pd); rxe_mr_init_dma(pd, access, mr); + rxe_show(mr); return &mr->ibmr; } @@ -932,6 +940,8 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, if (err) goto err3; + rxe_show(mr); + return &mr->ibmr; err3: @@ -964,6 +974,8 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type, if (err) goto err2; + rxe_show(mr); + return &mr->ibmr; err2:
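The net effect of the patch above: xa_alloc_cyclic() with a NULL entry reserves an index that lookups cannot yet resolve, and a later store publishes or hides the element. Reduced to a sketch with invented demo_* names (the real driver's locking and error handling are omitted; in an allocating xarray, storing NULL keeps the index reserved while xa_load() returns NULL):

#include <linux/xarray.h>

static DEFINE_XARRAY_ALLOC(demo_xa);
static u32 demo_next;

static int demo_create(void *obj, u32 *index)
{
	int err;

	/* reserve an index but store NULL: lookups see nothing yet */
	err = xa_alloc_cyclic(&demo_xa, index, NULL, XA_LIMIT(1, 127),
			      &demo_next, GFP_KERNEL);
	if (err < 0)
		return err;

	/* ... fully initialize obj ... */

	xa_store(&demo_xa, *index, obj, GFP_KERNEL);	/* "show" */
	return 0;
}

static void demo_destroy(u32 index)
{
	/* "hide" first so no new lookup can find the object */
	xa_store(&demo_xa, index, NULL, GFP_KERNEL);
	/* ... drop remaining references; xa_erase() frees the index ... */
	xa_erase(&demo_xa, index);
}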
From patchwork Fri Mar 4 00:08:07 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v11 11/13] RDMA/rxe: Add wait_for_completion to pool objects
Date: Thu, 3 Mar 2022 18:08:07 -0600
Message-Id: <20220304000808.225811-12-rpearsonhpe@gmail.com>

With reference counting, object deletion may have to wait for something else to happen before the object actually gets deleted; the destroy verbs can then return to rdma-core with the object still holding references. Adding a wait_for_completion in this path prevents that.
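The mechanism in the diff below, reduced to its core: the kref release handler signals a completion, and the destroy path blocks on it with a timeout. A minimal sketch with invented demo_* names; the actual patch passes its timeout constant directly in jiffies and does the cleanup and free inside __rxe_wait():

#include <linux/completion.h>
#include <linux/jiffies.h>
#include <linux/kref.h>
#include <linux/slab.h>

struct demo_elem {
	struct kref ref;
	struct completion done;	/* init_completion() at create time */
};

/* kref release callback: runs when the last reference is dropped */
static void demo_release(struct kref *kref)
{
	struct demo_elem *elem = container_of(kref, struct demo_elem, ref);

	complete(&elem->done);
}

/* destroy path: drop our reference, then wait for the rest to drain */
static int demo_destroy(struct demo_elem *elem)
{
	kref_put(&elem->ref, demo_release);

	/* a 0 return means we timed out, i.e. a reference leaked */
	if (!wait_for_completion_timeout(&elem->done,
					 msecs_to_jiffies(200)))
		return -EINVAL;

	kfree(elem);	/* nothing can still be using it */
	return 0;
}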
+				pool->name, elem->index);
+			if (++pool->timeouts >= RXE_POOL_MAX_TIMEOUTS)
+				timeout = 0;
+
+			err = -EINVAL;
+		}
+	}
+
 	if (pool->cleanup)
 		pool->cleanup(elem);
 
@@ -235,6 +262,8 @@ static void rxe_elem_release(struct kref *kref)
 		kfree(elem->obj);
 
 	atomic_dec(&pool->num_elem);
+
+	return err;
 }
 
 int __rxe_get(struct rxe_pool_elem *elem)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index c48d8f6f95f3..1863fa165450 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -28,6 +28,7 @@ struct rxe_pool_elem {
 	void *obj;
 	struct kref ref_cnt;
 	struct list_head list;
+	struct completion complete;
 	u32 index;
 };
 
@@ -37,6 +38,7 @@ struct rxe_pool {
 	void (*cleanup)(struct rxe_pool_elem *elem);
 	enum rxe_pool_flags flags;
 	enum rxe_elem_type type;
+	unsigned int timeouts;
 
 	unsigned int max_elem;
 	atomic_t num_elem;
@@ -77,6 +79,9 @@ int __rxe_put(struct rxe_pool_elem *elem);
 
 #define rxe_put(obj) __rxe_put(&(obj)->elem)
 
+int __rxe_wait(struct rxe_pool_elem *elem);
+#define rxe_wait(obj) __rxe_wait(&(obj)->elem)
+
 #define rxe_read(obj) kref_read(&(obj)->elem.ref_cnt)
 
 int __rxe_show(struct rxe_pool_elem *elem);

diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index e010e8860492..0529ad8e819b 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -115,7 +115,7 @@ static void rxe_dealloc_ucontext(struct ib_ucontext *ibuc)
 {
 	struct rxe_ucontext *uc = to_ruc(ibuc);
 
-	rxe_put(uc);
+	rxe_wait(uc);
 }
 
 static int rxe_port_immutable(struct ib_device *dev, u32 port_num,
@@ -149,7 +149,7 @@ static int rxe_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
 {
 	struct rxe_pd *pd = to_rpd(ibpd);
 
-	rxe_put(pd);
+	rxe_wait(pd);
 	return 0;
 }
 
@@ -188,7 +188,7 @@ static int rxe_create_ah(struct ib_ah *ibah,
 		err = copy_to_user(&uresp->ah_num, &ah->ah_num,
 					 sizeof(uresp->ah_num));
 		if (err) {
-			rxe_put(ah);
+			rxe_wait(ah);
 			return -EFAULT;
 		}
 	} else if (ah->is_user) {
@@ -231,7 +231,7 @@ static int rxe_destroy_ah(struct ib_ah *ibah, u32 flags)
 	struct rxe_ah *ah = to_rah(ibah);
 
 	rxe_hide(ah);
-	rxe_put(ah);
+	rxe_wait(ah);
 	return 0;
 }
 
@@ -318,7 +318,7 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init,
 
 err2:
 	rxe_put(pd);
-	rxe_put(srq);
+	rxe_wait(srq);
 err1:
 	return err;
 }
@@ -377,7 +377,7 @@ static int rxe_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata)
 	rxe_queue_cleanup(srq->rq.queue);
 
 	rxe_put(srq->pd);
-	rxe_put(srq);
+	rxe_wait(srq);
 	return 0;
 }
 
@@ -448,7 +448,7 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init,
 	return 0;
 
 qp_init:
-	rxe_put(qp);
+	rxe_wait(qp);
 	return err;
 }
 
@@ -503,7 +503,7 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 		return ret;
 
 	rxe_qp_destroy(qp);
-	rxe_put(qp);
+	rxe_wait(qp);
 	return 0;
 }
 
@@ -816,7 +816,7 @@ static int rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
 
 	rxe_cq_disable(cq);
 
-	rxe_put(cq);
+	rxe_wait(cq);
 	return 0;
 }
 
@@ -946,7 +946,7 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 
 err3:
 	rxe_put(pd);
-	rxe_put(mr);
+	rxe_wait(mr);
 err2:
 	return ERR_PTR(err);
 }
@@ -980,7 +980,7 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 
 err2:
 	rxe_put(pd);
-	rxe_put(mr);
+	rxe_wait(mr);
 err1:
 	return ERR_PTR(err);
 }
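The scheme this patch adopts is the generic kref-plus-completion
teardown pattern: the kref release callback fires when the last
reference drops and signals a completion, and the destroy path blocks
on that completion before freeing. A minimal self-contained sketch of
the pattern follows; the my_obj names are illustrative only, and where
the patch passes RXE_POOL_TIMEOUT directly in jiffies the sketch
converts from milliseconds, which is a choice of the sketch, not of
the patch.

#include <linux/completion.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/slab.h>

struct my_obj {
	struct kref ref_cnt;
	struct completion complete;
};

static void my_obj_init(struct my_obj *obj)
{
	kref_init(&obj->ref_cnt);		/* count starts at 1 */
	init_completion(&obj->complete);
}

static void my_obj_release(struct kref *kref)
{
	struct my_obj *obj = container_of(kref, struct my_obj, ref_cnt);

	/* the last reference just went away: wake the waiting destroyer */
	complete(&obj->complete);
}

static int my_obj_destroy(struct my_obj *obj)
{
	/* drop our own reference ... */
	kref_put(&obj->ref_cnt, my_obj_release);

	/* ... then wait, bounded, for all other holders to drop theirs */
	if (!wait_for_completion_timeout(&obj->complete,
					 msecs_to_jiffies(200)))
		return -EINVAL;		/* still referenced somewhere */

	kfree(obj);			/* no references remain */
	return 0;
}

As in the patch, a timeout deliberately leaks the object rather than
freeing memory another thread may still be using.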
From patchwork Fri Mar 4 00:08:08 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12768332

From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v11 12/13] RDMA/rxe: Convert read side locking to rcu
Date: Thu, 3 Mar 2022 18:08:08 -0600
Message-Id: <20220304000808.225811-13-rpearsonhpe@gmail.com>
In-Reply-To: <20220304000808.225811-1-rpearsonhpe@gmail.com>
References: <20220304000808.225811-1-rpearsonhpe@gmail.com>

Use rcu_read_lock() to protect read-side operations in rxe_pool.c.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 4fb6c7dd32ad..ec464b03d120 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -207,16 +207,15 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 {
 	struct rxe_pool_elem *elem;
 	struct xarray *xa = &pool->xa;
-	unsigned long flags;
 	void *obj;
 
-	xa_lock_irqsave(xa, flags);
+	rcu_read_lock();
 	elem = xa_load(xa, index);
 	if (elem && kref_get_unless_zero(&elem->ref_cnt))
 		obj = elem->obj;
 	else
 		obj = NULL;
-	xa_unlock_irqrestore(xa, flags);
+	rcu_read_unlock();
 
 	return obj;
 }
@@ -259,7 +258,7 @@ int __rxe_wait(struct rxe_pool_elem *elem)
 		pool->cleanup(elem);
 
 	if (pool->flags & RXE_POOL_ALLOC)
-		kfree(elem->obj);
+		kfree_rcu(elem->obj);
 
 	atomic_dec(&pool->num_elem);
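This conversion is the standard RCU read-side lookup idiom: readers
traverse under rcu_read_lock() and may only take a reference with
kref_get_unless_zero(), so an object whose count has already reached
zero cannot come back to life, and the switch from kfree() to
kfree_rcu() keeps the memory valid until every reader currently inside
a critical section has left it. A minimal sketch under those
assumptions; my_obj, my_table, and my_lookup are illustrative names,
not part of the series.

#include <linux/kref.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/xarray.h>

struct my_obj {
	struct kref ref_cnt;
	struct rcu_head rcu;	/* lets kfree_rcu() defer the free */
};

static DEFINE_XARRAY_ALLOC(my_table);

/* read side: no spinlock and no irq disabling, just an RCU section */
static struct my_obj *my_lookup(unsigned long index)
{
	struct my_obj *obj;

	rcu_read_lock();
	obj = xa_load(&my_table, index);
	/* an object already on its way out (count == 0) stays dead */
	if (obj && !kref_get_unless_zero(&obj->ref_cnt))
		obj = NULL;
	rcu_read_unlock();

	return obj;	/* non-NULL means the caller now owns a reference */
}

/* free side: defer the kfree() until current RCU readers are done */
static void my_free(struct my_obj *obj)
{
	kfree_rcu(obj, rcu);
}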
From patchwork Fri Mar 4 00:08:09 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12768333

From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v11 13/13] RDMA/rxe: Cleanup rxe_pool.c
Date: Thu, 3 Mar 2022 18:08:09 -0600
Message-Id: <20220304000808.225811-14-rpearsonhpe@gmail.com>
In-Reply-To: <20220304000808.225811-1-rpearsonhpe@gmail.com>
References: <20220304000808.225811-1-rpearsonhpe@gmail.com>

Minor cleanup of rxe_pool.c. Add kernel-doc comment headers for the
subroutines. Increase the alignment of pool elements. Convert some
printk()s to WARN_ON()s.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_pool.c  | 129 +++++++++++++++++++++-----
 drivers/infiniband/sw/rxe/rxe_verbs.c |  27 ++----
 2 files changed, 115 insertions(+), 41 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index ec464b03d120..d5cd0e71e9a0 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -1,14 +1,14 @@
 // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * Copyright (c) 2022 Hewlett Packard Enterprise, Inc. All rights reserved.
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
  */
 
 #include "rxe.h"
 
-#define RXE_POOL_TIMEOUT	(200)
-#define RXE_POOL_MAX_TIMEOUTS	(3)
-#define RXE_POOL_ALIGN		(16)
+#define RXE_POOL_TIMEOUT	(200) /* jiffies */
+#define RXE_POOL_ALIGN		(64)
 
 static const struct rxe_type_info {
 	const char *name;
@@ -90,6 +90,14 @@ static const struct rxe_type_info {
 	},
 };
 
+/**
+ * rxe_pool_init - initialize a rxe object pool
+ * @rxe: rxe device pool belongs to
+ * @pool: object pool
+ * @type: pool type
+ *
+ * Called from rxe_init()
+ */
 void rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool,
 		   enum rxe_elem_type type)
 {
@@ -113,6 +121,12 @@ void rxe_pool_init(struct rxe_dev *rxe, struct rxe_pool *pool,
 	pool->limit.max = info->max_index;
 }
 
+/**
+ * rxe_pool_cleanup - free any remaining pool resources
+ * @pool: object pool
+ *
+ * Called from rxe_dealloc()
+ */
 void rxe_pool_cleanup(struct rxe_pool *pool)
 {
 	struct rxe_pool_elem *elem;
@@ -136,24 +150,37 @@ void rxe_pool_cleanup(struct rxe_pool *pool)
 
 	if (WARN_ON(elem_count || obj_count))
 		pr_debug("Freed %d indices and %d objects from pool %s\n",
-			elem_count, obj_count, pool->name);
+			 elem_count, obj_count, pool->name);
 }
 
+/**
+ * rxe_alloc - allocate a new pool object
+ * @pool: object pool
+ *
+ * Context: in task.
+ * Returns: object on success else an ERR_PTR
+ */
 void *rxe_alloc(struct rxe_pool *pool)
 {
+	struct xarray *xa = &pool->xa;
 	struct rxe_pool_elem *elem;
 	void *obj;
-	int err;
+	int err = -EINVAL;
 
 	if (WARN_ON(!(pool->flags & RXE_POOL_ALLOC)))
-		return NULL;
+		goto err_out;
+
+	if (WARN_ON(!in_task()))
+		goto err_dec;
 
 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
-		goto err_cnt;
+		goto err_dec;
 
 	obj = kzalloc(pool->elem_size, GFP_KERNEL);
-	if (!obj)
-		goto err_cnt;
+	if (!obj) {
+		err = -ENOMEM;
+		goto err_dec;
+	}
 
 	elem = (struct rxe_pool_elem *)((u8 *)obj + pool->elem_offset);
 
@@ -162,7 +189,7 @@ void *rxe_alloc(struct rxe_pool *pool)
 	kref_init(&elem->ref_cnt);
 	init_completion(&elem->complete);
 
-	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
+	err = xa_alloc_cyclic(xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
 	if (err)
 		goto err_free;
@@ -171,38 +198,59 @@ void *rxe_alloc(struct rxe_pool *pool)
 
 err_free:
 	kfree(obj);
-err_cnt:
+err_dec:
 	atomic_dec(&pool->num_elem);
-	return NULL;
+err_out:
+	return ERR_PTR(err);
 }
 
+/**
+ * __rxe_add_to_pool - add rdma-core allocated object to rxe object pool
+ * @pool: object pool
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Context: in task.
+ * Returns: 0 on success else an error
+ */
 int __rxe_add_to_pool(struct rxe_pool *pool, struct rxe_pool_elem *elem)
 {
-	int err;
+	struct xarray *xa = &pool->xa;
+	int err = -EINVAL;
 
 	if (WARN_ON(pool->flags & RXE_POOL_ALLOC))
-		return -EINVAL;
+		goto err_out;
+
+	if (WARN_ON(!in_task()))
+		goto err_out;
 
 	if (atomic_inc_return(&pool->num_elem) > pool->max_elem)
-		goto err_cnt;
+		goto err_dec;
 
 	elem->pool = pool;
 	elem->obj = (u8 *)elem - pool->elem_offset;
 	kref_init(&elem->ref_cnt);
 	init_completion(&elem->complete);
 
-	err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
+	err = xa_alloc_cyclic(xa, &elem->index, NULL, pool->limit,
 			      &pool->next, GFP_KERNEL);
 	if (err)
-		goto err_cnt;
+		goto err_dec;
 
 	return 0;
 
-err_cnt:
+err_dec:
 	atomic_dec(&pool->num_elem);
-	return -EINVAL;
+err_out:
+	return err;
 }
 
+/**
+ * rxe_pool_get_index - find object in pool with given index
+ * @pool: object pool
+ * @index: index
+ *
+ * Returns: object on success else NULL
+ */
 void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 {
	struct rxe_pool_elem *elem;
@@ -220,6 +268,12 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 	return obj;
 }
 
+/**
+ * rxe_elem_release - remove object index and complete
+ * @kref: kref embedded in pool element
+ *
+ * Context: ref count of pool object has reached zero.
+ */
 static void rxe_elem_release(struct kref *kref)
 {
 	struct rxe_pool_elem *elem = container_of(kref, typeof(*elem), ref_cnt);
@@ -234,6 +288,12 @@ static void rxe_elem_release(struct kref *kref)
 	complete(&elem->complete);
 }
 
+/**
+ * __rxe_wait - put a ref on object and wait for completion
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Returns: 0 if object did not timeout else an error
+ */
 int __rxe_wait(struct rxe_pool_elem *elem)
 {
 	struct rxe_pool *pool = elem->pool;
@@ -244,12 +304,9 @@ int __rxe_wait(struct rxe_pool_elem *elem)
 
 	if (timeout) {
 		ret = wait_for_completion_timeout(&elem->complete, timeout);
-		if (!ret) {
-			pr_warn("Timed out waiting for %s#%d to complete\n",
+		if (WARN_ON(!ret)) {
+			pr_debug("Timed out waiting for %s#%d to complete\n",
 				pool->name, elem->index);
-			if (++pool->timeouts >= RXE_POOL_MAX_TIMEOUTS)
-				timeout = 0;
-
 			err = -EINVAL;
 		}
 	}
@@ -265,16 +322,34 @@ int __rxe_wait(struct rxe_pool_elem *elem)
 	return err;
 }
 
+/**
+ * __rxe_add_ref - takes a ref on the object unless ref count is zero
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Returns: 1 if reference is added else 0
+ */
 int __rxe_get(struct rxe_pool_elem *elem)
 {
 	return kref_get_unless_zero(&elem->ref_cnt);
 }
 
+/**
+ * __rxe_drop_ref - puts a ref on the object
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Returns: 1 if ref count reaches zero and release called else 0
+ */
 int __rxe_put(struct rxe_pool_elem *elem)
 {
 	return kref_put(&elem->ref_cnt, rxe_elem_release);
 }
 
+/**
+ * __rxe_show - enable looking up object from index
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Returns 0 on success else an error
+ */
 int __rxe_show(struct rxe_pool_elem *elem)
 {
 	struct xarray *xa = &elem->pool->xa;
@@ -290,6 +365,12 @@ int __rxe_show(struct rxe_pool_elem *elem)
 	return 0;
 }
 
+/**
+ * __rxe_hide - disable looking up object from index
+ * @elem: rxe_pool_elem embedded in object
+ *
+ * Returns 0 on success else an error
+ */
 int __rxe_hide(struct rxe_pool_elem *elem)
 {
 	struct xarray *xa = &elem->pool->xa;

diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 0529ad8e819b..73f549efe632 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -906,8 +906,8 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access)
 	struct rxe_mr *mr;
 
 	mr = rxe_alloc(&rxe->mr_pool);
-	if (!mr)
-		return ERR_PTR(-ENOMEM);
+	if (IS_ERR(mr))
+		return (void *)mr;
 
 	rxe_get(pd);
 	rxe_mr_init_dma(pd, access, mr);
@@ -928,26 +928,22 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 	struct rxe_mr *mr;
 
 	mr = rxe_alloc(&rxe->mr_pool);
-	if (!mr) {
-		err = -ENOMEM;
-		goto err2;
-	}
-
+	if (IS_ERR(mr))
+		return (void *)mr;
 
 	rxe_get(pd);
 
 	err = rxe_mr_init_user(pd, start, length, iova, access, mr);
 	if (err)
-		goto err3;
+		goto err;
 
 	rxe_show(mr);
 
 	return &mr->ibmr;
 
-err3:
+err:
 	rxe_put(pd);
 	rxe_wait(mr);
-err2:
 	return ERR_PTR(err);
 }
 
@@ -963,25 +959,22 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 		return ERR_PTR(-EINVAL);
 
 	mr = rxe_alloc(&rxe->mr_pool);
-	if (!mr) {
-		err = -ENOMEM;
-		goto err1;
-	}
+	if (IS_ERR(mr))
+		return (void *)mr;
 
 	rxe_get(pd);
 
 	err = rxe_mr_init_fast(pd, max_num_sg, mr);
 	if (err)
-		goto err2;
+		goto err;
 
 	rxe_show(mr);
 
 	return &mr->ibmr;
 
-err2:
+err:
 	rxe_put(pd);
 	rxe_wait(mr);
-err1:
 	return ERR_PTR(err);
 }
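This last patch also switches rxe_alloc() to the kernel's ERR_PTR
convention, so callers such as rxe_get_dma_mr() learn which errno
caused the failure instead of seeing a bare NULL. A minimal sketch of
that convention; my_alloc and my_user are illustrative names, not part
of the series.

#include <linux/err.h>
#include <linux/slab.h>

struct my_obj {
	int payload;
};

static struct my_obj *my_alloc(void)
{
	struct my_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	/* encode the errno in the pointer instead of returning NULL */
	if (!obj)
		return ERR_PTR(-ENOMEM);

	return obj;
}

static int my_user(void)
{
	struct my_obj *obj = my_alloc();

	if (IS_ERR(obj))
		return PTR_ERR(obj);	/* recover the negative errno */

	/* ... use obj ... */
	kfree(obj);
	return 0;
}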