From patchwork Tue Mar 13 22:33:18 2018
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 10281077
X-Patchwork-Delegate: jgg@ziepe.ca
From: Jason Gunthorpe
To: linux-rdma@vger.kernel.org
Cc: Yuval Shaia, Moni Shoua, Jason Gunthorpe
Subject: [PATCH 2/2] RDMA/rxe: Use structs to describe the uABI instead of opencoding
Date: Tue, 13 Mar 2018 16:33:18 -0600
Message-Id: <20180313223318.11151-3-jgg@ziepe.ca>
X-Mailer: git-send-email 2.16.1
In-Reply-To: <20180313223318.11151-1-jgg@ziepe.ca>
References: <20180313223318.11151-1-jgg@ziepe.ca>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Jason Gunthorpe

Open-coding pointer math is not acceptable for describing the uABI in
RDMA. Provide structs for all the cases. The udata is cast to the
struct as close to the verbs entry point as possible for maximum
clarity. Function signatures and so forth are revised to allow for
this.

Signed-off-by: Jason Gunthorpe
---
 drivers/infiniband/sw/rxe/rxe_cq.c    | 13 +++++-----
 drivers/infiniband/sw/rxe/rxe_loc.h   | 13 ++++++----
 drivers/infiniband/sw/rxe/rxe_qp.c    | 26 ++++++++++---------
 drivers/infiniband/sw/rxe/rxe_queue.c | 24 ++++--------------
 drivers/infiniband/sw/rxe/rxe_queue.h |  5 ++--
 drivers/infiniband/sw/rxe/rxe_srq.c   | 44 +++++++++++---------------------
 drivers/infiniband/sw/rxe/rxe_verbs.c | 48 +++++++++++++++++++++++++++++++----
 include/uapi/rdma/rdma_user_rxe.h     | 22 ++++++++++++++++
 8 files changed, 116 insertions(+), 79 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
index c9593e47275363..2ee4b08b00ea4c 100644
--- a/drivers/infiniband/sw/rxe/rxe_cq.c
+++ b/drivers/infiniband/sw/rxe/rxe_cq.c
@@ -83,7 +83,7 @@ static void rxe_send_complete(unsigned long data)
 
 int rxe_cq_from_init(struct rxe_dev *rxe, struct rxe_cq *cq, int cqe,
 		     int comp_vector, struct ib_ucontext *context,
-		     struct ib_udata *udata)
+		     struct rxe_create_cq_resp __user *uresp)
 {
 	int err;
 
@@ -94,15 +94,15 @@ int rxe_cq_from_init(struct rxe_dev *rxe, struct rxe_cq *cq, int cqe,
 		return -ENOMEM;
 	}
 
-	err = do_mmap_info(rxe, udata, false, context, cq->queue->buf,
-			   cq->queue->buf_size, &cq->queue->ip);
+	err = do_mmap_info(rxe, uresp ? &uresp->mi : NULL, context,
+			   cq->queue->buf, cq->queue->buf_size, &cq->queue->ip);
 	if (err) {
 		kvfree(cq->queue->buf);
 		kfree(cq->queue);
 		return err;
 	}
 
-	if (udata)
+	if (uresp)
 		cq->is_user = 1;
 
 	cq->is_dying = false;
@@ -114,14 +114,15 @@ int rxe_cq_from_init(struct rxe_dev *rxe, struct rxe_cq *cq, int cqe,
 	return 0;
 }
 
-int rxe_cq_resize_queue(struct rxe_cq *cq, int cqe, struct ib_udata *udata)
+int rxe_cq_resize_queue(struct rxe_cq *cq, int cqe,
+			struct rxe_resize_cq_resp __user *uresp)
 {
 	int err;
 
 	err = rxe_queue_resize(cq->queue, (unsigned int *)&cqe,
			       sizeof(struct rxe_cqe),
			       cq->queue->ip ? cq->queue->ip->context : NULL,
-			       udata, NULL, &cq->cq_lock);
+			       uresp ? &uresp->mi : NULL, NULL, &cq->cq_lock);
 
 	if (!err)
 		cq->ibcq.cqe = cqe;
 
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 31070a696f3613..b71023c1c58bc7 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -56,9 +56,10 @@ int rxe_cq_chk_attr(struct rxe_dev *rxe, struct rxe_cq *cq,
 
 int rxe_cq_from_init(struct rxe_dev *rxe, struct rxe_cq *cq, int cqe,
 		     int comp_vector, struct ib_ucontext *context,
-		     struct ib_udata *udata);
+		     struct rxe_create_cq_resp __user *uresp);
 
-int rxe_cq_resize_queue(struct rxe_cq *cq, int new_cqe, struct ib_udata *udata);
+int rxe_cq_resize_queue(struct rxe_cq *cq, int new_cqe,
+			struct rxe_resize_cq_resp __user *uresp);
 
 int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited);
 
@@ -158,7 +159,8 @@ int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid);
 int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init);
 
 int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
-		     struct ib_qp_init_attr *init, struct ib_udata *udata,
+		     struct ib_qp_init_attr *init,
+		     struct rxe_create_qp_resp __user *uresp,
 		     struct ib_pd *ibpd);
 
 int rxe_qp_to_init(struct rxe_qp *qp, struct ib_qp_init_attr *init);
 
@@ -226,11 +228,12 @@ int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 
 int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_init_attr *init,
-		      struct ib_ucontext *context, struct ib_udata *udata);
+		      struct ib_ucontext *context,
+		      struct rxe_create_srq_resp __user *uresp);
 
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
-		      struct ib_udata *udata);
+		      struct rxe_modify_srq_cmd *ucmd);
 
 void rxe_release(struct kref *kref);
 
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 98a7a19146a892..b9f7aa1114b220 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -216,7 +216,8 @@ static void rxe_qp_init_misc(struct rxe_dev *rxe, struct rxe_qp *qp,
 
 static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
 			   struct ib_qp_init_attr *init,
-			   struct ib_ucontext *context, struct ib_udata *udata)
+			   struct ib_ucontext *context,
+			   struct rxe_create_qp_resp __user *uresp)
 {
 	int err;
 	int wqe_size;
@@ -241,9 +242,9 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
 	if (!qp->sq.queue)
 		return -ENOMEM;
 
-	err = do_mmap_info(rxe, udata, true,
-			   context, qp->sq.queue->buf,
-			   qp->sq.queue->buf_size, &qp->sq.queue->ip);
+	err = do_mmap_info(rxe, uresp ? &uresp->sq_mi : NULL, context,
+			   qp->sq.queue->buf, qp->sq.queue->buf_size,
+			   &qp->sq.queue->ip);
 
 	if (err) {
 		kvfree(qp->sq.queue->buf);
@@ -274,7 +275,8 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
 
 static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
 			    struct ib_qp_init_attr *init,
-			    struct ib_ucontext *context, struct ib_udata *udata)
+			    struct ib_ucontext *context,
+			    struct rxe_create_qp_resp __user *uresp)
 {
 	int err;
 	int wqe_size;
@@ -294,9 +296,8 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
 	if (!qp->rq.queue)
 		return -ENOMEM;
 
-	err = do_mmap_info(rxe, udata, false, context,
-			   qp->rq.queue->buf,
-			   qp->rq.queue->buf_size,
+	err = do_mmap_info(rxe, uresp ? &uresp->rq_mi : NULL, context,
+			   qp->rq.queue->buf, qp->rq.queue->buf_size,
 			   &qp->rq.queue->ip);
 	if (err) {
 		kvfree(qp->rq.queue->buf);
@@ -322,14 +323,15 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
 
 /* called by the create qp verb */
 int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
-		     struct ib_qp_init_attr *init, struct ib_udata *udata,
+		     struct ib_qp_init_attr *init,
+		     struct rxe_create_qp_resp __user *uresp,
 		     struct ib_pd *ibpd)
 {
 	int err;
 	struct rxe_cq *rcq = to_rcq(init->recv_cq);
 	struct rxe_cq *scq = to_rcq(init->send_cq);
 	struct rxe_srq *srq = init->srq ? to_rsrq(init->srq) : NULL;
-	struct ib_ucontext *context = udata ? ibpd->uobject->context : NULL;
+	struct ib_ucontext *context = ibpd->uobject ? ibpd->uobject->context : NULL;
 
 	rxe_add_ref(pd);
 	rxe_add_ref(rcq);
@@ -344,11 +346,11 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
 
 	rxe_qp_init_misc(rxe, qp, init);
 
-	err = rxe_qp_init_req(rxe, qp, init, context, udata);
+	err = rxe_qp_init_req(rxe, qp, init, context, uresp);
 	if (err)
 		goto err1;
 
-	err = rxe_qp_init_resp(rxe, qp, init, context, udata);
+	err = rxe_qp_init_resp(rxe, qp, init, context, uresp);
 	if (err)
 		goto err2;
 
diff --git a/drivers/infiniband/sw/rxe/rxe_queue.c b/drivers/infiniband/sw/rxe/rxe_queue.c
index d14bf496d62d3a..f84ab4469261f2 100644
--- a/drivers/infiniband/sw/rxe/rxe_queue.c
+++ b/drivers/infiniband/sw/rxe/rxe_queue.c
@@ -37,35 +37,21 @@
 #include "rxe_queue.h"
 
 int do_mmap_info(struct rxe_dev *rxe,
-		 struct ib_udata *udata,
-		 bool is_req,
+		 struct mminfo __user *outbuf,
 		 struct ib_ucontext *context,
 		 struct rxe_queue_buf *buf,
 		 size_t buf_size,
 		 struct rxe_mmap_info **ip_p)
 {
 	int err;
-	u32 len, offset;
 	struct rxe_mmap_info *ip = NULL;
 
-	if (udata) {
-		if (is_req) {
-			len = udata->outlen - sizeof(struct mminfo);
-			offset = sizeof(struct mminfo);
-		} else {
-			len = udata->outlen;
-			offset = 0;
-		}
-
-		if (len < sizeof(ip->info))
-			goto err1;
-
+	if (outbuf) {
 		ip = rxe_create_mmap_info(rxe, buf_size, context, buf);
 		if (!ip)
 			goto err1;
 
-		err = copy_to_user(udata->outbuf + offset, &ip->info,
-				   sizeof(ip->info));
+		err = copy_to_user(outbuf, &ip->info, sizeof(ip->info));
 		if (err)
 			goto err2;
 
@@ -171,7 +157,7 @@ int rxe_queue_resize(struct rxe_queue *q,
 		     unsigned int *num_elem_p,
 		     unsigned int elem_size,
 		     struct ib_ucontext *context,
-		     struct ib_udata *udata,
+		     struct mminfo __user *outbuf,
 		     spinlock_t *producer_lock,
 		     spinlock_t *consumer_lock)
 {
@@ -184,7 +170,7 @@ int rxe_queue_resize(struct rxe_queue *q,
 	if (!new_q)
 		return -ENOMEM;
 
-	err = do_mmap_info(new_q->rxe, udata, false, context, new_q->buf,
+	err = do_mmap_info(new_q->rxe, outbuf, context, new_q->buf,
 			   new_q->buf_size, &new_q->ip);
 	if (err) {
 		vfree(new_q->buf);
diff --git a/drivers/infiniband/sw/rxe/rxe_queue.h b/drivers/infiniband/sw/rxe/rxe_queue.h
index 8c8641c87817f3..79ba4b320054b4 100644
--- a/drivers/infiniband/sw/rxe/rxe_queue.h
+++ b/drivers/infiniband/sw/rxe/rxe_queue.h
@@ -77,8 +77,7 @@ struct rxe_queue {
 };
 
 int do_mmap_info(struct rxe_dev *rxe,
-		 struct ib_udata *udata,
-		 bool is_req,
+		 struct mminfo __user *outbuf,
 		 struct ib_ucontext *context,
 		 struct rxe_queue_buf *buf,
 		 size_t buf_size,
@@ -94,7 +93,7 @@ int rxe_queue_resize(struct rxe_queue *q,
 		     unsigned int *num_elem_p,
 		     unsigned int elem_size,
 		     struct ib_ucontext *context,
-		     struct ib_udata *udata,
+		     struct mminfo __user *outbuf,
 		     /* Protect producers while resizing queue */
 		     spinlock_t *producer_lock,
 		     /* Protect consumers while resizing queue */
diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index efc832a2d7c6b9..0d6c04ba7fc36c 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -99,7 +99,8 @@ int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 
 int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_init_attr *init,
-		      struct ib_ucontext *context, struct ib_udata *udata)
+		      struct ib_ucontext *context,
+		      struct rxe_create_srq_resp __user *uresp)
 {
 	int err;
 	int srq_wqe_size;
@@ -126,55 +127,41 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 
 	srq->rq.queue = q;
 
-	err = do_mmap_info(rxe, udata, false, context, q->buf,
+	err = do_mmap_info(rxe, uresp ? &uresp->mi : NULL, context, q->buf,
 			   q->buf_size, &q->ip);
 	if (err)
 		return err;
 
-	if (udata && udata->outlen >= sizeof(struct mminfo) + sizeof(u32)) {
-		if (copy_to_user(udata->outbuf + sizeof(struct mminfo),
-				 &srq->srq_num, sizeof(u32)))
+	if (uresp) {
+		if (copy_to_user(&uresp->srq_num, &srq->srq_num,
+				 sizeof(uresp->srq_num)))
 			return -EFAULT;
 	}
+
 	return 0;
 }
 
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
-		      struct ib_udata *udata)
+		      struct rxe_modify_srq_cmd *ucmd)
 {
 	int err;
 	struct rxe_queue *q = srq->rq.queue;
-	struct mminfo mi = { .offset = 1, .size = 0};
+	struct mminfo __user *mi = NULL;
 
 	if (mask & IB_SRQ_MAX_WR) {
-		/* Check that we can write the mminfo struct to user space */
-		if (udata && udata->inlen >= sizeof(__u64)) {
-			__u64 mi_addr;
-
-			/* Get address of user space mminfo struct */
-			err = ib_copy_from_udata(&mi_addr, udata,
-						 sizeof(mi_addr));
-			if (err)
-				goto err1;
-
-			udata->outbuf = (void __user *)(unsigned long)mi_addr;
-			udata->outlen = sizeof(mi);
-
-			if (!access_ok(VERIFY_WRITE,
-				       (void __user *)udata->outbuf,
-				       udata->outlen)) {
-				err = -EFAULT;
-				goto err1;
-			}
-		}
+		/*
+		 * This is completely screwed up, the response is supposed to
+		 * be in the outbuf not like this.
+		 */
+		mi = u64_to_user_ptr(ucmd->mmap_info_addr);
 
 		err = rxe_queue_resize(q, &attr->max_wr,
 				       rcv_wqe_size(srq->rq.max_sge),
 				       srq->rq.queue->ip ?
 				       srq->rq.queue->ip->context : NULL,
-				       udata, &srq->rq.producer_lock,
+				       mi, &srq->rq.producer_lock,
 				       &srq->rq.consumer_lock);
 		if (err)
 			goto err2;
@@ -188,6 +175,5 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 err2:
 	rxe_queue_cleanup(q);
 	srq->rq.queue = NULL;
-err1:
 	return err;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 34539c3242a8c3..ced79e49234b22 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -407,6 +407,13 @@ static struct ib_srq *rxe_create_srq(struct ib_pd *ibpd,
 	struct rxe_pd *pd = to_rpd(ibpd);
 	struct rxe_srq *srq;
 	struct ib_ucontext *context = udata ? ibpd->uobject->context : NULL;
+	struct rxe_create_srq_resp __user *uresp = NULL;
+
+	if (udata) {
+		if (udata->outlen < sizeof(*uresp))
+			return ERR_PTR(-EINVAL);
+		uresp = udata->outbuf;
+	}
 
 	err = rxe_srq_chk_attr(rxe, NULL, &init->attr, IB_SRQ_INIT_MASK);
 	if (err)
@@ -422,7 +429,7 @@ static struct ib_srq *rxe_create_srq(struct ib_pd *ibpd,
 	rxe_add_ref(pd);
 	srq->pd = pd;
 
-	err = rxe_srq_from_init(rxe, srq, init, context, udata);
+	err = rxe_srq_from_init(rxe, srq, init, context, uresp);
 	if (err)
 		goto err2;
 
@@ -443,12 +450,22 @@ static int rxe_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr,
 	int err;
 	struct rxe_srq *srq = to_rsrq(ibsrq);
 	struct rxe_dev *rxe = to_rdev(ibsrq->device);
+	struct rxe_modify_srq_cmd ucmd = {};
+
+	if (udata) {
+		if (udata->inlen < sizeof(ucmd))
+			return -EINVAL;
+
+		err = ib_copy_from_udata(&ucmd, udata, sizeof(ucmd));
+		if (err)
+			return err;
+	}
 
 	err = rxe_srq_chk_attr(rxe, srq, attr, mask);
 	if (err)
 		goto err1;
 
-	err = rxe_srq_from_attr(rxe, srq, attr, mask, udata);
+	err = rxe_srq_from_attr(rxe, srq, attr, mask, &ucmd);
 	if (err)
 		goto err1;
 
@@ -517,6 +534,13 @@ static struct ib_qp *rxe_create_qp(struct ib_pd *ibpd,
 	struct rxe_dev *rxe = to_rdev(ibpd->device);
 	struct rxe_pd *pd = to_rpd(ibpd);
 	struct rxe_qp *qp;
+	struct rxe_create_qp_resp __user *uresp = NULL;
+
+	if (udata) {
+		if (udata->outlen < sizeof(*uresp))
+			return ERR_PTR(-EINVAL);
+		uresp = udata->outbuf;
+	}
 
 	err = rxe_qp_chk_init(rxe, init);
 	if (err)
@@ -538,7 +562,7 @@ static struct ib_qp *rxe_create_qp(struct ib_pd *ibpd,
 
 	rxe_add_index(qp);
 
-	err = rxe_qp_from_init(rxe, qp, pd, init, udata, ibpd);
+	err = rxe_qp_from_init(rxe, qp, pd, init, uresp, ibpd);
 	if (err)
 		goto err3;
 
@@ -888,6 +912,13 @@ static struct ib_cq *rxe_create_cq(struct ib_device *dev,
 	int err;
 	struct rxe_dev *rxe = to_rdev(dev);
 	struct rxe_cq *cq;
+	struct rxe_create_cq_resp __user *uresp = NULL;
+
+	if (udata) {
+		if (udata->outlen < sizeof(*uresp))
+			return ERR_PTR(-EINVAL);
+		uresp = udata->outbuf;
+	}
 
 	if (attr->flags)
 		return ERR_PTR(-EINVAL);
@@ -903,7 +934,7 @@ static struct ib_cq *rxe_create_cq(struct ib_device *dev,
 	}
 
 	err = rxe_cq_from_init(rxe, cq, attr->cqe, attr->comp_vector,
-			       context, udata);
+			       context, uresp);
 	if (err)
 		goto err2;
 
@@ -930,12 +961,19 @@ static int rxe_resize_cq(struct ib_cq *ibcq, int cqe, struct ib_udata *udata)
 	int err;
 	struct rxe_cq *cq = to_rcq(ibcq);
 	struct rxe_dev *rxe = to_rdev(ibcq->device);
+	struct rxe_resize_cq_resp __user *uresp = NULL;
+
+	if (udata) {
+		if (udata->outlen < sizeof(*uresp))
+			return -EINVAL;
+		uresp = udata->outbuf;
+	}
 
 	err = rxe_cq_chk_attr(rxe, cq, cqe, 0);
 	if (err)
 		goto err1;
 
-	err = rxe_cq_resize_queue(cq, cqe, udata);
+	err = rxe_cq_resize_queue(cq, cqe, uresp);
 	if (err)
 		goto err1;
 
diff --git a/include/uapi/rdma/rdma_user_rxe.h b/include/uapi/rdma/rdma_user_rxe.h
index e3e6852b58eb45..b3b1bfc8fa21af 100644
--- a/include/uapi/rdma/rdma_user_rxe.h
+++ b/include/uapi/rdma/rdma_user_rxe.h
@@ -144,4 +144,26 @@ struct rxe_recv_wqe {
 	struct rxe_dma_info dma;
 };
 
+struct rxe_create_cq_resp {
+	struct mminfo mi;
+};
+
+struct rxe_resize_cq_resp {
+	struct mminfo mi;
+};
+
+struct rxe_create_qp_resp {
+	struct mminfo rq_mi;
+	struct mminfo sq_mi;
+};
+
+struct rxe_create_srq_resp {
+	struct mminfo mi;
+	__u32 srq_num;
+};
+
+struct rxe_modify_srq_cmd {
+	__u64 mmap_info_addr;
+};
+
 #endif /* RDMA_USER_RXE_H */
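
The same entry-point pattern repeats above for create_srq, modify_srq, create_qp,
create_cq and resize_cq: validate the udata length once, then treat udata->outbuf
(or the copied-in command) as the typed uapi struct. As a recap, here is a minimal
sketch of that pattern using the rxe_create_cq_resp struct added by this patch; the
function name and the called helper are illustrative only and not part of the diff:

static struct ib_cq *example_create_cq(struct ib_device *dev,
				       const struct ib_cq_init_attr *attr,
				       struct ib_ucontext *context,
				       struct ib_udata *udata)
{
	struct rxe_create_cq_resp __user *uresp = NULL;

	if (udata) {
		/* Userspace must provide room for the whole response struct. */
		if (udata->outlen < sizeof(*uresp))
			return ERR_PTR(-EINVAL);
		/* The response layout is a struct, not open-coded offsets. */
		uresp = udata->outbuf;
	}

	/*
	 * The typed pointer (NULL for kernel callers) is handed down; the
	 * lower layer then copies into uresp->mi with copy_to_user() rather
	 * than computing byte offsets into udata->outbuf.
	 */
	return example_cq_alloc(dev, attr, context, uresp); /* hypothetical helper */
}

Keeping the layout in include/uapi/rdma/rdma_user_rxe.h this way means userspace
and the kernel share one definition of the response instead of each side
open-coding the same offsets.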