From patchwork Tue Dec 4 22:14:52 2018
X-Patchwork-Submitter: "Arumugam, Kamenee"
X-Patchwork-Id: 10712635
X-Patchwork-Delegate: jgg@ziepe.ca
Subject: [PATCH] hfi1verbs: Move receive work queues struct
From: Kamenee Arumugam
To: linux-rdma@vger.kernel.org
Date: Tue, 04 Dec 2018 14:14:52 -0800
Message-ID: <20181204221452.11446.93246.stgit@scvm10.sc.intel.com>
User-Agent: StGit/0.16

The hfi1_rwq and hfi1_rwqe receive work queue structs are shared
between rdmavt and the uAPI. Remove the hfi1_rwq and hfi1_rwqe
definitions from the provider and add the equivalent rvt_rwq and
rvt_rwqe definitions to kernel-headers/rdma/rvt-abi.h so both sides
use a single definition.
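The rvt-abi.h hunk below keeps RDMA_ATOMIC_UAPI() behind an #ifndef
fallback that reduces to a plain __u32. A minimal sketch of how user
space is presumably meant to consume the shared header, assuming the
provider overrides the macro with a C11 atomic before including it;
the override shown here, the <rdma/rvt-abi.h> include path, the
rwq_load_tail() helper and the acquire ordering are illustrative
assumptions, not part of this patch:

#include <stdatomic.h>
#include <stdint.h>

/* Assumed override: map the shared ring indices to C11 atomics in user
 * space; the kernel side keeps the plain __u32 fallback from the header. */
#define RDMA_ATOMIC_UAPI(_type, _name) _Atomic(_type) _name
#include <rdma/rvt-abi.h>

/* tail is advanced by the driver as it consumes receive WQEs; an acquire
 * load makes those updates visible before the provider reuses the slots. */
static uint32_t rwq_load_tail(struct rvt_rwq *rwq)
{
	return atomic_load_explicit(&rwq->tail, memory_order_acquire);
}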
Reviewed-by: Mike Marciniszyn
Signed-off-by: Kamenee Arumugam
---
 kernel-headers/rdma/rvt-abi.h  |   28 ++++++++++++++++++++++++++++
 providers/hfi1verbs/hfiverbs.h |   34 ++++------------------------------
 providers/hfi1verbs/verbs.c    |   28 ++++++++++++++--------------
 3 files changed, 46 insertions(+), 44 deletions(-)
 mode change 100755 => 100644 kernel-headers/rdma/rvt-abi.h
 mode change 100644 => 100755 providers/hfi1verbs/verbs.c

diff --git a/kernel-headers/rdma/rvt-abi.h b/kernel-headers/rdma/rvt-abi.h
old mode 100755
new mode 100644
index c30f2fe..2e14075
--- a/kernel-headers/rdma/rvt-abi.h
+++ b/kernel-headers/rdma/rvt-abi.h
@@ -10,6 +10,7 @@
 #include
 #include
+#include
 
 #ifndef RDMA_ATOMIC_UAPI
 #define RDMA_ATOMIC_UAPI(_type, _name) _type _name
 #endif
@@ -28,4 +29,31 @@ struct rvt_cq_wc {
 	struct ib_uverbs_wc uqueue[0];
 };
 
+/*
+ * Receive work request queue entry.
+ * The size of the sg_list is determined when the QP (or SRQ) is created
+ * and stored in qp->r_rq.max_sge (or srq->rq.max_sge).
+ */
+struct rvt_rwqe {
+	__u64 wr_id;
+	__u8 num_sge;
+	__u8 padding[7];
+	struct ibv_sge sg_list[0];
+};
+
+/*
+ * This structure is used to contain the head pointer, tail pointer,
+ * and receive work queue entries as a single memory allocation so
+ * it can be mmap'ed into user space.
+ * Note that the wq array elements are variable size so you can't
+ * just index into the array to get the N'th element;
+ * use get_rwqe_ptr() instead.
+ */
+struct rvt_rwq {
+	/* new work requests posted to the head */
+	RDMA_ATOMIC_UAPI(__u32, head);
+	/* receives pull requests from here. */
+	RDMA_ATOMIC_UAPI(__u32, tail);
+	struct rvt_rwqe wq[0];
+};
 #endif /* RVT_ABI_USER_H */
diff --git a/providers/hfi1verbs/hfiverbs.h b/providers/hfi1verbs/hfiverbs.h
index 7ce27c8..9931c32 100644
--- a/providers/hfi1verbs/hfiverbs.h
+++ b/providers/hfi1verbs/hfiverbs.h
@@ -83,34 +83,8 @@ struct hfi1_cq {
 	struct rvt_cq_wc *queue;
 	pthread_spinlock_t lock;
 };
-/*
- * Receive work request queue entry.
- * The size of the sg_list is determined when the QP is created and stored
- * in qp->r_max_sge.
- */
-struct hfi1_rwqe {
-	uint64_t wr_id;
-	uint8_t num_sge;
-	uint8_t padding[7];
-	struct ibv_sge sg_list[0];
-};
-
-/*
- * This struture is used to contain the head pointer, tail pointer,
- * and receive work queue entries as a single memory allocation so
- * it can be mmap'ed into user space.
- * Note that the wq array elements are variable size so you can't
- * just index into the array to get the N'th element;
- * use get_rwqe_ptr() instead.
- */
-struct hfi1_rwq {
-	_Atomic(uint32_t) head; /* new requests posted to the head. */
-	_Atomic(uint32_t) tail; /* receives pull requests from here. */
-	struct hfi1_rwqe wq[0];
-};
-
 struct hfi1_rq {
-	struct hfi1_rwq *rwq;
+	struct rvt_rwq *rwq;
 	pthread_spinlock_t lock;
 	uint32_t size;
 	uint32_t max_sge;
@@ -158,12 +132,12 @@ static inline struct hfi1_srq *to_isrq(struct ibv_srq *ibsrq)
  * Since struct hfi1_rwqe is not a fixed size, we can't simply index into
  * struct hfi1_rq.wq.  This function does the array index computation.
  */
-static inline struct hfi1_rwqe *get_rwqe_ptr(struct hfi1_rq *rq,
+static inline struct rvt_rwqe *get_rwqe_ptr(struct hfi1_rq *rq,
 					     unsigned n)
 {
-	return (struct hfi1_rwqe *)
+	return (struct rvt_rwqe *)
 		((char *) rq->rwq->wq +
-		 (sizeof(struct hfi1_rwqe) +
+		 (sizeof(struct rvt_rwqe) +
 		  rq->max_sge * sizeof(struct ibv_sge)) * n);
 }
 
diff --git a/providers/hfi1verbs/verbs.c b/providers/hfi1verbs/verbs.c
old mode 100644
new mode 100755
index dcf6714..5a527ab
--- a/providers/hfi1verbs/verbs.c
+++ b/providers/hfi1verbs/verbs.c
@@ -342,8 +342,8 @@ struct ibv_qp *hfi1_create_qp(struct ibv_pd *pd, struct ibv_qp_init_attr *attr)
 	} else {
 		qp->rq.size = attr->cap.max_recv_wr + 1;
 		qp->rq.max_sge = attr->cap.max_recv_sge;
-		size = sizeof(struct hfi1_rwq) +
-			(sizeof(struct hfi1_rwqe) +
+		size = sizeof(struct rvt_rwq) +
+			(sizeof(struct rvt_rwqe) +
 			 (sizeof(struct ibv_sge) * qp->rq.max_sge)) * qp->rq.size;
 		qp->rq.rwq = mmap(NULL, size,
@@ -412,8 +412,8 @@ int hfi1_destroy_qp(struct ibv_qp *ibqp)
 	if (qp->rq.rwq) {
 		size_t size;
-		size = sizeof(struct hfi1_rwq) +
-			(sizeof(struct hfi1_rwqe) +
+		size = sizeof(struct rvt_rwq) +
+			(sizeof(struct rvt_rwqe) +
 			 (sizeof(struct ibv_sge) * qp->rq.max_sge)) * qp->rq.size;
 		(void) munmap(qp->rq.rwq, size);
@@ -470,8 +470,8 @@ static int post_recv(struct hfi1_rq *rq, struct ibv_recv_wr *wr,
 		     struct ibv_recv_wr **bad_wr)
 {
 	struct ibv_recv_wr *i;
-	struct hfi1_rwq *rwq;
-	struct hfi1_rwqe *wqe;
+	struct rvt_rwq *rwq;
+	struct rvt_rwqe *wqe;
 	uint32_t head;
 	int n, ret;
@@ -541,8 +541,8 @@ struct ibv_srq *hfi1_create_srq(struct ibv_pd *pd,
 	srq->rq.size = attr->attr.max_wr + 1;
 	srq->rq.max_sge = attr->attr.max_sge;
-	size = sizeof(struct hfi1_rwq) +
-		(sizeof(struct hfi1_rwqe) +
+	size = sizeof(struct rvt_rwq) +
+		(sizeof(struct rvt_rwqe) +
 		 (sizeof(struct ibv_sge) * srq->rq.max_sge)) * srq->rq.size;
 	srq->rq.rwq = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
 			   pd->context->cmd_fd, resp.offset);
@@ -591,8 +591,8 @@ int hfi1_modify_srq(struct ibv_srq *ibsrq,
 	if (attr_mask & IBV_SRQ_MAX_WR) {
 		pthread_spin_lock(&srq->rq.lock);
 		/* Save the old size so we can unmmap the queue. */
-		size = sizeof(struct hfi1_rwq) +
-			(sizeof(struct hfi1_rwqe) +
+		size = sizeof(struct rvt_rwq) +
+			(sizeof(struct rvt_rwqe) +
 			 (sizeof(struct ibv_sge) * srq->rq.max_sge)) * srq->rq.size;
 	}
@@ -607,8 +607,8 @@ int hfi1_modify_srq(struct ibv_srq *ibsrq,
 	if (attr_mask & IBV_SRQ_MAX_WR) {
 		(void) munmap(srq->rq.rwq, size);
 		srq->rq.size = attr->max_wr + 1;
-		size = sizeof(struct hfi1_rwq) +
-			(sizeof(struct hfi1_rwqe) +
+		size = sizeof(struct rvt_rwq) +
+			(sizeof(struct rvt_rwqe) +
 			 (sizeof(struct ibv_sge) * srq->rq.max_sge)) * srq->rq.size;
 		srq->rq.rwq = mmap(NULL, size,
@@ -649,8 +649,8 @@ int hfi1_destroy_srq(struct ibv_srq *ibsrq)
 	if (ret)
 		return ret;
-	size = sizeof(struct hfi1_rwq) +
-		(sizeof(struct hfi1_rwqe) +
+	size = sizeof(struct rvt_rwq) +
+		(sizeof(struct rvt_rwqe) +
 		 (sizeof(struct ibv_sge) * srq->rq.max_sge)) * srq->rq.size;
 	(void) munmap(srq->rq.rwq, size);
 	free(srq);
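For reference, a small standalone sketch (not part of the patch) of the
two computations the provider repeats above: the total size of the
mmap'ed receive queue region, and the byte-offset lookup of the N'th
entry, which is needed because each rvt_rwqe carries max_sge trailing
SGEs so plain array indexing does not work. The stub struct definitions
only mirror rvt-abi.h and struct ibv_sge so the sketch compiles on its
own; rwq_bytes() and rwqe_at() are illustrative names, not functions
from the tree.

#include <stddef.h>
#include <stdint.h>

/* Stubs mirroring struct ibv_sge and the definitions added to rvt-abi.h,
 * included here only to keep the sketch self-contained. */
struct ibv_sge { uint64_t addr; uint32_t length; uint32_t lkey; };

struct rvt_rwqe {
	uint64_t wr_id;
	uint8_t num_sge;
	uint8_t padding[7];
	struct ibv_sge sg_list[0];	/* max_sge entries follow each WQE */
};

struct rvt_rwq {
	uint32_t head;			/* new work requests posted here */
	uint32_t tail;			/* the driver consumes from here */
	struct rvt_rwqe wq[0];		/* 'size' variable-sized entries */
};

/* Bytes to mmap for a receive queue of 'size' entries with 'max_sge' SGEs
 * each; the same expression hfi1_create_qp() and hfi1_create_srq() use. */
static size_t rwq_bytes(uint32_t size, uint32_t max_sge)
{
	return sizeof(struct rvt_rwq) +
	       (sizeof(struct rvt_rwqe) +
		sizeof(struct ibv_sge) * max_sge) * size;
}

/* N'th entry: entries are variable sized, so step by bytes instead of
 * indexing wq[] directly, exactly as get_rwqe_ptr() does above. */
static struct rvt_rwqe *rwqe_at(struct rvt_rwq *rwq, uint32_t max_sge,
				unsigned int n)
{
	return (struct rvt_rwqe *)
		((char *)rwq->wq +
		 (sizeof(struct rvt_rwqe) +
		  sizeof(struct ibv_sge) * max_sge) * n);
}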