From patchwork Sun Oct 14 02:44:02 2018
X-Patchwork-Submitter: Yixian Liu
X-Patchwork-Id: 10640501
X-Patchwork-Delegate: jgg@ziepe.ca
From: Yixian Liu
Subject: [PATCH v3 rdma-core 3/3] libhns: Add bind mw support for hip08
Date: Sun, 14 Oct 2018 10:44:02 +0800
Message-ID: <1539485042-50118-4-git-send-email-liuyixian@huawei.com>
In-Reply-To: <1539485042-50118-1-git-send-email-liuyixian@huawei.com>
References: <1539485042-50118-1-git-send-email-liuyixian@huawei.com>
X-Mailer: git-send-email 2.7.4
X-Mailing-List: linux-rdma@vger.kernel.org

This patch adds memory window bind support in the user space driver.
Signed-off-by: Yixian Liu
---
 providers/hns/hns_roce_u.c       |  1 +
 providers/hns/hns_roce_u.h       |  2 ++
 providers/hns/hns_roce_u_hw_v2.c | 39 +++++++++++++++++++++++++++++++++++++--
 providers/hns/hns_roce_u_hw_v2.h | 12 ++++++++++++
 providers/hns/hns_roce_u_verbs.c | 40 ++++++++++++++++++++++++++++++++++++++++
 5 files changed, 92 insertions(+), 2 deletions(-)

diff --git a/providers/hns/hns_roce_u.c b/providers/hns/hns_roce_u.c
index 2d12365..3597e9a 100644
--- a/providers/hns/hns_roce_u.c
+++ b/providers/hns/hns_roce_u.c
@@ -64,6 +64,7 @@ static const struct verbs_match_ent hca_table[] = {
 static const struct verbs_context_ops hns_common_ops = {
 	.alloc_mw = hns_roce_u_alloc_mw,
 	.alloc_pd = hns_roce_u_alloc_pd,
+	.bind_mw = hns_roce_u_bind_mw,
 	.cq_event = hns_roce_u_cq_event,
 	.create_cq = hns_roce_u_create_cq,
 	.create_qp = hns_roce_u_create_qp,
diff --git a/providers/hns/hns_roce_u.h b/providers/hns/hns_roce_u.h
index ac75533..93c917d 100644
--- a/providers/hns/hns_roce_u.h
+++ b/providers/hns/hns_roce_u.h
@@ -277,6 +277,8 @@ int hns_roce_u_dereg_mr(struct verbs_mr *mr);
 struct ibv_mw *hns_roce_u_alloc_mw(struct ibv_pd *pd, enum ibv_mw_type type);
 int hns_roce_u_dealloc_mw(struct ibv_mw *mw);
+int hns_roce_u_bind_mw(struct ibv_qp *qp, struct ibv_mw *mw,
+		       struct ibv_mw_bind *mw_bind);
 
 struct ibv_cq *hns_roce_u_create_cq(struct ibv_context *context, int cqe,
 				    struct ibv_comp_channel *channel,
diff --git a/providers/hns/hns_roce_u_hw_v2.c b/providers/hns/hns_roce_u_hw_v2.c
index de8a96c..f9551e5 100644
--- a/providers/hns/hns_roce_u_hw_v2.c
+++ b/providers/hns/hns_roce_u_hw_v2.c
@@ -548,8 +548,8 @@ static int hns_roce_u_v2_arm_cq(struct ibv_cq *ibvcq, int solicited)
 	return 0;
 }
 
-static int hns_roce_u_v2_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
-				   struct ibv_send_wr **bad_wr)
+int hns_roce_u_v2_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
+			    struct ibv_send_wr **bad_wr)
 {
 	unsigned int sq_shift;
 	unsigned int ind_sge;
@@ -710,6 +710,41 @@ static int hns_roce_u_v2_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 			wqe += sizeof(struct hns_roce_v2_wqe_data_seg);
 			set_atomic_seg(wqe, wr);
 			break;
+
+		case IBV_WR_BIND_MW:
+			roce_set_field(rc_sq_wqe->byte_4,
+				       RC_SQ_WQE_BYTE_4_OPCODE_M,
+				       RC_SQ_WQE_BYTE_4_OPCODE_S,
+				       HNS_ROCE_WQE_OP_BIND_MW_TYPE);
+			roce_set_bit(rc_sq_wqe->byte_4,
+				     RC_SQ_WQE_BYTE_4_MW_TYPE_S,
+				     wr->bind_mw.mw->type - 1);
+			roce_set_bit(rc_sq_wqe->byte_4,
+				     RC_SQ_WQE_BYTE_4_ATOMIC_S,
+				     wr->bind_mw.bind_info.mw_access_flags &
+				     IBV_ACCESS_REMOTE_ATOMIC ? 1 : 0);
+			roce_set_bit(rc_sq_wqe->byte_4,
+				     RC_SQ_WQE_BYTE_4_RDMA_READ_S,
+				     wr->bind_mw.bind_info.mw_access_flags &
+				     IBV_ACCESS_REMOTE_READ ? 1 : 0);
+			roce_set_bit(rc_sq_wqe->byte_4,
+				     RC_SQ_WQE_BYTE_4_RDMA_WRITE_S,
+				     wr->bind_mw.bind_info.mw_access_flags &
+				     IBV_ACCESS_REMOTE_WRITE ? 1 : 0);
+
+			rc_sq_wqe->new_rkey = htole32(wr->bind_mw.rkey);
+			rc_sq_wqe->byte_16 =
+				htole32(wr->bind_mw.bind_info.length &
+					0xffffffff);
+			rc_sq_wqe->byte_20 =
+				htole32(wr->bind_mw.bind_info.length >>
+					32);
+			rc_sq_wqe->rkey =
+				htole32(wr->bind_mw.bind_info.mr->rkey);
+			rc_sq_wqe->va =
+				htole64(wr->bind_mw.bind_info.addr);
+			break;
+
 		default:
 			roce_set_field(rc_sq_wqe->byte_4,
 				       RC_SQ_WQE_BYTE_4_OPCODE_M,
diff --git a/providers/hns/hns_roce_u_hw_v2.h b/providers/hns/hns_roce_u_hw_v2.h
index 99c7b99..ff63bb2 100644
--- a/providers/hns/hns_roce_u_hw_v2.h
+++ b/providers/hns/hns_roce_u_hw_v2.h
@@ -228,6 +228,7 @@ struct hns_roce_rc_sq_wqe {
 	union {
 		__le32 inv_key;
 		__le32 immtdata;
+		__le32 new_rkey;
 	};
 	__le32 byte_16;
 	__le32 byte_20;
@@ -251,6 +252,14 @@ struct hns_roce_rc_sq_wqe {
 
 #define RC_SQ_WQE_BYTE_4_INLINE_S 12
 
+#define RC_SQ_WQE_BYTE_4_MW_TYPE_S 14
+
+#define RC_SQ_WQE_BYTE_4_ATOMIC_S 20
+
+#define RC_SQ_WQE_BYTE_4_RDMA_READ_S 21
+
+#define RC_SQ_WQE_BYTE_4_RDMA_WRITE_S 22
+
 #define RC_SQ_WQE_BYTE_16_XRC_SRQN_S 0
 #define RC_SQ_WQE_BYTE_16_XRC_SRQN_M \
 	(((1UL << 24) - 1) << RC_SQ_WQE_BYTE_16_XRC_SRQN_S)
@@ -280,4 +289,7 @@ struct hns_roce_wqe_atomic_seg {
 	__le64 cmp_data;
 };
 
+int hns_roce_u_v2_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
+			    struct ibv_send_wr **bad_wr);
+
 #endif /* _HNS_ROCE_U_HW_V2_H */
diff --git a/providers/hns/hns_roce_u_verbs.c b/providers/hns/hns_roce_u_verbs.c
index 53c8104..b0f928e 100644
--- a/providers/hns/hns_roce_u_verbs.c
+++ b/providers/hns/hns_roce_u_verbs.c
@@ -175,6 +175,46 @@ int hns_roce_u_dereg_mr(struct verbs_mr *vmr)
 	return ret;
 }
 
+int hns_roce_u_bind_mw(struct ibv_qp *qp, struct ibv_mw *mw,
+		       struct ibv_mw_bind *mw_bind)
+{
+	struct ibv_mw_bind_info *bind_info = &mw_bind->bind_info;
+	struct ibv_send_wr *bad_wr = NULL;
+	struct ibv_send_wr wr = {};
+	int ret;
+
+	if ((mw->pd != qp->pd) || (mw->pd != bind_info->mr->pd))
+		return EINVAL;
+
+	if (mw->type != IBV_MW_TYPE_1)
+		return EINVAL;
+
+	if (!bind_info->mr && bind_info->length)
+		return EINVAL;
+
+	if (bind_info->mw_access_flags & ~(IBV_ACCESS_REMOTE_WRITE |
+	    IBV_ACCESS_REMOTE_READ | IBV_ACCESS_REMOTE_ATOMIC))
+		return EINVAL;
+
+	wr.opcode = IBV_WR_BIND_MW;
+	wr.next = NULL;
+
+	wr.wr_id = mw_bind->wr_id;
+	wr.send_flags = mw_bind->send_flags;
+
+	wr.bind_mw.mw = mw;
+	wr.bind_mw.rkey = ibv_inc_rkey(mw->rkey);
+	wr.bind_mw.bind_info = mw_bind->bind_info;
+
+	ret = hns_roce_u_v2_post_send(qp, &wr, &bad_wr);
+	if (ret)
+		return ret;
+
+	mw->rkey = wr.bind_mw.rkey;
+
+	return 0;
+}
+
 struct ibv_mw *hns_roce_u_alloc_mw(struct ibv_pd *pd, enum ibv_mw_type type)
 {
 	struct ibv_mw *mw;
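For reference, the application-side flow this patch enables: the MR backing
the window must be registered with IBV_ACCESS_MW_BIND, a type-1 window is
allocated from the same PD, and ibv_bind_mw() then lands in
hns_roce_u_bind_mw() above, which posts the IBV_WR_BIND_MW WQE on the QP's
send queue. The sketch below is illustrative only, not part of the patch:
it assumes an already-connected RC QP whose send CQ is busy-polled, and the
helper name bind_type1_mw and the wr_id value are made up for the example.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <infiniband/verbs.h>

/* Bind a type-1 MW over [buf, buf + len) and reap its completion.
 * On success the MR and MW are left live for remote access. */
static int bind_type1_mw(struct ibv_pd *pd, struct ibv_qp *qp,
			 struct ibv_cq *cq, void *buf, size_t len)
{
	struct ibv_mw_bind mw_bind;
	struct ibv_wc wc;
	struct ibv_mr *mr;
	struct ibv_mw *mw;
	int ret, n;

	/* The underlying MR must allow window binding. */
	mr = ibv_reg_mr(pd, buf, len,
			IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_MW_BIND);
	if (!mr)
		return errno;

	/* Same PD as the QP, as hns_roce_u_bind_mw() enforces. */
	mw = ibv_alloc_mw(pd, IBV_MW_TYPE_1);
	if (!mw) {
		ret = errno;
		goto out_mr;
	}

	memset(&mw_bind, 0, sizeof(mw_bind));
	mw_bind.wr_id = 1;	/* illustrative */
	mw_bind.send_flags = IBV_SEND_SIGNALED;
	mw_bind.bind_info.mr = mr;
	mw_bind.bind_info.addr = (uintptr_t)buf;
	mw_bind.bind_info.length = len;
	mw_bind.bind_info.mw_access_flags = IBV_ACCESS_REMOTE_READ |
					    IBV_ACCESS_REMOTE_WRITE;

	/* Posts the BIND_MW WQE via the new provider hook. */
	ret = ibv_bind_mw(qp, mw, &mw_bind);
	if (ret)
		goto out_mw;

	/* The bind is an ordinary SQ operation, so poll its completion. */
	do {
		n = ibv_poll_cq(cq, 1, &wc);
	} while (n == 0);
	if (n < 0 || wc.status != IBV_WC_SUCCESS) {
		ret = EIO;
		goto out_mw;
	}

	/* mw->rkey was bumped with ibv_inc_rkey() by the bind; this is
	 * the rkey to advertise to the remote peer. */
	printf("MW bound, new rkey 0x%x\n", mw->rkey);
	return 0;

out_mw:
	ibv_dealloc_mw(mw);
out_mr:
	ibv_dereg_mr(mr);
	return ret;
}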