From patchwork Sat Feb 16 09:03:29 2019
From: Lijun Ou
Subject: [PATCH rdma-core 1/5] libhns: CQ depth does not support 0
Date: Sat, 16 Feb 2019 17:03:29 +0800
Message-ID: <1550307813-151285-2-git-send-email-oulijun@huawei.com>
In-Reply-To: <1550307813-151285-1-git-send-email-oulijun@huawei.com>
References: <1550307813-151285-1-git-send-email-oulijun@huawei.com>
List-ID: linux-rdma@vger.kernel.org

From: chenglang

When the user configures a CQ depth smaller than 64, the driver silently
raises it to 64. However, the hip0x series of hardware does not support a
user-configured depth of 0, so modify the userspace driver to validate the
whole parameter range up front.

Signed-off-by: chenglang
---
 providers/hns/hns_roce_u_verbs.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/providers/hns/hns_roce_u_verbs.c b/providers/hns/hns_roce_u_verbs.c
index 05c2a8e..e2e27a6 100644
--- a/providers/hns/hns_roce_u_verbs.c
+++ b/providers/hns/hns_roce_u_verbs.c
@@ -304,6 +304,9 @@ static int hns_roce_verify_cq(int *cqe, struct hns_roce_context *context)
 	struct hns_roce_device *hr_dev =
 		to_hr_dev(context->ibv_ctx.context.device);
 
+	if (*cqe < 1 || *cqe > context->max_cqe)
+		return -1;
+
 	if (hr_dev->hw_version == HNS_ROCE_HW_VER1)
 		if (*cqe < HNS_ROCE_MIN_CQE_NUM) {
 			fprintf(stderr,
@@ -312,9 +315,6 @@ static int hns_roce_verify_cq(int *cqe, struct hns_roce_context *context)
 			*cqe = HNS_ROCE_MIN_CQE_NUM;
 		}
 
-	if (*cqe > context->max_cqe)
-		return -1;
-
 	return 0;
 }
From patchwork Sat Feb 16 09:03:30 2019
From: Lijun Ou
Subject: [PATCH rdma-core 2/5] libhns: Fix errors detected by Cppcheck tool
Date: Sat, 16 Feb 2019 17:03:30 +0800
Message-ID: <1550307813-151285-3-git-send-email-oulijun@huawei.com>
In-Reply-To: <1550307813-151285-1-git-send-email-oulijun@huawei.com>
References: <1550307813-151285-1-git-send-email-oulijun@huawei.com>
List-ID: linux-rdma@vger.kernel.org

From: chenglang

The driver passes one member of the resp structure to the ib core, which
then uses container_of() to initialize all of resp's members, after which
the driver reads another member of resp. The Cppcheck static analysis tool
flags this pattern as an uninitStructMember error. Initialize resp in the
driver to remove the dependence on that behaviour.

Signed-off-by: chenglang
Signed-off-by: Lijun Ou
---
 providers/hns/hns_roce_u.c       | 2 +-
 providers/hns/hns_roce_u_verbs.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/providers/hns/hns_roce_u.c b/providers/hns/hns_roce_u.c
index 8113c00..15e52f6 100644
--- a/providers/hns/hns_roce_u.c
+++ b/providers/hns/hns_roce_u.c
@@ -92,7 +92,7 @@ static struct verbs_context *hns_roce_alloc_context(struct ibv_device *ibdev,
 	struct ibv_get_context cmd;
 	struct ibv_device_attr dev_attrs;
 	struct hns_roce_context *context;
-	struct hns_roce_alloc_ucontext_resp resp;
+	struct hns_roce_alloc_ucontext_resp resp = {};
 	struct hns_roce_device *hr_dev = to_hr_dev(ibdev);
 
 	context = verbs_init_and_alloc_context(ibdev, cmd_fd, context, ibv_ctx,
diff --git a/providers/hns/hns_roce_u_verbs.c b/providers/hns/hns_roce_u_verbs.c
index e2e27a6..4c60375 100644
--- a/providers/hns/hns_roce_u_verbs.c
+++ b/providers/hns/hns_roce_u_verbs.c
@@ -89,7 +89,7 @@ struct ibv_pd *hns_roce_u_alloc_pd(struct ibv_context *context)
 {
 	struct ibv_alloc_pd cmd;
 	struct hns_roce_pd *pd;
-	struct hns_roce_alloc_pd_resp resp;
+	struct hns_roce_alloc_pd_resp resp = {};
 
 	pd = (struct hns_roce_pd *)malloc(sizeof(*pd));
 	if (!pd)
From patchwork Sat Feb 16 09:03:31 2019
From: Lijun Ou
Subject: [PATCH rdma-core 3/5] libhns: Package some lines for calculating qp buffer size
Date: Sat, 16 Feb 2019 17:03:31 +0800
Message-ID: <1550307813-151285-4-git-send-email-oulijun@huawei.com>
In-Reply-To: <1550307813-151285-1-git-send-email-oulijun@huawei.com>
References: <1550307813-151285-1-git-send-email-oulijun@huawei.com>
List-ID: linux-rdma@vger.kernel.org

For readability, move the lines that calculate the qp buffer size into an
independent function, and likewise move the lines that allocate the rq
inline buffer space into an independent function.

Signed-off-by: Lijun Ou
---
 providers/hns/hns_roce_u_verbs.c | 100 +++++++++++++++++++++++----------------
 1 file changed, 60 insertions(+), 40 deletions(-)

diff --git a/providers/hns/hns_roce_u_verbs.c b/providers/hns/hns_roce_u_verbs.c
index 4c60375..3413b33 100644
--- a/providers/hns/hns_roce_u_verbs.c
+++ b/providers/hns/hns_roce_u_verbs.c
@@ -658,25 +658,43 @@ static int hns_roce_verify_qp(struct ibv_qp_init_attr *attr,
 	return 0;
 }
 
-static int hns_roce_alloc_qp_buf(struct ibv_pd *pd, struct ibv_qp_cap *cap,
-				 enum ibv_qp_type type, struct hns_roce_qp *qp)
+static int hns_roce_alloc_recv_inl_buf(struct ibv_qp_cap *cap,
+				       struct hns_roce_qp *qp)
 {
 	int i;
-	int page_size = to_hr_dev(pd->context->device)->page_size;
 
-	qp->sq.wrid =
-		(unsigned long *)malloc(qp->sq.wqe_cnt * sizeof(uint64_t));
-	if (!qp->sq.wrid)
+	qp->rq_rinl_buf.wqe_list =
+		(struct hns_roce_rinl_wqe *)calloc(1, qp->rq.wqe_cnt *
+					sizeof(struct hns_roce_rinl_wqe));
+	if (!qp->rq_rinl_buf.wqe_list)
 		return -1;
 
-	if (qp->rq.wqe_cnt) {
-		qp->rq.wrid = malloc(qp->rq.wqe_cnt * sizeof(uint64_t));
-		if (!qp->rq.wrid) {
-			free(qp->sq.wrid);
-			return -1;
-		}
+	qp->rq_rinl_buf.wqe_cnt = qp->rq.wqe_cnt;
+
+	qp->rq_rinl_buf.wqe_list[0].sg_list =
+		(struct hns_roce_rinl_sge *)calloc(1, qp->rq.wqe_cnt *
+			cap->max_recv_sge * sizeof(struct hns_roce_rinl_sge));
+	if (!qp->rq_rinl_buf.wqe_list[0].sg_list) {
+		free(qp->rq_rinl_buf.wqe_list);
+		return -1;
 	}
 
+	for (i = 0; i < qp->rq_rinl_buf.wqe_cnt; i++) {
+		int wqe_size = i * cap->max_recv_sge;
+
+		qp->rq_rinl_buf.wqe_list[i].sg_list =
+			&(qp->rq_rinl_buf.wqe_list[0].sg_list[wqe_size]);
+	}
+
+	return 0;
+}
+
+static int hns_roce_calc_qp_buff_size(struct ibv_pd *pd, struct ibv_qp_cap *cap,
+				      enum ibv_qp_type type,
+				      struct hns_roce_qp *qp)
+{
+	int page_size = to_hr_dev(pd->context->device)->page_size;
+
 	if (to_hr_dev(pd->context->device)->hw_version == HNS_ROCE_HW_VER1) {
 		for (qp->rq.wqe_shift = 4; 1 << qp->rq.wqe_shift <
 			sizeof(struct hns_roce_rc_send_wqe); qp->rq.wqe_shift++)
@@ -704,35 +722,9 @@ static int hns_roce_alloc_qp_buf(struct ibv_pd *pd, struct ibv_qp_cap *cap,
 	else
 		qp->sge.sge_shift = 0;
 
-	/* alloc recv inline buf*/
-	qp->rq_rinl_buf.wqe_list =
-		(struct hns_roce_rinl_wqe *)calloc(1, qp->rq.wqe_cnt *
-					sizeof(struct hns_roce_rinl_wqe));
-	if (!qp->rq_rinl_buf.wqe_list) {
-		if (qp->rq.wqe_cnt)
-			free(qp->rq.wrid);
-		free(qp->sq.wrid);
+	/* alloc recv inline buf */
+	if (hns_roce_alloc_recv_inl_buf(cap, qp))
 		return -1;
-	}
-
-	qp->rq_rinl_buf.wqe_cnt = qp->rq.wqe_cnt;
-
-	qp->rq_rinl_buf.wqe_list[0].sg_list =
-		(struct hns_roce_rinl_sge *)calloc(1, qp->rq.wqe_cnt *
-			cap->max_recv_sge * sizeof(struct hns_roce_rinl_sge));
-	if (!qp->rq_rinl_buf.wqe_list[0].sg_list) {
-		if (qp->rq.wqe_cnt)
-			free(qp->rq.wrid);
-		free(qp->sq.wrid);
-		free(qp->rq_rinl_buf.wqe_list);
-		return -1;
-	}
-	for (i = 0; i < qp->rq_rinl_buf.wqe_cnt; i++) {
-		int wqe_size = i * cap->max_recv_sge;
-
-		qp->rq_rinl_buf.wqe_list[i].sg_list =
-			&(qp->rq_rinl_buf.wqe_list[0].sg_list[wqe_size]);
-	}
 
 	qp->buf_size = align((qp->sq.wqe_cnt << qp->sq.wqe_shift),
 			     page_size) +
@@ -755,6 +747,34 @@ static int hns_roce_alloc_qp_buf(struct ibv_pd *pd, struct ibv_qp_cap *cap,
 		}
 	}
 
+	return 0;
+}
+
+static int hns_roce_alloc_qp_buf(struct ibv_pd *pd, struct ibv_qp_cap *cap,
+				 enum ibv_qp_type type, struct hns_roce_qp *qp)
+{
+	int page_size = to_hr_dev(pd->context->device)->page_size;
+
+	qp->sq.wrid =
+		(unsigned long *)malloc(qp->sq.wqe_cnt * sizeof(uint64_t));
+	if (!qp->sq.wrid)
+		return -1;
+
+	if (qp->rq.wqe_cnt) {
+		qp->rq.wrid = malloc(qp->rq.wqe_cnt * sizeof(uint64_t));
+		if (!qp->rq.wrid) {
+			free(qp->sq.wrid);
+			return -1;
+		}
+	}
+
+	if (hns_roce_calc_qp_buff_size(pd, cap, type, qp)) {
+		if (qp->rq.wqe_cnt)
+			free(qp->rq.wrid);
+		free(qp->sq.wrid);
+		return -1;
+	}
+
 	if (hns_roce_alloc_buf(&qp->buf, align(qp->buf_size, page_size),
 			       to_hr_dev(pd->context->device)->page_size)) {
 		if (qp->rq.wqe_cnt)
From patchwork Sat Feb 16 09:03:32 2019
From: Lijun Ou
Subject: [PATCH rdma-core 4/5] libhns: Package for polling cqe function
Date: Sat, 16 Feb 2019 17:03:32 +0800
Message-ID: <1550307813-151285-5-git-send-email-oulijun@huawei.com>
In-Reply-To: <1550307813-151285-1-git-send-email-oulijun@huawei.com>
References: <1550307813-151285-1-git-send-email-oulijun@huawei.com>
List-ID: linux-rdma@vger.kernel.org

To reduce complexity, move several blocks of code into separate functions
that follow the flow of polling a cqe.

Signed-off-by: Lijun Ou
---
 providers/hns/hns_roce_u_hw_v2.c | 300 +++++++++++++++++++++------------------
 1 file changed, 163 insertions(+), 137 deletions(-)

diff --git a/providers/hns/hns_roce_u_hw_v2.c b/providers/hns/hns_roce_u_hw_v2.c
index 7938b96..597904d 100644
--- a/providers/hns/hns_roce_u_hw_v2.c
+++ b/providers/hns/hns_roce_u_hw_v2.c
@@ -273,6 +273,159 @@ static void hns_roce_v2_clear_qp(struct hns_roce_context *ctx, uint32_t qpn)
 static int hns_roce_u_v2_modify_qp(struct ibv_qp *qp, struct ibv_qp_attr *attr,
 				   int attr_mask);
 
+static int hns_roce_flush_cqe(struct hns_roce_qp **cur_qp, struct ibv_wc *wc)
+{
+	struct ibv_qp_attr attr;
+	int attr_mask;
+	int ret;
+
+	if ((wc->status != IBV_WC_SUCCESS) &&
+	    (wc->status != IBV_WC_WR_FLUSH_ERR)) {
+		attr_mask = IBV_QP_STATE;
+		attr.qp_state = IBV_QPS_ERR;
+		ret = hns_roce_u_v2_modify_qp(&(*cur_qp)->ibv_qp,
+					      &attr, attr_mask);
+		if (ret) {
+			fprintf(stderr, PFX "failed to modify qp!\n");
+			return ret;
+		}
+		(*cur_qp)->ibv_qp.state = IBV_QPS_ERR;
+	}
+
+	return V2_CQ_OK;
+}
+
+static void hns_roce_v2_get_opcode_from_sender(struct hns_roce_v2_cqe *cqe,
+					       struct ibv_wc *wc)
+{
+	/* Get opcode and flag before update the tail point for send */
+	switch (roce_get_field(cqe->byte_4, CQE_BYTE_4_OPCODE_M,
+		CQE_BYTE_4_OPCODE_S) & HNS_ROCE_V2_CQE_OPCODE_MASK) {
+	case HNS_ROCE_SQ_OP_SEND:
+		wc->opcode = IBV_WC_SEND;
+		wc->wc_flags = 0;
+		break;
+	case HNS_ROCE_SQ_OP_SEND_WITH_IMM:
+		wc->opcode = IBV_WC_SEND;
+		wc->wc_flags = IBV_WC_WITH_IMM;
+		break;
+	case HNS_ROCE_SQ_OP_SEND_WITH_INV:
+		wc->opcode = IBV_WC_SEND;
+		break;
+	case HNS_ROCE_SQ_OP_RDMA_READ:
+		wc->opcode = IBV_WC_RDMA_READ;
+		wc->byte_len = le32toh(cqe->byte_cnt);
+		wc->wc_flags = 0;
+		break;
+	case HNS_ROCE_SQ_OP_RDMA_WRITE:
+		wc->opcode = IBV_WC_RDMA_WRITE;
+		wc->wc_flags = 0;
+		break;
+
+	case HNS_ROCE_SQ_OP_RDMA_WRITE_WITH_IMM:
+		wc->opcode = IBV_WC_RDMA_WRITE;
+		wc->wc_flags = IBV_WC_WITH_IMM;
+		break;
+	case HNS_ROCE_SQ_OP_LOCAL_INV:
+		wc->opcode = IBV_WC_LOCAL_INV;
+		wc->wc_flags = IBV_WC_WITH_INV;
+		break;
+	case HNS_ROCE_SQ_OP_ATOMIC_COMP_AND_SWAP:
+		wc->opcode = IBV_WC_COMP_SWAP;
+		wc->byte_len = 8;
+		wc->wc_flags = 0;
+		break;
+	case HNS_ROCE_SQ_OP_ATOMIC_FETCH_AND_ADD:
+		wc->opcode = IBV_WC_FETCH_ADD;
+		wc->byte_len = 8;
+		wc->wc_flags = 0;
+		break;
+	case HNS_ROCE_SQ_OP_BIND_MW:
+		wc->opcode = IBV_WC_BIND_MW;
+		wc->wc_flags = 0;
+		break;
+	default:
+		wc->status = IBV_WC_GENERAL_ERR;
+		wc->wc_flags = 0;
+		break;
+	}
+}
+
+static void hns_roce_v2_get_opcode_from_receiver(struct hns_roce_v2_cqe *cqe,
+						 struct ibv_wc *wc,
+						 uint32_t opcode)
+{
+	switch (opcode) {
+	case HNS_ROCE_RECV_OP_RDMA_WRITE_IMM:
+		wc->opcode = IBV_WC_RECV_RDMA_WITH_IMM;
+		wc->wc_flags = IBV_WC_WITH_IMM;
+		wc->imm_data = htobe32(le32toh(cqe->immtdata));
+		break;
+	case HNS_ROCE_RECV_OP_SEND:
+		wc->opcode = IBV_WC_RECV;
+		wc->wc_flags = 0;
+		break;
+	case HNS_ROCE_RECV_OP_SEND_WITH_IMM:
+		wc->opcode = IBV_WC_RECV;
+		wc->wc_flags = IBV_WC_WITH_IMM;
+		wc->imm_data = htobe32(le32toh(cqe->immtdata));
+		break;
+	case HNS_ROCE_RECV_OP_SEND_WITH_INV:
+		wc->opcode = IBV_WC_RECV;
+		wc->wc_flags = IBV_WC_WITH_INV;
+		wc->invalidated_rkey = le32toh(cqe->rkey);
+		break;
+	default:
+		wc->status = IBV_WC_GENERAL_ERR;
+		break;
+	}
+}
+
+static int hns_roce_handle_recv_inl_wqe(struct hns_roce_v2_cqe *cqe,
+					struct hns_roce_qp **cur_qp,
+					struct ibv_wc *wc, uint32_t opcode)
+{
+	if (((*cur_qp)->ibv_qp.qp_type == IBV_QPT_RC ||
+	     (*cur_qp)->ibv_qp.qp_type == IBV_QPT_UC) &&
+	    (opcode == HNS_ROCE_RECV_OP_SEND ||
+	     opcode == HNS_ROCE_RECV_OP_SEND_WITH_IMM ||
+	     opcode == HNS_ROCE_RECV_OP_SEND_WITH_INV) &&
+	    (roce_get_bit(cqe->byte_4, CQE_BYTE_4_RQ_INLINE_S))) {
+		struct hns_roce_rinl_sge *sge_list;
+		uint32_t wr_num, wr_cnt, sge_num, data_len;
+		uint8_t *wqe_buf;
+		uint32_t sge_cnt, size;
+
+		wr_num = (uint16_t)roce_get_field(cqe->byte_4,
+						  CQE_BYTE_4_WQE_IDX_M,
+						  CQE_BYTE_4_WQE_IDX_S) & 0xffff;
+		wr_cnt = wr_num & ((*cur_qp)->rq.wqe_cnt - 1);
+
+		sge_list = (*cur_qp)->rq_rinl_buf.wqe_list[wr_cnt].sg_list;
+		sge_num = (*cur_qp)->rq_rinl_buf.wqe_list[wr_cnt].sge_cnt;
+		wqe_buf = (uint8_t *)get_recv_wqe_v2(*cur_qp, wr_cnt);
+		data_len = wc->byte_len;
+
+		for (sge_cnt = 0; (sge_cnt < sge_num) && (data_len);
+		     sge_cnt++) {
+			size = sge_list[sge_cnt].len < data_len ?
+			       sge_list[sge_cnt].len : data_len;
+
+			memcpy((void *)sge_list[sge_cnt].addr,
+			       (void *)wqe_buf, size);
+			data_len -= size;
+			wqe_buf += size;
+		}
+
+		if (data_len) {
+			wc->status = IBV_WC_LOC_LEN_ERR;
+			return V2_CQ_POLL_ERR;
+		}
+	}
+
+	return V2_CQ_OK;
+}
+
 static int hns_roce_v2_poll_one(struct hns_roce_cq *cq,
 				struct hns_roce_qp **cur_qp, struct ibv_wc *wc)
 {
@@ -282,11 +435,8 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *cq,
 	uint32_t local_qpn;
 	struct hns_roce_wq *wq = NULL;
 	struct hns_roce_v2_cqe *cqe = NULL;
-	struct hns_roce_rinl_sge *sge_list;
 	struct hns_roce_srq *srq = NULL;
 	uint32_t opcode;
-	struct ibv_qp_attr attr;
-	int attr_mask;
 	int ret;
 
 	/* According to CI, find the relative cqe */
@@ -361,18 +511,7 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *cq,
 	if (roce_get_field(cqe->byte_4, CQE_BYTE_4_STATUS_M,
 			   CQE_BYTE_4_STATUS_S) != HNS_ROCE_V2_CQE_SUCCESS) {
 		hns_roce_v2_handle_error_cqe(cqe, wc);
-
-		/* flush cqe */
-		if ((wc->status != IBV_WC_SUCCESS) &&
-		    (wc->status != IBV_WC_WR_FLUSH_ERR)) {
-			attr_mask = IBV_QP_STATE;
-			attr.qp_state = IBV_QPS_ERR;
-			ret = hns_roce_u_v2_modify_qp(&(*cur_qp)->ibv_qp,
-						      &attr, attr_mask);
-			if (ret)
-				return ret;
-		}
-		return V2_CQ_OK;
+		return hns_roce_flush_cqe(cur_qp, wc);
 	}
 
 	wc->status = IBV_WC_SUCCESS;
@@ -382,132 +521,19 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *cq,
 	 * information of wc
 	 */
 	if (is_send) {
-		/* Get opcode and flag before update the tail point for send */
-		switch (roce_get_field(cqe->byte_4, CQE_BYTE_4_OPCODE_M,
-			CQE_BYTE_4_OPCODE_S) & HNS_ROCE_V2_CQE_OPCODE_MASK) {
-		case HNS_ROCE_SQ_OP_SEND:
-			wc->opcode = IBV_WC_SEND;
-			wc->wc_flags = 0;
-			break;
-
-		case HNS_ROCE_SQ_OP_SEND_WITH_IMM:
-			wc->opcode = IBV_WC_SEND;
-			wc->wc_flags = IBV_WC_WITH_IMM;
-			break;
-
-		case HNS_ROCE_SQ_OP_SEND_WITH_INV:
-			wc->opcode = IBV_WC_SEND;
-			break;
-
-		case HNS_ROCE_SQ_OP_RDMA_READ:
-			wc->opcode = IBV_WC_RDMA_READ;
-			wc->byte_len = le32toh(cqe->byte_cnt);
-			wc->wc_flags = 0;
-			break;
-
-		case HNS_ROCE_SQ_OP_RDMA_WRITE:
-			wc->opcode = IBV_WC_RDMA_WRITE;
-			wc->wc_flags = 0;
-			break;
-
-		case HNS_ROCE_SQ_OP_RDMA_WRITE_WITH_IMM:
-			wc->opcode = IBV_WC_RDMA_WRITE;
-			wc->wc_flags = IBV_WC_WITH_IMM;
-			break;
-		case HNS_ROCE_SQ_OP_LOCAL_INV:
-			wc->opcode = IBV_WC_LOCAL_INV;
-			wc->wc_flags = IBV_WC_WITH_INV;
-			break;
-		case HNS_ROCE_SQ_OP_ATOMIC_COMP_AND_SWAP:
-			wc->opcode = IBV_WC_COMP_SWAP;
-			wc->byte_len = 8;
-			wc->wc_flags = 0;
-			break;
-		case HNS_ROCE_SQ_OP_ATOMIC_FETCH_AND_ADD:
-			wc->opcode = IBV_WC_FETCH_ADD;
-			wc->byte_len = 8;
-			wc->wc_flags = 0;
-			break;
-		case HNS_ROCE_SQ_OP_BIND_MW:
-			wc->opcode = IBV_WC_BIND_MW;
-			wc->wc_flags = 0;
-			break;
-		default:
-			wc->status = IBV_WC_GENERAL_ERR;
-			wc->wc_flags = 0;
-			break;
-		}
+		hns_roce_v2_get_opcode_from_sender(cqe, wc);
 	} else {
 		/* Get opcode and flag in rq&srq */
 		wc->byte_len = le32toh(cqe->byte_cnt);
-		opcode = roce_get_field(cqe->byte_4, CQE_BYTE_4_OPCODE_M,
-			CQE_BYTE_4_OPCODE_S) & HNS_ROCE_V2_CQE_OPCODE_MASK;
-		switch (opcode) {
-		case HNS_ROCE_RECV_OP_RDMA_WRITE_IMM:
-			wc->opcode = IBV_WC_RECV_RDMA_WITH_IMM;
-			wc->wc_flags = IBV_WC_WITH_IMM;
-			wc->imm_data = htobe32(le32toh(cqe->immtdata));
-			break;
-
-		case HNS_ROCE_RECV_OP_SEND:
-			wc->opcode = IBV_WC_RECV;
-			wc->wc_flags = 0;
-			break;
-
-		case HNS_ROCE_RECV_OP_SEND_WITH_IMM:
-			wc->opcode = IBV_WC_RECV;
-			wc->wc_flags = IBV_WC_WITH_IMM;
-			wc->imm_data = htobe32(le32toh(cqe->immtdata));
-			break;
-
-		case HNS_ROCE_RECV_OP_SEND_WITH_INV:
-			wc->opcode = IBV_WC_RECV;
-			wc->wc_flags = IBV_WC_WITH_INV;
-			wc->invalidated_rkey = le32toh(cqe->rkey);
-			break;
-		default:
-			wc->status = IBV_WC_GENERAL_ERR;
-			break;
-		}
-
-		if (((*cur_qp)->ibv_qp.qp_type == IBV_QPT_RC ||
-		     (*cur_qp)->ibv_qp.qp_type == IBV_QPT_UC) &&
-		    (opcode == HNS_ROCE_RECV_OP_SEND ||
-		     opcode == HNS_ROCE_RECV_OP_SEND_WITH_IMM ||
-		     opcode == HNS_ROCE_RECV_OP_SEND_WITH_INV) &&
-		    (roce_get_bit(cqe->byte_4, CQE_BYTE_4_RQ_INLINE_S))) {
-			uint32_t wr_num, wr_cnt, sge_num, data_len;
-			uint8_t *wqe_buf;
-			uint32_t sge_cnt, size;
-
-			wr_num = (uint16_t)roce_get_field(cqe->byte_4,
-						CQE_BYTE_4_WQE_IDX_M,
-						CQE_BYTE_4_WQE_IDX_S) & 0xffff;
-			wr_cnt = wr_num & ((*cur_qp)->rq.wqe_cnt - 1);
-
-			sge_list =
-				(*cur_qp)->rq_rinl_buf.wqe_list[wr_cnt].sg_list;
-			sge_num =
-				(*cur_qp)->rq_rinl_buf.wqe_list[wr_cnt].sge_cnt;
-			wqe_buf = (uint8_t *)get_recv_wqe_v2(*cur_qp, wr_cnt);
-			data_len = wc->byte_len;
-
-			for (sge_cnt = 0; (sge_cnt < sge_num) && (data_len);
-			     sge_cnt++) {
-				size = sge_list[sge_cnt].len < data_len ?
-				       sge_list[sge_cnt].len : data_len;
-
-				memcpy((void *)sge_list[sge_cnt].addr,
-				       (void *)wqe_buf, size);
-				data_len -= size;
-				wqe_buf += size;
-			}
-
-			if (data_len) {
-				wc->status = IBV_WC_LOC_LEN_ERR;
-				return V2_CQ_POLL_ERR;
-			}
+		opcode = roce_get_field(cqe->byte_4, CQE_BYTE_4_OPCODE_M,
+			CQE_BYTE_4_OPCODE_S) & HNS_ROCE_V2_CQE_OPCODE_MASK;
+		hns_roce_v2_get_opcode_from_receiver(cqe, wc, opcode);
+
+		ret = hns_roce_handle_recv_inl_wqe(cqe, cur_qp, wc, opcode);
+		if (ret) {
+			fprintf(stderr,
+				PFX "failed to handle recv inline wqe!\n");
+			return ret;
 		}
 	}
From patchwork Sat Feb 16 09:03:33 2019
From: Lijun Ou
Subject: [PATCH rdma-core 5/5] libhns: Bugfix for using buffer length
Date: Sat, 16 Feb 2019 17:03:33 +0800
Message-ID: <1550307813-151285-6-git-send-email-oulijun@huawei.com>
In-Reply-To: <1550307813-151285-1-git-send-email-oulijun@huawei.com>
References: <1550307813-151285-1-git-send-email-oulijun@huawei.com>
List-ID: linux-rdma@vger.kernel.org

We should pass the buffer length as aligned from the input size, rather
than the raw input size, to the ibv_dontfork_range() function.

Fixes: c24583975044 ("libhns: Add verbs of qp support")

Signed-off-by: Lijun Ou
---
 providers/hns/hns_roce_u_buf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/providers/hns/hns_roce_u_buf.c b/providers/hns/hns_roce_u_buf.c
index f92ea65..27ed90c 100644
--- a/providers/hns/hns_roce_u_buf.c
+++ b/providers/hns/hns_roce_u_buf.c
@@ -46,7 +46,7 @@ int hns_roce_alloc_buf(struct hns_roce_buf *buf, unsigned int size,
 	if (buf->buf == MAP_FAILED)
 		return errno;
 
-	ret = ibv_dontfork_range(buf->buf, size);
+	ret = ibv_dontfork_range(buf->buf, buf->length);
 	if (ret)
 		munmap(buf->buf, buf->length);