From patchwork Tue Feb 19 09:06:37 2019
X-Patchwork-Submitter: Lijun Ou
X-Patchwork-Id: 10819511
From: Lijun Ou
Subject: [PATCH V2 rdma-core 1/5] libhns: CQ depth does not support 0
Date: Tue, 19 Feb 2019 17:06:37 +0800
Message-ID: <1550567201-226345-2-git-send-email-oulijun@huawei.com>

From: chenglang

When the user configures a CQ depth of less than 64, the driver sets the
CQ depth to 64. However, the hip0x series does not support a
user-configured depth of 0, so modify the user-mode driver to reject a
zero depth outright and unify the accepted parameter range.
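For context, a minimal sketch (not part of the patch) of how the tightened
check surfaces to a verbs user; the wrapper below is hypothetical, but
ibv_create_cq() is the standard libibverbs entry point that ends up in
hns_roce_verify_cq():

	#include <stdio.h>
	#include <infiniband/verbs.h>

	/* With this patch, hns_roce_verify_cq() rejects *cqe < 1 (and
	 * still rejects *cqe > max_cqe) before any command reaches the
	 * kernel, so a zero-depth request fails up front instead of
	 * being silently rounded up to 64. */
	static struct ibv_cq *try_create_cq(struct ibv_context *ctx, int cqe)
	{
		struct ibv_cq *cq = ibv_create_cq(ctx, cqe, NULL, NULL, 0);

		if (!cq)
			fprintf(stderr, "ibv_create_cq(cqe=%d) rejected\n", cqe);
		return cq;
	}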
Signed-off-by: chenglang
---
 providers/hns/hns_roce_u_verbs.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/providers/hns/hns_roce_u_verbs.c b/providers/hns/hns_roce_u_verbs.c
index 05c2a8e..e2e27a6 100644
--- a/providers/hns/hns_roce_u_verbs.c
+++ b/providers/hns/hns_roce_u_verbs.c
@@ -304,6 +304,9 @@ static int hns_roce_verify_cq(int *cqe, struct hns_roce_context *context)
 	struct hns_roce_device *hr_dev =
 		to_hr_dev(context->ibv_ctx.context.device);
 
+	if (*cqe < 1 || *cqe > context->max_cqe)
+		return -1;
+
 	if (hr_dev->hw_version == HNS_ROCE_HW_VER1)
 		if (*cqe < HNS_ROCE_MIN_CQE_NUM) {
 			fprintf(stderr,
@@ -312,9 +315,6 @@ static int hns_roce_verify_cq(int *cqe, struct hns_roce_context *context)
 			*cqe = HNS_ROCE_MIN_CQE_NUM;
 		}
 
-	if (*cqe > context->max_cqe)
-		return -1;
-
 	return 0;
 }

From patchwork Tue Feb 19 09:06:38 2019
X-Patchwork-Submitter: Lijun Ou
X-Patchwork-Id: 10819519
From: Lijun Ou
Subject: [PATCH V2 rdma-core 2/5] libhns: Fix errors detected by Cppcheck tool
Date: Tue, 19 Feb 2019 17:06:38 +0800
Message-ID: <1550567201-226345-3-git-send-email-oulijun@huawei.com>

From: chenglang

The driver passes one member of the resp structure to the IB core, and
the core then initializes all of resp's members through container_of();
afterwards the driver reads back another member. The static analysis
tool Cppcheck cannot follow this flow and reports an uninitStructMember
error. Initialize resp in the driver to remove the dependence.
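A contrived, self-contained reduction of the warning (the struct and
helper names here are invented for illustration; in the driver the writer
is the kernel ABI path, which the analyzer cannot see into):

	/* demo_resp stands in for hns_roce_alloc_pd_resp; core_fill()
	 * stands in for the ib core writing the response via
	 * container_of(). */
	struct demo_resp {
		unsigned int a;	/* the member handed to the core */
		unsigned int b;	/* the member the driver reads afterwards */
	};

	static void core_fill(struct demo_resp *resp)
	{
		resp->b = 42;	/* the write the analyzer cannot prove happens */
	}

	unsigned int driver_path(void)
	{
		struct demo_resp resp = {};	/* was: uninitialized declaration */

		core_fill(&resp);
		return resp.b;	/* no uninitStructMember report with "= {}" */
	}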
Signed-off-by: chenglang
Signed-off-by: Lijun Ou
---
 providers/hns/hns_roce_u.c       | 2 +-
 providers/hns/hns_roce_u_verbs.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/providers/hns/hns_roce_u.c b/providers/hns/hns_roce_u.c
index 8113c00..15e52f6 100644
--- a/providers/hns/hns_roce_u.c
+++ b/providers/hns/hns_roce_u.c
@@ -92,7 +92,7 @@ static struct verbs_context *hns_roce_alloc_context(struct ibv_device *ibdev,
 	struct ibv_get_context cmd;
 	struct ibv_device_attr dev_attrs;
 	struct hns_roce_context *context;
-	struct hns_roce_alloc_ucontext_resp resp;
+	struct hns_roce_alloc_ucontext_resp resp = {};
 	struct hns_roce_device *hr_dev = to_hr_dev(ibdev);
 
 	context = verbs_init_and_alloc_context(ibdev, cmd_fd, context, ibv_ctx,
diff --git a/providers/hns/hns_roce_u_verbs.c b/providers/hns/hns_roce_u_verbs.c
index e2e27a6..4c60375 100644
--- a/providers/hns/hns_roce_u_verbs.c
+++ b/providers/hns/hns_roce_u_verbs.c
@@ -89,7 +89,7 @@ struct ibv_pd *hns_roce_u_alloc_pd(struct ibv_context *context)
 {
 	struct ibv_alloc_pd cmd;
 	struct hns_roce_pd *pd;
-	struct hns_roce_alloc_pd_resp resp;
+	struct hns_roce_alloc_pd_resp resp = {};
 
 	pd = (struct hns_roce_pd *)malloc(sizeof(*pd));
 	if (!pd)

From patchwork Tue Feb 19 09:06:39 2019
X-Patchwork-Submitter: Lijun Ou
X-Patchwork-Id: 10819513
From: Lijun Ou
Subject: [PATCH V2 rdma-core 3/5] libhns: Package some lines for calculating qp buffer size
Date: Tue, 19 Feb 2019 17:06:39 +0800
Message-ID: <1550567201-226345-4-git-send-email-oulijun@huawei.com>
For readability, move the lines that calculate the qp buffer size into an
independent function, and likewise move the lines that allocate the rq
inline buffer space into an independent function.

Signed-off-by: Lijun Ou
---
 providers/hns/hns_roce_u_verbs.c | 100 +++++++++++++++++++++++----------------
 1 file changed, 60 insertions(+), 40 deletions(-)

diff --git a/providers/hns/hns_roce_u_verbs.c b/providers/hns/hns_roce_u_verbs.c
index 4c60375..3413b33 100644
--- a/providers/hns/hns_roce_u_verbs.c
+++ b/providers/hns/hns_roce_u_verbs.c
@@ -658,25 +658,43 @@ static int hns_roce_verify_qp(struct ibv_qp_init_attr *attr,
 	return 0;
 }
 
-static int hns_roce_alloc_qp_buf(struct ibv_pd *pd, struct ibv_qp_cap *cap,
-				 enum ibv_qp_type type, struct hns_roce_qp *qp)
+static int hns_roce_alloc_recv_inl_buf(struct ibv_qp_cap *cap,
+				       struct hns_roce_qp *qp)
 {
 	int i;
-	int page_size = to_hr_dev(pd->context->device)->page_size;
 
-	qp->sq.wrid =
-		(unsigned long *)malloc(qp->sq.wqe_cnt * sizeof(uint64_t));
-	if (!qp->sq.wrid)
+	qp->rq_rinl_buf.wqe_list =
+		(struct hns_roce_rinl_wqe *)calloc(1, qp->rq.wqe_cnt *
+					sizeof(struct hns_roce_rinl_wqe));
+	if (!qp->rq_rinl_buf.wqe_list)
 		return -1;
 
-	if (qp->rq.wqe_cnt) {
-		qp->rq.wrid = malloc(qp->rq.wqe_cnt * sizeof(uint64_t));
-		if (!qp->rq.wrid) {
-			free(qp->sq.wrid);
-			return -1;
-		}
+	qp->rq_rinl_buf.wqe_cnt = qp->rq.wqe_cnt;
+
+	qp->rq_rinl_buf.wqe_list[0].sg_list =
+		(struct hns_roce_rinl_sge *)calloc(1, qp->rq.wqe_cnt *
+			cap->max_recv_sge * sizeof(struct hns_roce_rinl_sge));
+	if (!qp->rq_rinl_buf.wqe_list[0].sg_list) {
+		free(qp->rq_rinl_buf.wqe_list);
+		return -1;
 	}
 
+	for (i = 0; i < qp->rq_rinl_buf.wqe_cnt; i++) {
+		int wqe_size = i * cap->max_recv_sge;
+
+		qp->rq_rinl_buf.wqe_list[i].sg_list =
+			&(qp->rq_rinl_buf.wqe_list[0].sg_list[wqe_size]);
+	}
+
+	return 0;
+}
+
+static int hns_roce_calc_qp_buff_size(struct ibv_pd *pd, struct ibv_qp_cap *cap,
+				      enum ibv_qp_type type,
+				      struct hns_roce_qp *qp)
+{
+	int page_size = to_hr_dev(pd->context->device)->page_size;
+
 	if (to_hr_dev(pd->context->device)->hw_version == HNS_ROCE_HW_VER1) {
 		for (qp->rq.wqe_shift = 4;
 		     1 << qp->rq.wqe_shift < sizeof(struct hns_roce_rc_send_wqe);
 		     qp->rq.wqe_shift++)
@@ -704,35 +722,9 @@ static int hns_roce_alloc_qp_buf(struct ibv_pd *pd, struct ibv_qp_cap *cap,
 	else
 		qp->sge.sge_shift = 0;
 
-	/* alloc recv inline buf*/
-	qp->rq_rinl_buf.wqe_list =
-		(struct hns_roce_rinl_wqe *)calloc(1, qp->rq.wqe_cnt *
-					sizeof(struct hns_roce_rinl_wqe));
-	if (!qp->rq_rinl_buf.wqe_list) {
-		if (qp->rq.wqe_cnt)
-			free(qp->rq.wrid);
-		free(qp->sq.wrid);
+	/* alloc recv inline buf */
+	if (hns_roce_alloc_recv_inl_buf(cap, qp))
 		return -1;
-	}
-
-	qp->rq_rinl_buf.wqe_cnt = qp->rq.wqe_cnt;
-
-	qp->rq_rinl_buf.wqe_list[0].sg_list =
-		(struct hns_roce_rinl_sge *)calloc(1, qp->rq.wqe_cnt *
-			cap->max_recv_sge * sizeof(struct hns_roce_rinl_sge));
-	if (!qp->rq_rinl_buf.wqe_list[0].sg_list) {
-		if (qp->rq.wqe_cnt)
-			free(qp->rq.wrid);
-		free(qp->sq.wrid);
-		free(qp->rq_rinl_buf.wqe_list);
-		return -1;
-	}
-	for (i = 0; i < qp->rq_rinl_buf.wqe_cnt; i++) {
-		int wqe_size = i * cap->max_recv_sge;
-
-		qp->rq_rinl_buf.wqe_list[i].sg_list =
-			&(qp->rq_rinl_buf.wqe_list[0].sg_list[wqe_size]);
-	}
 
 	qp->buf_size = align((qp->sq.wqe_cnt << qp->sq.wqe_shift), page_size) +
@@ -755,6 +747,34 @@ static int hns_roce_alloc_qp_buf(struct ibv_pd *pd, struct ibv_qp_cap *cap,
 		}
 	}
 
+	return 0;
+}
+
+static int hns_roce_alloc_qp_buf(struct ibv_pd *pd, struct ibv_qp_cap *cap,
+				 enum ibv_qp_type type, struct hns_roce_qp *qp)
+{
+	int page_size = to_hr_dev(pd->context->device)->page_size;
+
+	qp->sq.wrid =
+		(unsigned long *)malloc(qp->sq.wqe_cnt * sizeof(uint64_t));
+	if (!qp->sq.wrid)
+		return -1;
+
+	if (qp->rq.wqe_cnt) {
+		qp->rq.wrid = malloc(qp->rq.wqe_cnt * sizeof(uint64_t));
+		if (!qp->rq.wrid) {
+			free(qp->sq.wrid);
+			return -1;
+		}
+	}
+
+	if (hns_roce_calc_qp_buff_size(pd, cap, type, qp)) {
+		if (qp->rq.wqe_cnt)
+			free(qp->rq.wrid);
+		free(qp->sq.wrid);
+		return -1;
+	}
+
 	if (hns_roce_alloc_buf(&qp->buf, align(qp->buf_size, page_size),
 			       to_hr_dev(pd->context->device)->page_size)) {
 		if (qp->rq.wqe_cnt)

From patchwork Tue Feb 19 09:06:40 2019
X-Patchwork-Submitter: Lijun Ou
X-Patchwork-Id: 10819515
From: Lijun Ou
Subject: [PATCH V2 rdma-core 4/5] libhns: Package for polling cqe function
Date: Tue, 19 Feb 2019 17:06:40 +0800
Message-ID: <1550567201-226345-5-git-send-email-oulijun@huawei.com>

To reduce complexity, move several blocks of code into separate
functions that follow the flow of polling a cqe.
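The error handling that the new hns_roce_flush_cqe() helper centralizes
can be sketched against the public verbs API as follows (an illustration
only; the driver uses its internal hns_roce_u_v2_modify_qp(), not
ibv_modify_qp()):

	#include <infiniband/verbs.h>

	/* Any completion error other than a flush error moves the QP to
	 * the ERR state, so the remaining outstanding WRs complete with
	 * flush errors; successful and already-flushed completions are
	 * left untouched. */
	static int flush_on_error(struct ibv_qp *qp, const struct ibv_wc *wc)
	{
		struct ibv_qp_attr attr = { .qp_state = IBV_QPS_ERR };

		if (wc->status == IBV_WC_SUCCESS ||
		    wc->status == IBV_WC_WR_FLUSH_ERR)
			return 0;

		return ibv_modify_qp(qp, &attr, IBV_QP_STATE);
	}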
Signed-off-by: Lijun Ou
---
V1->V2:
- Remove warning by TravisCI check
---
 providers/hns/hns_roce_u_hw_v2.c | 300 +++++++++++++++++++++------------------
 1 file changed, 163 insertions(+), 137 deletions(-)

diff --git a/providers/hns/hns_roce_u_hw_v2.c b/providers/hns/hns_roce_u_hw_v2.c
index 7938b96..0a49437 100644
--- a/providers/hns/hns_roce_u_hw_v2.c
+++ b/providers/hns/hns_roce_u_hw_v2.c
@@ -273,6 +273,159 @@ static void hns_roce_v2_clear_qp(struct hns_roce_context *ctx, uint32_t qpn)
 static int hns_roce_u_v2_modify_qp(struct ibv_qp *qp, struct ibv_qp_attr *attr,
 				   int attr_mask);
 
+static int hns_roce_flush_cqe(struct hns_roce_qp **cur_qp, struct ibv_wc *wc)
+{
+	struct ibv_qp_attr attr;
+	int attr_mask;
+	int ret;
+
+	if ((wc->status != IBV_WC_SUCCESS) &&
+	    (wc->status != IBV_WC_WR_FLUSH_ERR)) {
+		attr_mask = IBV_QP_STATE;
+		attr.qp_state = IBV_QPS_ERR;
+		ret = hns_roce_u_v2_modify_qp(&(*cur_qp)->ibv_qp,
+					      &attr, attr_mask);
+		if (ret) {
+			fprintf(stderr, PFX "failed to modify qp!\n");
+			return ret;
+		}
+		(*cur_qp)->ibv_qp.state = IBV_QPS_ERR;
+	}
+
+	return V2_CQ_OK;
+}
+
+static void hns_roce_v2_get_opcode_from_sender(struct hns_roce_v2_cqe *cqe,
+					       struct ibv_wc *wc)
+{
+	/* Get opcode and flag before update the tail point for send */
+	switch (roce_get_field(cqe->byte_4, CQE_BYTE_4_OPCODE_M,
+		CQE_BYTE_4_OPCODE_S) & HNS_ROCE_V2_CQE_OPCODE_MASK) {
+	case HNS_ROCE_SQ_OP_SEND:
+		wc->opcode = IBV_WC_SEND;
+		wc->wc_flags = 0;
+		break;
+	case HNS_ROCE_SQ_OP_SEND_WITH_IMM:
+		wc->opcode = IBV_WC_SEND;
+		wc->wc_flags = IBV_WC_WITH_IMM;
+		break;
+	case HNS_ROCE_SQ_OP_SEND_WITH_INV:
+		wc->opcode = IBV_WC_SEND;
+		break;
+	case HNS_ROCE_SQ_OP_RDMA_READ:
+		wc->opcode = IBV_WC_RDMA_READ;
+		wc->byte_len = le32toh(cqe->byte_cnt);
+		wc->wc_flags = 0;
+		break;
+	case HNS_ROCE_SQ_OP_RDMA_WRITE:
+		wc->opcode = IBV_WC_RDMA_WRITE;
+		wc->wc_flags = 0;
+		break;
+
+	case HNS_ROCE_SQ_OP_RDMA_WRITE_WITH_IMM:
+		wc->opcode = IBV_WC_RDMA_WRITE;
+		wc->wc_flags = IBV_WC_WITH_IMM;
+		break;
+	case HNS_ROCE_SQ_OP_LOCAL_INV:
+		wc->opcode = IBV_WC_LOCAL_INV;
+		wc->wc_flags = IBV_WC_WITH_INV;
+		break;
+	case HNS_ROCE_SQ_OP_ATOMIC_COMP_AND_SWAP:
+		wc->opcode = IBV_WC_COMP_SWAP;
+		wc->byte_len = 8;
+		wc->wc_flags = 0;
+		break;
+	case HNS_ROCE_SQ_OP_ATOMIC_FETCH_AND_ADD:
+		wc->opcode = IBV_WC_FETCH_ADD;
+		wc->byte_len = 8;
+		wc->wc_flags = 0;
+		break;
+	case HNS_ROCE_SQ_OP_BIND_MW:
+		wc->opcode = IBV_WC_BIND_MW;
+		wc->wc_flags = 0;
+		break;
+	default:
+		wc->status = IBV_WC_GENERAL_ERR;
+		wc->wc_flags = 0;
+		break;
+	}
+}
+
+static void hns_roce_v2_get_opcode_from_receiver(struct hns_roce_v2_cqe *cqe,
+						 struct ibv_wc *wc,
+						 uint32_t opcode)
+{
+	switch (opcode) {
+	case HNS_ROCE_RECV_OP_RDMA_WRITE_IMM:
+		wc->opcode = IBV_WC_RECV_RDMA_WITH_IMM;
+		wc->wc_flags = IBV_WC_WITH_IMM;
+		wc->imm_data = htobe32(le32toh(cqe->immtdata));
+		break;
+	case HNS_ROCE_RECV_OP_SEND:
+		wc->opcode = IBV_WC_RECV;
+		wc->wc_flags = 0;
+		break;
+	case HNS_ROCE_RECV_OP_SEND_WITH_IMM:
+		wc->opcode = IBV_WC_RECV;
+		wc->wc_flags = IBV_WC_WITH_IMM;
+		wc->imm_data = htobe32(le32toh(cqe->immtdata));
+		break;
+	case HNS_ROCE_RECV_OP_SEND_WITH_INV:
+		wc->opcode = IBV_WC_RECV;
+		wc->wc_flags = IBV_WC_WITH_INV;
+		wc->invalidated_rkey = le32toh(cqe->rkey);
+		break;
+	default:
+		wc->status = IBV_WC_GENERAL_ERR;
+		break;
+	}
+}
+
+static int hns_roce_handle_recv_inl_wqe(struct hns_roce_v2_cqe *cqe,
+					struct hns_roce_qp **cur_qp,
+					struct ibv_wc *wc, uint32_t opcode)
+{
+	if (((*cur_qp)->ibv_qp.qp_type == IBV_QPT_RC ||
+	    (*cur_qp)->ibv_qp.qp_type == IBV_QPT_UC) &&
+	    (opcode == HNS_ROCE_RECV_OP_SEND ||
+	    opcode == HNS_ROCE_RECV_OP_SEND_WITH_IMM ||
+	    opcode == HNS_ROCE_RECV_OP_SEND_WITH_INV) &&
+	    (roce_get_bit(cqe->byte_4, CQE_BYTE_4_RQ_INLINE_S))) {
+		struct hns_roce_rinl_sge *sge_list;
+		uint32_t wr_num, wr_cnt, sge_num, data_len;
+		uint8_t *wqe_buf;
+		uint32_t sge_cnt, size;
+
+		wr_num = (uint16_t)roce_get_field(cqe->byte_4,
+					CQE_BYTE_4_WQE_IDX_M,
+					CQE_BYTE_4_WQE_IDX_S) & 0xffff;
+		wr_cnt = wr_num & ((*cur_qp)->rq.wqe_cnt - 1);
+
+		sge_list = (*cur_qp)->rq_rinl_buf.wqe_list[wr_cnt].sg_list;
+		sge_num = (*cur_qp)->rq_rinl_buf.wqe_list[wr_cnt].sge_cnt;
+		wqe_buf = (uint8_t *)get_recv_wqe_v2(*cur_qp, wr_cnt);
+		data_len = wc->byte_len;
+
+		for (sge_cnt = 0; (sge_cnt < sge_num) && (data_len);
+		     sge_cnt++) {
+			size = sge_list[sge_cnt].len < data_len ?
+			       sge_list[sge_cnt].len : data_len;
+
+			memcpy((void *)sge_list[sge_cnt].addr,
+			       (void *)wqe_buf, size);
+			data_len -= size;
+			wqe_buf += size;
+		}
+
+		if (data_len) {
+			wc->status = IBV_WC_LOC_LEN_ERR;
+			return V2_CQ_POLL_ERR;
+		}
+	}
+
+	return V2_CQ_OK;
+}
+
 static int hns_roce_v2_poll_one(struct hns_roce_cq *cq,
 				struct hns_roce_qp **cur_qp, struct ibv_wc *wc)
 {
@@ -282,11 +435,8 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *cq,
 	uint32_t local_qpn;
 	struct hns_roce_wq *wq = NULL;
 	struct hns_roce_v2_cqe *cqe = NULL;
-	struct hns_roce_rinl_sge *sge_list;
 	struct hns_roce_srq *srq = NULL;
 	uint32_t opcode;
-	struct ibv_qp_attr attr;
-	int attr_mask;
 	int ret;
 
 	/* According to CI, find the relative cqe */
@@ -361,18 +511,7 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *cq,
 	if (roce_get_field(cqe->byte_4, CQE_BYTE_4_STATUS_M,
 			   CQE_BYTE_4_STATUS_S) != HNS_ROCE_V2_CQE_SUCCESS) {
 		hns_roce_v2_handle_error_cqe(cqe, wc);
-
-		/* flush cqe */
-		if ((wc->status != IBV_WC_SUCCESS) &&
-		    (wc->status != IBV_WC_WR_FLUSH_ERR)) {
-			attr_mask = IBV_QP_STATE;
-			attr.qp_state = IBV_QPS_ERR;
-			ret = hns_roce_u_v2_modify_qp(&(*cur_qp)->ibv_qp,
-						      &attr, attr_mask);
-			if (ret)
-				return ret;
-		}
-		return V2_CQ_OK;
+		return hns_roce_flush_cqe(cur_qp, wc);
 	}
 
 	wc->status = IBV_WC_SUCCESS;
@@ -382,132 +521,19 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *cq,
 	 * information of wc
 	 */
 	if (is_send) {
-		/* Get opcode and flag before update the tail point for send */
-		switch (roce_get_field(cqe->byte_4, CQE_BYTE_4_OPCODE_M,
-			CQE_BYTE_4_OPCODE_S) & HNS_ROCE_V2_CQE_OPCODE_MASK) {
-		case HNS_ROCE_SQ_OP_SEND:
-			wc->opcode = IBV_WC_SEND;
-			wc->wc_flags = 0;
-			break;
-
-		case HNS_ROCE_SQ_OP_SEND_WITH_IMM:
-			wc->opcode = IBV_WC_SEND;
-			wc->wc_flags = IBV_WC_WITH_IMM;
-			break;
-
-		case HNS_ROCE_SQ_OP_SEND_WITH_INV:
-			wc->opcode = IBV_WC_SEND;
-			break;
-
-		case HNS_ROCE_SQ_OP_RDMA_READ:
-			wc->opcode = IBV_WC_RDMA_READ;
-			wc->byte_len = le32toh(cqe->byte_cnt);
-			wc->wc_flags = 0;
-			break;
-
-		case HNS_ROCE_SQ_OP_RDMA_WRITE:
-			wc->opcode = IBV_WC_RDMA_WRITE;
-			wc->wc_flags = 0;
-			break;
-
-		case HNS_ROCE_SQ_OP_RDMA_WRITE_WITH_IMM:
-			wc->opcode = IBV_WC_RDMA_WRITE;
-			wc->wc_flags = IBV_WC_WITH_IMM;
-			break;
-		case HNS_ROCE_SQ_OP_LOCAL_INV:
-			wc->opcode = IBV_WC_LOCAL_INV;
-			wc->wc_flags = IBV_WC_WITH_INV;
-			break;
-		case HNS_ROCE_SQ_OP_ATOMIC_COMP_AND_SWAP:
-			wc->opcode = IBV_WC_COMP_SWAP;
-			wc->byte_len = 8;
-			wc->wc_flags = 0;
-			break;
-		case HNS_ROCE_SQ_OP_ATOMIC_FETCH_AND_ADD:
-			wc->opcode = IBV_WC_FETCH_ADD;
-			wc->byte_len = 8;
-			wc->wc_flags = 0;
-			break;
-		case HNS_ROCE_SQ_OP_BIND_MW:
-			wc->opcode = IBV_WC_BIND_MW;
-			wc->wc_flags = 0;
-			break;
-		default:
-			wc->status = IBV_WC_GENERAL_ERR;
-			wc->wc_flags = 0;
-			break;
-		}
+		hns_roce_v2_get_opcode_from_sender(cqe, wc);
 	} else {
 		/* Get opcode and flag in rq&srq */
 		wc->byte_len = le32toh(cqe->byte_cnt);
-		opcode = roce_get_field(cqe->byte_4, CQE_BYTE_4_OPCODE_M,
-			CQE_BYTE_4_OPCODE_S) & HNS_ROCE_V2_CQE_OPCODE_MASK;
-		switch (opcode) {
-		case HNS_ROCE_RECV_OP_RDMA_WRITE_IMM:
-			wc->opcode = IBV_WC_RECV_RDMA_WITH_IMM;
-			wc->wc_flags = IBV_WC_WITH_IMM;
-			wc->imm_data = htobe32(le32toh(cqe->immtdata));
-			break;
-
-		case HNS_ROCE_RECV_OP_SEND:
-			wc->opcode = IBV_WC_RECV;
-			wc->wc_flags = 0;
-			break;
-
-		case HNS_ROCE_RECV_OP_SEND_WITH_IMM:
-			wc->opcode = IBV_WC_RECV;
-			wc->wc_flags = IBV_WC_WITH_IMM;
-			wc->imm_data = htobe32(le32toh(cqe->immtdata));
-			break;
-
-		case HNS_ROCE_RECV_OP_SEND_WITH_INV:
-			wc->opcode = IBV_WC_RECV;
-			wc->wc_flags = IBV_WC_WITH_INV;
-			wc->invalidated_rkey = le32toh(cqe->rkey);
-			break;
-		default:
-			wc->status = IBV_WC_GENERAL_ERR;
-			break;
-		}
-
-		if (((*cur_qp)->ibv_qp.qp_type == IBV_QPT_RC ||
-		    (*cur_qp)->ibv_qp.qp_type == IBV_QPT_UC) &&
-		    (opcode == HNS_ROCE_RECV_OP_SEND ||
-		    opcode == HNS_ROCE_RECV_OP_SEND_WITH_IMM ||
-		    opcode == HNS_ROCE_RECV_OP_SEND_WITH_INV) &&
-		    (roce_get_bit(cqe->byte_4, CQE_BYTE_4_RQ_INLINE_S))) {
-			uint32_t wr_num, wr_cnt, sge_num, data_len;
-			uint8_t *wqe_buf;
-			uint32_t sge_cnt, size;
-
-			wr_num = (uint16_t)roce_get_field(cqe->byte_4,
-						CQE_BYTE_4_WQE_IDX_M,
-						CQE_BYTE_4_WQE_IDX_S) & 0xffff;
-			wr_cnt = wr_num & ((*cur_qp)->rq.wqe_cnt - 1);
-
-			sge_list =
-				(*cur_qp)->rq_rinl_buf.wqe_list[wr_cnt].sg_list;
-			sge_num =
-				(*cur_qp)->rq_rinl_buf.wqe_list[wr_cnt].sge_cnt;
-			wqe_buf = (uint8_t *)get_recv_wqe_v2(*cur_qp, wr_cnt);
-			data_len = wc->byte_len;
-
-			for (sge_cnt = 0; (sge_cnt < sge_num) && (data_len);
-			     sge_cnt++) {
-				size = sge_list[sge_cnt].len < data_len ?
-				       sge_list[sge_cnt].len : data_len;
-
-				memcpy((void *)sge_list[sge_cnt].addr,
-				       (void *)wqe_buf, size);
-				data_len -= size;
-				wqe_buf += size;
-			}
-
-			if (data_len) {
-				wc->status = IBV_WC_LOC_LEN_ERR;
-				return V2_CQ_POLL_ERR;
-			}
+		opcode = roce_get_field(cqe->byte_4, CQE_BYTE_4_OPCODE_M,
+			CQE_BYTE_4_OPCODE_S) & HNS_ROCE_V2_CQE_OPCODE_MASK;
+		hns_roce_v2_get_opcode_from_receiver(cqe, wc, opcode);
+
+		ret = hns_roce_handle_recv_inl_wqe(cqe, cur_qp, wc, opcode);
+		if (ret) {
+			fprintf(stderr,
+				PFX "failed to handle recv inline wqe!\n");
+			return ret;
		}
 	}

From patchwork Tue Feb 19 09:06:41 2019
X-Patchwork-Submitter: Lijun Ou
X-Patchwork-Id: 10819521
From: Lijun Ou
Subject: [PATCH V2 rdma-core 5/5] libhns: Bugfix for using buffer length
Date: Tue, 19 Feb 2019 17:06:41 +0800
Message-ID: <1550567201-226345-6-git-send-email-oulijun@huawei.com>

ibv_dontfork_range() should be passed the buffer length after it has
been aligned from the input size, i.e. the same length that was used to
map the buffer, not the raw input size.

Fixes: c24583975044 ("libhns: Add verbs of qp support")
Signed-off-by: Lijun Ou
---
V1->V2:
1. Modify the Fixes tag style
---
 providers/hns/hns_roce_u_buf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/providers/hns/hns_roce_u_buf.c b/providers/hns/hns_roce_u_buf.c
index f92ea65..27ed90c 100644
--- a/providers/hns/hns_roce_u_buf.c
+++ b/providers/hns/hns_roce_u_buf.c
@@ -46,7 +46,7 @@ int hns_roce_alloc_buf(struct hns_roce_buf *buf, unsigned int size,
 	if (buf->buf == MAP_FAILED)
 		return errno;
 
-	ret = ibv_dontfork_range(buf->buf, size);
+	ret = ibv_dontfork_range(buf->buf, buf->length);
 	if (ret)
 		munmap(buf->buf, buf->length);
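To summarize the invariant this fix restores, here is a standalone sketch
(demo_buf and the open-coded alignment are simplified stand-ins for the
driver's hns_roce_buf and align(); page_size is assumed to be a power of
two): the dontfork marking must cover exactly the pages that were mapped,
i.e. the aligned length.

	#include <errno.h>
	#include <sys/mman.h>
	#include <infiniband/verbs.h>

	struct demo_buf {
		void *buf;
		unsigned int length;	/* page-aligned mapping length */
	};

	int demo_alloc_buf(struct demo_buf *buf, unsigned int size,
			   unsigned int page_size)
	{
		int ret;

		buf->length = (size + page_size - 1) & ~(page_size - 1);
		buf->buf = mmap(NULL, buf->length, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf->buf == MAP_FAILED)
			return errno;

		/* must match the mapped length: buf->length, not size */
		ret = ibv_dontfork_range(buf->buf, buf->length);
		if (ret)
			munmap(buf->buf, buf->length);

		return ret;
	}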