From patchwork Thu Aug 8 14:53:41 2019 X-Patchwork-Submitter: Lijun Ou X-Patchwork-Id: 11084341 From: Lijun Ou Subject: [PATCH V4 for-next 01/14] RDMA/hns: Encapsulate some lines for setting sq size in user mode Date: Thu, 8 Aug 2019 22:53:41 +0800 Message-ID: <1565276034-97329-2-git-send-email-oulijun@huawei.com> In-Reply-To: <1565276034-97329-1-git-send-email-oulijun@huawei.com> References: <1565276034-97329-1-git-send-email-oulijun@huawei.com> X-Mailing-List: linux-rdma@vger.kernel.org The integrity of the sq size needs to be checked when the related sq parameters are configured. Move the related code into a separate function. Signed-off-by: Lijun Ou --- V3->V4: 1. Drop the replacement of the original dev print interface. V1->V2: 1.
Use the new print interface, as suggested by Gal Pressman, because it can print the detailed ib_device name --- drivers/infiniband/hw/hns/hns_roce_qp.c | 29 ++++++++++++++++++++++------- 1 file changed, 22 insertions(+), 7 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c index 9c272c2..5fcc17e6 100644 --- a/drivers/infiniband/hw/hns/hns_roce_qp.c +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c @@ -324,16 +324,12 @@ static int hns_roce_set_rq_size(struct hns_roce_dev *hr_dev, return 0; } -static int hns_roce_set_user_sq_size(struct hns_roce_dev *hr_dev, - struct ib_qp_cap *cap, - struct hns_roce_qp *hr_qp, - struct hns_roce_ib_create_qp *ucmd) +static int check_sq_size_with_integrity(struct hns_roce_dev *hr_dev, + struct ib_qp_cap *cap, + struct hns_roce_ib_create_qp *ucmd) { u32 roundup_sq_stride = roundup_pow_of_two(hr_dev->caps.max_sq_desc_sz); u8 max_sq_stride = ilog2(roundup_sq_stride); - u32 ex_sge_num; - u32 page_size; - u32 max_cnt; /* Sanity check SQ size before proceeding */ if ((u32)(1 << ucmd->log_sq_bb_count) > hr_dev->caps.max_wqes || @@ -349,6 +345,25 @@ static int hns_roce_set_user_sq_size(struct hns_roce_dev *hr_dev, return -EINVAL; } + return 0; +} + +static int hns_roce_set_user_sq_size(struct hns_roce_dev *hr_dev, + struct ib_qp_cap *cap, + struct hns_roce_qp *hr_qp, + struct hns_roce_ib_create_qp *ucmd) +{ + u32 ex_sge_num; + u32 page_size; + u32 max_cnt; + int ret; + + ret = check_sq_size_with_integrity(hr_dev, cap, ucmd); + if (ret) { + ibdev_err(&hr_dev->ib_dev, "Sanity check sq size failed\n"); + return ret; + } + hr_qp->sq.wqe_cnt = 1 << ucmd->log_sq_bb_count; hr_qp->sq.wqe_shift = ucmd->log_sq_stride; From patchwork Thu Aug 8 14:53:42 2019 X-Patchwork-Submitter: Lijun Ou X-Patchwork-Id: 11084325 From: Lijun Ou Subject: [PATCH V4 for-next 02/14] RDMA/hns: Optimize hns_roce_modify_qp function Date: Thu, 8 Aug 2019 22:53:42 +0800 Message-ID: <1565276034-97329-3-git-send-email-oulijun@huawei.com> In-Reply-To: <1565276034-97329-1-git-send-email-oulijun@huawei.com> References: <1565276034-97329-1-git-send-email-oulijun@huawei.com> This mainly packages some code into new functions in order to reduce code complexity. Signed-off-by: Lijun Ou --- V3->V4: 1. Remove the ibdev prefix print interface 2. Refactor some lines of the mtu check function according to Doug Ledford's review V1->V2: 1. Use ibdev prefix print interface --- drivers/infiniband/hw/hns/hns_roce_qp.c | 117 +++++++++++++++++++------------- 1 file changed, 69 insertions(+), 48 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c index 5fcc17e6..f76617b 100644 --- a/drivers/infiniband/hw/hns/hns_roce_qp.c +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c @@ -1070,48 +1070,41 @@ int to_hr_qp_type(int qp_type) return transport_type; } -int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, - int attr_mask, struct ib_udata *udata) +static int check_mtu_validate(struct hns_roce_dev *hr_dev, + struct hns_roce_qp *hr_qp, + struct ib_qp_attr *attr, int attr_mask) { - struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device); - struct hns_roce_qp *hr_qp = to_hr_qp(ibqp); - enum ib_qp_state cur_state, new_state; struct device *dev = hr_dev->dev; - int ret = -EINVAL; - int p; enum ib_mtu active_mtu; + int p; - mutex_lock(&hr_qp->mutex); - - cur_state = attr_mask & IB_QP_CUR_STATE ? - attr->cur_qp_state : (enum ib_qp_state)hr_qp->state; - new_state = attr_mask & IB_QP_STATE ? - attr->qp_state : cur_state; - - if (ibqp->uobject && - (attr_mask & IB_QP_STATE) && new_state == IB_QPS_ERR) { - if (hr_qp->sdb_en == 1) { - hr_qp->sq.head = *(int *)(hr_qp->sdb.virt_addr); + p = attr_mask & IB_QP_PORT ?
(attr->port_num - 1) : hr_qp->port; + active_mtu = iboe_get_mtu(hr_dev->iboe.netdevs[p]->mtu); - if (hr_qp->rdb_en == 1) - hr_qp->rq.head = *(int *)(hr_qp->rdb.virt_addr); - } else { - dev_warn(dev, "flush cqe is not supported in userspace!\n"); - goto out; - } + if ((hr_dev->caps.max_mtu >= IB_MTU_2048 && + attr->path_mtu > hr_dev->caps.max_mtu) || + attr->path_mtu < IB_MTU_256 || attr->path_mtu > active_mtu) { + dev_err(dev, "attr path_mtu(%d)invalid while modify qp", + attr->path_mtu); + return -EINVAL; } - if (!ib_modify_qp_is_ok(cur_state, new_state, ibqp->qp_type, - attr_mask)) { - dev_err(dev, "ib_modify_qp_is_ok failed\n"); - goto out; - } + return 0; +} + +static int hns_roce_check_qp_attr(struct ib_qp *ibqp, struct ib_qp_attr *attr, + int attr_mask) +{ + struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device); + struct hns_roce_qp *hr_qp = to_hr_qp(ibqp); + struct device *dev = hr_dev->dev; + int p; if ((attr_mask & IB_QP_PORT) && (attr->port_num == 0 || attr->port_num > hr_dev->caps.num_ports)) { dev_err(dev, "attr port_num invalid.attr->port_num=%d\n", attr->port_num); - goto out; + return -EINVAL; } if (attr_mask & IB_QP_PKEY_INDEX) { @@ -1119,23 +1112,7 @@ int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, if (attr->pkey_index >= hr_dev->caps.pkey_table_len[p]) { dev_err(dev, "attr pkey_index invalid.attr->pkey_index=%d\n", attr->pkey_index); - goto out; - } - } - - if (attr_mask & IB_QP_PATH_MTU) { - p = attr_mask & IB_QP_PORT ? (attr->port_num - 1) : hr_qp->port; - active_mtu = iboe_get_mtu(hr_dev->iboe.netdevs[p]->mtu); - - if ((hr_dev->caps.max_mtu == IB_MTU_4096 && - attr->path_mtu > IB_MTU_4096) || - (hr_dev->caps.max_mtu == IB_MTU_2048 && - attr->path_mtu > IB_MTU_2048) || - attr->path_mtu < IB_MTU_256 || - attr->path_mtu > active_mtu) { - dev_err(dev, "attr path_mtu(%d)invalid while modify qp", - attr->path_mtu); - goto out; + return -EINVAL; } } @@ -1143,16 +1120,60 @@ int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, attr->max_rd_atomic > hr_dev->caps.max_qp_init_rdma) { dev_err(dev, "attr max_rd_atomic invalid.attr->max_rd_atomic=%d\n", attr->max_rd_atomic); - goto out; + return -EINVAL; } if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC && attr->max_dest_rd_atomic > hr_dev->caps.max_qp_dest_rdma) { dev_err(dev, "attr max_dest_rd_atomic invalid.attr->max_dest_rd_atomic=%d\n", attr->max_dest_rd_atomic); + return -EINVAL; + } + + if (attr_mask & IB_QP_PATH_MTU) + return check_mtu_validate(hr_dev, hr_qp, attr, attr_mask); + + return 0; +} + +int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, + int attr_mask, struct ib_udata *udata) +{ + struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device); + struct hns_roce_qp *hr_qp = to_hr_qp(ibqp); + enum ib_qp_state cur_state, new_state; + struct device *dev = hr_dev->dev; + int ret = -EINVAL; + + mutex_lock(&hr_qp->mutex); + + cur_state = attr_mask & IB_QP_CUR_STATE ? + attr->cur_qp_state : (enum ib_qp_state)hr_qp->state; + new_state = attr_mask & IB_QP_STATE ? 
attr->qp_state : cur_state; + + if (ibqp->uobject && + (attr_mask & IB_QP_STATE) && new_state == IB_QPS_ERR) { + if (hr_qp->sdb_en == 1) { + hr_qp->sq.head = *(int *)(hr_qp->sdb.virt_addr); + + if (hr_qp->rdb_en == 1) + hr_qp->rq.head = *(int *)(hr_qp->rdb.virt_addr); + } else { + dev_warn(dev, "flush cqe is not supported in userspace!\n"); + goto out; + } + } + + if (!ib_modify_qp_is_ok(cur_state, new_state, ibqp->qp_type, + attr_mask)) { + dev_err(dev, "ib_modify_qp_is_ok failed\n"); goto out; } + ret = hns_roce_check_qp_attr(ibqp, attr, attr_mask); + if (ret) + goto out; + if (cur_state == new_state && cur_state == IB_QPS_RESET) { if (hr_dev->caps.min_wqes) { ret = -EPERM; From patchwork Thu Aug 8 14:53:43 2019 X-Patchwork-Submitter: Lijun Ou X-Patchwork-Id: 11084329 From: Lijun Ou Subject: [PATCH V4 for-next 03/14] RDMA/hns: Update the prompt message for creating and destroying qp Date: Thu, 8 Aug 2019 22:53:43 +0800 Message-ID: <1565276034-97329-4-git-send-email-oulijun@huawei.com> In-Reply-To: <1565276034-97329-1-git-send-email-oulijun@huawei.com> References: <1565276034-97329-1-git-send-email-oulijun@huawei.com> From: Yixian Liu The current prompt message is incorrect when destroying a qp; also add qpn information when creating a qp. Signed-off-by: Yixian Liu Signed-off-by: Lijun Ou --- V3->V4: 1. Remove the new API.
--- drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 6 +++--- drivers/infiniband/hw/hns/hns_roce_qp.c | 3 ++- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c index 59f88bf0..6dff7e2 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c @@ -4571,8 +4571,7 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev, ret = hns_roce_v2_modify_qp(&hr_qp->ibqp, NULL, 0, hr_qp->state, IB_QPS_RESET); if (ret) { - dev_err(dev, "modify QP %06lx to ERR failed.\n", - hr_qp->qpn); + dev_err(dev, "modify QP to Reset failed.\n"); return ret; } } @@ -4641,7 +4640,8 @@ static int hns_roce_v2_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata) ret = hns_roce_v2_destroy_qp_common(hr_dev, hr_qp, udata); if (ret) { - dev_err(hr_dev->dev, "Destroy qp failed(%d)\n", ret); + dev_err(&hr_dev->dev, "Destroy qp 0x%06lx failed(%d)\n", + hr_qp->qpn, ret); return ret; } diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c index f76617b..f803209 100644 --- a/drivers/infiniband/hw/hns/hns_roce_qp.c +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c @@ -1002,7 +1002,8 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *pd, ret = hns_roce_create_qp_common(hr_dev, pd, init_attr, udata, 0, hr_qp); if (ret) { - dev_err(dev, "Create RC QP failed\n"); + dev_err(dev, "Create RC QP 0x%06lx failed(%d)\n", + hr_qp->qpn, ret); kfree(hr_qp); return ERR_PTR(ret); } From patchwork Thu Aug 8 14:53:44 2019 X-Patchwork-Submitter: Lijun Ou X-Patchwork-Id: 11084321 From: Lijun Ou Subject: [PATCH V4 for-next 04/14] RDMA/hns: Remove unnecessary init for cmq reg Date: Thu, 8 Aug 2019 22:53:44 +0800 Message-ID: <1565276034-97329-5-git-send-email-oulijun@huawei.com> X-Mailer: git-send-email 1.9.1
In-Reply-To: <1565276034-97329-1-git-send-email-oulijun@huawei.com> References: <1565276034-97329-1-git-send-email-oulijun@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.71.200.31] X-CFilter-Loop: Reflected Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Yixian Liu There is no need to init the enable bit of cmq. Signed-off-by: Yixian Liu --- drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 6 ++---- drivers/infiniband/hw/hns/hns_roce_hw_v2.h | 2 -- 2 files changed, 2 insertions(+), 6 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c index 6dff7e2..eedafd6 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c @@ -887,8 +887,7 @@ static void hns_roce_cmq_init_regs(struct hns_roce_dev *hr_dev, bool ring_type) roce_write(hr_dev, ROCEE_TX_CMQ_BASEADDR_H_REG, upper_32_bits(dma)); roce_write(hr_dev, ROCEE_TX_CMQ_DEPTH_REG, - (ring->desc_num >> HNS_ROCE_CMQ_DESC_NUM_S) | - HNS_ROCE_CMQ_ENABLE); + ring->desc_num >> HNS_ROCE_CMQ_DESC_NUM_S); roce_write(hr_dev, ROCEE_TX_CMQ_HEAD_REG, 0); roce_write(hr_dev, ROCEE_TX_CMQ_TAIL_REG, 0); } else { @@ -896,8 +895,7 @@ static void hns_roce_cmq_init_regs(struct hns_roce_dev *hr_dev, bool ring_type) roce_write(hr_dev, ROCEE_RX_CMQ_BASEADDR_H_REG, upper_32_bits(dma)); roce_write(hr_dev, ROCEE_RX_CMQ_DEPTH_REG, - (ring->desc_num >> HNS_ROCE_CMQ_DESC_NUM_S) | - HNS_ROCE_CMQ_ENABLE); + ring->desc_num >> HNS_ROCE_CMQ_DESC_NUM_S); roce_write(hr_dev, ROCEE_RX_CMQ_HEAD_REG, 0); roce_write(hr_dev, ROCEE_RX_CMQ_TAIL_REG, 0); } diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h index 478f5a5..58931b5 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h @@ -126,8 +126,6 @@ #define HNS_ROCE_CMD_FLAG_ERR_INTR BIT(HNS_ROCE_CMD_FLAG_ERR_INTR_SHIFT) #define HNS_ROCE_CMQ_DESC_NUM_S 3 -#define HNS_ROCE_CMQ_EN_B 16 -#define HNS_ROCE_CMQ_ENABLE BIT(HNS_ROCE_CMQ_EN_B) #define HNS_ROCE_CMQ_SCC_CLR_DONE_CNT 5 From patchwork Thu Aug 8 14:53:45 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lijun Ou X-Patchwork-Id: 11084317 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6551C14DB for ; Thu, 8 Aug 2019 14:58:16 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 511DF28A89 for ; Thu, 8 Aug 2019 14:58:16 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 409F328B18; Thu, 8 Aug 2019 14:58:16 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B699228A89 for ; Thu, 8 Aug 2019 14:58:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1733119AbfHHO6P (ORCPT ); Thu, 8 Aug 2019 10:58:15 -0400 Received: from szxga04-in.huawei.com ([45.249.212.190]:4200 "EHLO huawei.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id 
S1732667AbfHHO6O (ORCPT ); Thu, 8 Aug 2019 10:58:14 -0400 Received: from DGGEMS403-HUB.china.huawei.com (unknown [172.30.72.59]) by Forcepoint Email with ESMTP id 5E32F952AFA308A9F81D; Thu, 8 Aug 2019 22:58:12 +0800 (CST) Received: from linux-ioko.site (10.71.200.31) by DGGEMS403-HUB.china.huawei.com (10.3.19.203) with Microsoft SMTP Server id 14.3.439.0; Thu, 8 Aug 2019 22:58:06 +0800 From: Lijun Ou Subject: [PATCH V4 for-next 05/14] RDMA/hns: Clean up unnecessary initial assignment Date: Thu, 8 Aug 2019 22:53:45 +0800 Message-ID: <1565276034-97329-6-git-send-email-oulijun@huawei.com> In-Reply-To: <1565276034-97329-1-git-send-email-oulijun@huawei.com> References: <1565276034-97329-1-git-send-email-oulijun@huawei.com> From: Lang Cheng Remove some unnecessary initialization of variables. Signed-off-by: Lang Cheng Signed-off-by: Lijun Ou --- V3->V4: 1. Remove the unnecessary whitespace --- drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c index eedafd6..2bb2d0c 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c @@ -239,7 +239,7 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct device *dev = hr_dev->dev; struct hns_roce_v2_db sq_db; struct ib_qp_attr attr; - unsigned int sge_ind = 0; + unsigned int sge_ind; unsigned int owner_bit; unsigned long flags; unsigned int ind; @@ -4291,7 +4291,7 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp, struct hns_roce_v2_qp_context *context; struct hns_roce_v2_qp_context *qpc_mask; struct device *dev = hr_dev->dev; - int ret = -EINVAL; + int ret; context = kcalloc(2, sizeof(*context), GFP_ATOMIC); if (!context) From patchwork Thu Aug 8 14:53:46 2019 X-Patchwork-Submitter: Lijun Ou X-Patchwork-Id: 11084345 Received: from
DGGEMS403-HUB.china.huawei.com (unknown [172.30.72.59]) by Forcepoint Email with ESMTP id 6615226B34EE4366043B; Thu, 8 Aug 2019 22:58:12 +0800 (CST) Received: from linux-ioko.site (10.71.200.31) by DGGEMS403-HUB.china.huawei.com (10.3.19.203) with Microsoft SMTP Server id 14.3.439.0; Thu, 8 Aug 2019 22:58:06 +0800 From: Lijun Ou To: , CC: , , Subject: [PATCH V4 for-next 06/14] RDMA/hns: Update some comments style Date: Thu, 8 Aug 2019 22:53:46 +0800 Message-ID: <1565276034-97329-7-git-send-email-oulijun@huawei.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1565276034-97329-1-git-send-email-oulijun@huawei.com> References: <1565276034-97329-1-git-send-email-oulijun@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.71.200.31] X-CFilter-Loop: Reflected Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Lang Cheng Here removes some useless comments and adds necessary spaces to another comments. Signed-off-by: Lang Cheng --- drivers/infiniband/hw/hns/hns_roce_device.h | 65 ++++++++++++++--------------- drivers/infiniband/hw/hns/hns_roce_hem.h | 6 +-- drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 9 ++-- drivers/infiniband/hw/hns/hns_roce_mr.c | 1 - 4 files changed, 38 insertions(+), 43 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h index b39497a..8c4b120 100644 --- a/drivers/infiniband/hw/hns/hns_roce_device.h +++ b/drivers/infiniband/hw/hns/hns_roce_device.h @@ -84,7 +84,6 @@ #define HNS_ROCE_CEQ_ENTRY_SIZE 0x4 #define HNS_ROCE_AEQ_ENTRY_SIZE 0x10 -/* 4G/4K = 1M */ #define HNS_ROCE_SL_SHIFT 28 #define HNS_ROCE_TCLASS_SHIFT 20 #define HNS_ROCE_FLOW_LABEL_MASK 0xfffff @@ -322,7 +321,7 @@ struct hns_roce_hem_table { unsigned long num_hem; /* HEM entry record obj total num */ unsigned long num_obj; - /*Single obj size */ + /* Single obj size */ unsigned long obj_size; unsigned long table_chunk_size; int lowmem; @@ -343,7 +342,7 @@ struct hns_roce_mtt { struct hns_roce_buf_region { int offset; /* page offset */ - u32 count; /* page count*/ + u32 count; /* page count */ int hopnum; /* addressing hop num */ }; @@ -384,25 +383,25 @@ struct hns_roce_mr { u64 size; /* Address range of MR */ u32 key; /* Key of MR */ u32 pd; /* PD num of MR */ - u32 access;/* Access permission of MR */ + u32 access; /* Access permission of MR */ u32 npages; int enabled; /* MR's active status */ int type; /* MR's register type */ - u64 *pbl_buf;/* MR's PBL space */ + u64 *pbl_buf; /* MR's PBL space */ dma_addr_t pbl_dma_addr; /* MR's PBL space PA */ - u32 pbl_size;/* PA number in the PBL */ - u64 pbl_ba;/* page table address */ - u32 l0_chunk_last_num;/* L0 last number */ - u32 l1_chunk_last_num;/* L1 last number */ - u64 **pbl_bt_l2;/* PBL BT L2 */ - u64 **pbl_bt_l1;/* PBL BT L1 */ - u64 *pbl_bt_l0;/* PBL BT L0 */ - dma_addr_t *pbl_l2_dma_addr;/* PBL BT L2 dma addr */ - dma_addr_t *pbl_l1_dma_addr;/* PBL BT L1 dma addr */ - dma_addr_t pbl_l0_dma_addr;/* PBL BT L0 dma addr */ - u32 pbl_ba_pg_sz;/* BT chunk page size */ - u32 pbl_buf_pg_sz;/* buf chunk page size */ - u32 pbl_hop_num;/* multi-hop number */ + u32 pbl_size; /* PA number in the PBL */ + u64 pbl_ba; /* page table address */ + u32 l0_chunk_last_num; /* L0 last number */ + u32 l1_chunk_last_num; /* L1 last number */ + u64 **pbl_bt_l2; /* PBL BT L2 */ + u64 **pbl_bt_l1; /* PBL BT L1 */ + u64 *pbl_bt_l0; /* PBL BT L0 */ + dma_addr_t *pbl_l2_dma_addr; /* PBL BT L2 dma addr */ + dma_addr_t 
*pbl_l1_dma_addr; /* PBL BT L1 dma addr */ + dma_addr_t pbl_l0_dma_addr; /* PBL BT L0 dma addr */ + u32 pbl_ba_pg_sz; /* BT chunk page size */ + u32 pbl_buf_pg_sz; /* buf chunk page size */ + u32 pbl_hop_num; /* multi-hop number */ }; struct hns_roce_mr_table { @@ -425,16 +424,16 @@ struct hns_roce_wq { u32 max_post; int max_gs; int offset; - int wqe_shift;/* WQE size */ + int wqe_shift; /* WQE size */ u32 head; u32 tail; void __iomem *db_reg_l; }; struct hns_roce_sge { - int sge_cnt; /* SGE num */ + int sge_cnt; /* SGE num */ int offset; - int sge_shift;/* SGE size */ + int sge_shift; /* SGE size */ }; struct hns_roce_buf_list { @@ -750,7 +749,7 @@ struct hns_roce_eq { struct hns_roce_dev *hr_dev; void __iomem *doorbell; - int type_flag;/* Aeq:1 ceq:0 */ + int type_flag; /* Aeq:1 ceq:0 */ int eqn; u32 entries; int log_entries; @@ -796,22 +795,22 @@ struct hns_roce_caps { int local_ca_ack_delay; int num_uars; u32 phy_num_uars; - u32 max_sq_sg; /* 2 */ - u32 max_sq_inline; /* 32 */ - u32 max_rq_sg; /* 2 */ + u32 max_sq_sg; + u32 max_sq_inline; + u32 max_rq_sg; u32 max_extend_sg; - int num_qps; /* 256k */ + int num_qps; int reserved_qps; int num_qpc_timer; int num_cqc_timer; u32 max_srq_sg; int num_srqs; - u32 max_wqes; /* 16k */ + u32 max_wqes; u32 max_srqs; u32 max_srq_wrs; u32 max_srq_sges; - u32 max_sq_desc_sz; /* 64 */ - u32 max_rq_desc_sz; /* 64 */ + u32 max_sq_desc_sz; + u32 max_rq_desc_sz; u32 max_srq_desc_sz; int max_qp_init_rdma; int max_qp_dest_rdma; @@ -822,7 +821,7 @@ struct hns_roce_caps { int reserved_cqs; int reserved_srqs; u32 max_srqwqes; - int num_aeq_vectors; /* 1 */ + int num_aeq_vectors; int num_comp_vectors; int num_other_vectors; int num_mtpts; @@ -903,7 +902,7 @@ struct hns_roce_caps { u32 sl_num; u32 tsq_buf_pg_sz; u32 tpq_buf_pg_sz; - u32 chunk_sz; /* chunk size in non multihop mode*/ + u32 chunk_sz; /* chunk size in non multihop mode */ u64 flags; }; @@ -1043,8 +1042,8 @@ struct hns_roce_dev { int loop_idc; u32 sdb_offset; u32 odb_offset; - dma_addr_t tptr_dma_addr; /*only for hw v1*/ - u32 tptr_size; /*only for hw v1*/ + dma_addr_t tptr_dma_addr; /* only for hw v1 */ + u32 tptr_size; /* only for hw v1 */ const struct hns_roce_hw *hw; void *priv; struct workqueue_struct *irq_workq; diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.h b/drivers/infiniband/hw/hns/hns_roce_hem.h index f1ccb8f..8678327 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hem.h +++ b/drivers/infiniband/hw/hns/hns_roce_hem.h @@ -102,9 +102,9 @@ struct hns_roce_hem_mhop { u32 buf_chunk_size; u32 bt_chunk_size; u32 ba_l0_num; - u32 l0_idx;/* level 0 base address table index */ - u32 l1_idx;/* level 1 base address table index */ - u32 l2_idx;/* level 2 base address table index */ + u32 l0_idx; /* level 0 base address table index */ + u32 l1_idx; /* level 1 base address table index */ + u32 l2_idx; /* level 2 base address table index */ }; void hns_roce_free_hem(struct hns_roce_dev *hr_dev, struct hns_roce_hem *hem); diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c index 2bb2d0c..9b037b3 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c @@ -1886,7 +1886,7 @@ static int hns_roce_v2_init(struct hns_roce_dev *hr_dev) goto err_tpq_init_failed; } - /* Alloc memory for QPC Timer buffer space chunk*/ + /* Alloc memory for QPC Timer buffer space chunk */ for (qpc_count = 0; qpc_count < hr_dev->caps.qpc_timer_bt_num; qpc_count++) { ret = hns_roce_table_get(hr_dev, &hr_dev->qpc_timer_table, @@ 
-1897,7 +1897,7 @@ static int hns_roce_v2_init(struct hns_roce_dev *hr_dev) } } - /* Alloc memory for CQC Timer buffer space chunk*/ + /* Alloc memory for CQC Timer buffer space chunk */ for (cqc_count = 0; cqc_count < hr_dev->caps.cqc_timer_bt_num; cqc_count++) { ret = hns_roce_table_get(hr_dev, &hr_dev->cqc_timer_table, @@ -5236,14 +5236,12 @@ static void hns_roce_mhop_free_eq(struct hns_roce_dev *hr_dev, buf_chk_sz = 1 << (hr_dev->caps.eqe_buf_pg_sz + PAGE_SHIFT); bt_chk_sz = 1 << (hr_dev->caps.eqe_ba_pg_sz + PAGE_SHIFT); - /* hop_num = 0 */ if (mhop_num == HNS_ROCE_HOP_NUM_0) { dma_free_coherent(dev, (unsigned int)(eq->entries * eq->eqe_size), eq->bt_l0, eq->l0_dma); return; } - /* hop_num = 1 or hop = 2 */ dma_free_coherent(dev, bt_chk_sz, eq->bt_l0, eq->l0_dma); if (mhop_num == 1) { for (i = 0; i < eq->l0_last_num; i++) { @@ -5483,7 +5481,6 @@ static int hns_roce_mhop_alloc_eq(struct hns_roce_dev *hr_dev, buf_chk_sz); bt_num = DIV_ROUND_UP(ba_num, bt_chk_sz / BA_BYTE_LEN); - /* hop_num = 0 */ if (mhop_num == HNS_ROCE_HOP_NUM_0) { if (eq->entries > buf_chk_sz / eq->eqe_size) { dev_err(dev, "eq entries %d is larger than buf_pg_sz!", @@ -5749,7 +5746,7 @@ static int __hns_roce_request_irq(struct hns_roce_dev *hr_dev, int irq_num, } } - /* irq contains: abnormal + AEQ + CEQ*/ + /* irq contains: abnormal + AEQ + CEQ */ for (j = 0; j < irq_num; j++) if (j < other_num) snprintf((char *)hr_dev->irq_names[j], diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c index 0cfa946..8157679 100644 --- a/drivers/infiniband/hw/hns/hns_roce_mr.c +++ b/drivers/infiniband/hw/hns/hns_roce_mr.c @@ -517,7 +517,6 @@ static int hns_roce_mhop_alloc(struct hns_roce_dev *hr_dev, int npages, if (mhop_num == HNS_ROCE_HOP_NUM_0) return 0; - /* hop_num = 1 */ if (mhop_num == 1) return pbl_1hop_alloc(hr_dev, npages, mr, pbl_bt_sz); From patchwork Thu Aug 8 14:53:47 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lijun Ou X-Patchwork-Id: 11084335 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EE380912 for ; Thu, 8 Aug 2019 14:58:22 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DC5E828AA0 for ; Thu, 8 Aug 2019 14:58:22 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id D118C28B4A; Thu, 8 Aug 2019 14:58:22 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 706AD28AA0 for ; Thu, 8 Aug 2019 14:58:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1733188AbfHHO6T (ORCPT ); Thu, 8 Aug 2019 10:58:19 -0400 Received: from szxga04-in.huawei.com ([45.249.212.190]:4198 "EHLO huawei.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1733151AbfHHO6T (ORCPT ); Thu, 8 Aug 2019 10:58:19 -0400 Received: from DGGEMS403-HUB.china.huawei.com (unknown [172.30.72.59]) by Forcepoint Email with ESMTP id 51A66A8CAD687EBE29A8; Thu, 8 Aug 2019 22:58:12 +0800 (CST) Received: from linux-ioko.site (10.71.200.31) by 
DGGEMS403-HUB.china.huawei.com (10.3.19.203) with Microsoft SMTP Server id 14.3.439.0; Thu, 8 Aug 2019 22:58:06 +0800 From: Lijun Ou To: , CC: , , Subject: [PATCH V4 for-next 07/14] RDMA/hns: Handling the error return value of hem function Date: Thu, 8 Aug 2019 22:53:47 +0800 Message-ID: <1565276034-97329-8-git-send-email-oulijun@huawei.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1565276034-97329-1-git-send-email-oulijun@huawei.com> References: <1565276034-97329-1-git-send-email-oulijun@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.71.200.31] X-CFilter-Loop: Reflected Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Lang Cheng Handling the error return value of hns_roce_calc_hem_mhop. Signed-off-by: Lang Cheng --- drivers/infiniband/hw/hns/hns_roce_hem.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c index d3e72a0..0268c7a 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hem.c +++ b/drivers/infiniband/hw/hns/hns_roce_hem.c @@ -830,7 +830,8 @@ void *hns_roce_table_find(struct hns_roce_dev *hr_dev, } else { u32 seg_size = 64; /* 8 bytes per BA and 8 BA per segment */ - hns_roce_calc_hem_mhop(hr_dev, table, &mhop_obj, &mhop); + if (hns_roce_calc_hem_mhop(hr_dev, table, &mhop_obj, &mhop)) + goto out; /* mtt mhop */ i = mhop.l0_idx; j = mhop.l1_idx; @@ -879,11 +880,13 @@ int hns_roce_table_get_range(struct hns_roce_dev *hr_dev, { struct hns_roce_hem_mhop mhop; unsigned long inc = table->table_chunk_size / table->obj_size; - unsigned long i; + unsigned long i = 0; int ret; if (hns_roce_check_whether_mhop(hr_dev, table->type)) { - hns_roce_calc_hem_mhop(hr_dev, table, NULL, &mhop); + ret = hns_roce_calc_hem_mhop(hr_dev, table, NULL, &mhop); + if (ret) + goto fail; inc = mhop.bt_chunk_size / table->obj_size; } @@ -913,7 +916,8 @@ void hns_roce_table_put_range(struct hns_roce_dev *hr_dev, unsigned long i; if (hns_roce_check_whether_mhop(hr_dev, table->type)) { - hns_roce_calc_hem_mhop(hr_dev, table, NULL, &mhop); + if (hns_roce_calc_hem_mhop(hr_dev, table, NULL, &mhop)) + return; inc = mhop.bt_chunk_size / table->obj_size; } @@ -1035,7 +1039,8 @@ static void hns_roce_cleanup_mhop_hem_table(struct hns_roce_dev *hr_dev, int i; u64 obj; - hns_roce_calc_hem_mhop(hr_dev, table, NULL, &mhop); + if (hns_roce_calc_hem_mhop(hr_dev, table, NULL, &mhop)) + return; buf_chunk_size = table->type < HEM_TYPE_MTT ? 
mhop.buf_chunk_size : mhop.bt_chunk_size; From patchwork Thu Aug 8 14:53:48 2019 X-Patchwork-Submitter: Lijun Ou X-Patchwork-Id: 11084327 From: Lijun Ou Subject: [PATCH V4 for-next 08/14] RDMA/hns: Split bool statement and assign statement Date: Thu, 8 Aug 2019 22:53:48 +0800 Message-ID: <1565276034-97329-9-git-send-email-oulijun@huawei.com> In-Reply-To: <1565276034-97329-1-git-send-email-oulijun@huawei.com> References: <1565276034-97329-1-git-send-email-oulijun@huawei.com> From: Lang Cheng An assignment statement should not be buried in a bool expression or in a function parameter.
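For example, a minimal sketch of the pattern (names taken from the aeq loop in the hunk below, shown out of its full driver context):

        /* before: the assignment is buried in the while condition */
        while ((aeqe = next_aeqe_sw_v2(eq))) {
                /* handle one AEQ entry */
        }

        /* after: read the first entry up front, re-read at the end of each pass */
        aeqe = next_aeqe_sw_v2(eq);
        while (aeqe) {
                /* handle one AEQ entry */
                aeqe = next_aeqe_sw_v2(eq);
        }

The same split is applied to the ceq loop.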
Signed-off-by: Lang Cheng --- drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c index 9b037b3..4b9f839 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c @@ -4938,7 +4938,7 @@ static int hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev, struct hns_roce_eq *eq) { struct device *dev = hr_dev->dev; - struct hns_roce_aeqe *aeqe; + struct hns_roce_aeqe *aeqe = next_aeqe_sw_v2(eq); int aeqe_found = 0; int event_type; int sub_type; @@ -4946,8 +4946,7 @@ static int hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev, u32 qpn; u32 cqn; - while ((aeqe = next_aeqe_sw_v2(eq))) { - + while (aeqe) { /* Make sure we read AEQ entry after we have checked the * ownership bit */ @@ -5016,6 +5015,8 @@ static int hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev, eq->cons_index = 0; } hns_roce_v2_init_irq_work(hr_dev, eq, qpn, cqn); + + aeqe = next_aeqe_sw_v2(eq); } set_eq_cons_index_v2(eq); @@ -5068,12 +5069,11 @@ static int hns_roce_v2_ceq_int(struct hns_roce_dev *hr_dev, struct hns_roce_eq *eq) { struct device *dev = hr_dev->dev; - struct hns_roce_ceqe *ceqe; + struct hns_roce_ceqe *ceqe = next_ceqe_sw_v2(eq); int ceqe_found = 0; u32 cqn; - while ((ceqe = next_ceqe_sw_v2(eq))) { - + while (ceqe) { /* Make sure we read CEQ entry after we have checked the * ownership bit */ @@ -5092,6 +5092,8 @@ static int hns_roce_v2_ceq_int(struct hns_roce_dev *hr_dev, dev_warn(dev, "cons_index overflow, set back to 0.\n"); eq->cons_index = 0; } + + ceqe = next_ceqe_sw_v2(eq); } set_eq_cons_index_v2(eq); From patchwork Thu Aug 8 14:53:49 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lijun Ou X-Patchwork-Id: 11084323 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1225414E5 for ; Thu, 8 Aug 2019 14:58:20 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 00EBC28A89 for ; Thu, 8 Aug 2019 14:58:20 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id E998428B20; Thu, 8 Aug 2019 14:58:19 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8EAE428A89 for ; Thu, 8 Aug 2019 14:58:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1733176AbfHHO6T (ORCPT ); Thu, 8 Aug 2019 10:58:19 -0400 Received: from szxga06-in.huawei.com ([45.249.212.32]:50768 "EHLO huawei.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1733144AbfHHO6S (ORCPT ); Thu, 8 Aug 2019 10:58:18 -0400 Received: from DGGEMS403-HUB.china.huawei.com (unknown [172.30.72.58]) by Forcepoint Email with ESMTP id 5E1789522CFED346E9F7; Thu, 8 Aug 2019 22:58:17 +0800 (CST) Received: from linux-ioko.site (10.71.200.31) by DGGEMS403-HUB.china.huawei.com (10.3.19.203) with Microsoft SMTP Server id 14.3.439.0; Thu, 8 Aug 2019 22:58:07 +0800 From: Lijun Ou To: , CC: , , Subject: [PATCH V4 for-next 09/14] RDMA/hns: 
Refactor irq request code Date: Thu, 8 Aug 2019 22:53:49 +0800 Message-ID: <1565276034-97329-10-git-send-email-oulijun@huawei.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1565276034-97329-1-git-send-email-oulijun@huawei.com> References: <1565276034-97329-1-git-send-email-oulijun@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.71.200.31] X-CFilter-Loop: Reflected Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Lang Cheng Remove unnecessary if...else..., to make the code look simpler. Signed-off-by: Lang Cheng --- drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 25 +++++++++++++------------ 1 file changed, 13 insertions(+), 12 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c index 4b9f839..c2ad774 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c @@ -5749,18 +5749,19 @@ static int __hns_roce_request_irq(struct hns_roce_dev *hr_dev, int irq_num, } /* irq contains: abnormal + AEQ + CEQ */ - for (j = 0; j < irq_num; j++) - if (j < other_num) - snprintf((char *)hr_dev->irq_names[j], - HNS_ROCE_INT_NAME_LEN, "hns-abn-%d", j); - else if (j < (other_num + aeq_num)) - snprintf((char *)hr_dev->irq_names[j], - HNS_ROCE_INT_NAME_LEN, "hns-aeq-%d", - j - other_num); - else - snprintf((char *)hr_dev->irq_names[j], - HNS_ROCE_INT_NAME_LEN, "hns-ceq-%d", - j - other_num - aeq_num); + for (j = 0; j < other_num; j++) + snprintf((char *)hr_dev->irq_names[j], + HNS_ROCE_INT_NAME_LEN, "hns-abn-%d", j); + + for (j = other_num; j < (other_num + aeq_num); j++) + snprintf((char *)hr_dev->irq_names[j], + HNS_ROCE_INT_NAME_LEN, "hns-aeq-%d", + j - other_num); + + for (j = (other_num + aeq_num); j < irq_num; j++) + snprintf((char *)hr_dev->irq_names[j], + HNS_ROCE_INT_NAME_LEN, "hns-ceq-%d", + j - other_num - aeq_num); for (j = 0; j < irq_num; j++) { if (j < other_num) From patchwork Thu Aug 8 14:53:50 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lijun Ou X-Patchwork-Id: 11084333 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CBDA414E5 for ; Thu, 8 Aug 2019 14:58:22 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B9FD628B18 for ; Thu, 8 Aug 2019 14:58:22 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id AE59228B4B; Thu, 8 Aug 2019 14:58:22 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8DFEB28B18 for ; Thu, 8 Aug 2019 14:58:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728120AbfHHO6U (ORCPT ); Thu, 8 Aug 2019 10:58:20 -0400 Received: from szxga06-in.huawei.com ([45.249.212.32]:50820 "EHLO huawei.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1733174AbfHHO6T (ORCPT ); Thu, 8 Aug 2019 10:58:19 -0400 Received: from DGGEMS403-HUB.china.huawei.com (unknown [172.30.72.58]) by Forcepoint Email with ESMTP id 
668594B096052DE84D52; Thu, 8 Aug 2019 22:58:17 +0800 (CST) Received: from linux-ioko.site (10.71.200.31) by DGGEMS403-HUB.china.huawei.com (10.3.19.203) with Microsoft SMTP Server id 14.3.439.0; Thu, 8 Aug 2019 22:58:07 +0800 From: Lijun Ou To: , CC: , , Subject: [PATCH V4 for-next 10/14] RDMA/hns: Remove unnecessary kzalloc Date: Thu, 8 Aug 2019 22:53:50 +0800 Message-ID: <1565276034-97329-11-git-send-email-oulijun@huawei.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1565276034-97329-1-git-send-email-oulijun@huawei.com> References: <1565276034-97329-1-git-send-email-oulijun@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.71.200.31] X-CFilter-Loop: Reflected Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Lang Cheng For hns_roce_v2_query_qp and hns_roce_v2_modify_qp, we can use stack memory to create qp context data. Make the code simpler. Signed-off-by: Lang Cheng --- V2->V3: 1. Delete unncessary memset operation V1->V2: 1. Fix the comment gived by Leon Romanovsky --- drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 64 +++++++++++++----------------- 1 file changed, 27 insertions(+), 37 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c index c2ad774..de02612 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c @@ -4288,22 +4288,19 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp, { struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device); struct hns_roce_qp *hr_qp = to_hr_qp(ibqp); - struct hns_roce_v2_qp_context *context; - struct hns_roce_v2_qp_context *qpc_mask; + struct hns_roce_v2_qp_context ctx[2]; + struct hns_roce_v2_qp_context *context = ctx; + struct hns_roce_v2_qp_context *qpc_mask = ctx + 1; struct device *dev = hr_dev->dev; int ret; - context = kcalloc(2, sizeof(*context), GFP_ATOMIC); - if (!context) - return -ENOMEM; - - qpc_mask = context + 1; /* * In v2 engine, software pass context and context mask to hardware * when modifying qp. If software need modify some fields in context, * we should set all bits of the relevant fields in context mask to * 0 at the same time, else set them to 0x1. 
*/ + memset(context, 0, sizeof(*context)); memset(qpc_mask, 0xff, sizeof(*qpc_mask)); ret = hns_roce_v2_set_abs_fields(ibqp, attr, attr_mask, cur_state, new_state, context, qpc_mask); @@ -4349,8 +4346,7 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp, V2_QPC_BYTE_60_QP_ST_S, 0); /* SW pass context to HW */ - ret = hns_roce_v2_qp_modify(hr_dev, cur_state, new_state, - context, hr_qp); + ret = hns_roce_v2_qp_modify(hr_dev, cur_state, new_state, ctx, hr_qp); if (ret) { dev_err(dev, "hns_roce_qp_modify failed(%d)\n", ret); goto out; @@ -4378,7 +4374,6 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp, } out: - kfree(context); return ret; } @@ -4429,16 +4424,12 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr, { struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device); struct hns_roce_qp *hr_qp = to_hr_qp(ibqp); - struct hns_roce_v2_qp_context *context; + struct hns_roce_v2_qp_context context = {}; struct device *dev = hr_dev->dev; int tmp_qp_state; int state; int ret; - context = kzalloc(sizeof(*context), GFP_KERNEL); - if (!context) - return -ENOMEM; - memset(qp_attr, 0, sizeof(*qp_attr)); memset(qp_init_attr, 0, sizeof(*qp_init_attr)); @@ -4450,14 +4441,14 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr, goto done; } - ret = hns_roce_v2_query_qpc(hr_dev, hr_qp, context); + ret = hns_roce_v2_query_qpc(hr_dev, hr_qp, &context); if (ret) { dev_err(dev, "query qpc error\n"); ret = -EINVAL; goto out; } - state = roce_get_field(context->byte_60_qpst_tempid, + state = roce_get_field(context.byte_60_qpst_tempid, V2_QPC_BYTE_60_QP_ST_M, V2_QPC_BYTE_60_QP_ST_S); tmp_qp_state = to_ib_qp_st((enum hns_roce_v2_qp_state)state); if (tmp_qp_state == -1) { @@ -4467,7 +4458,7 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr, } hr_qp->state = (u8)tmp_qp_state; qp_attr->qp_state = (enum ib_qp_state)hr_qp->state; - qp_attr->path_mtu = (enum ib_mtu)roce_get_field(context->byte_24_mtu_tc, + qp_attr->path_mtu = (enum ib_mtu)roce_get_field(context.byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M, V2_QPC_BYTE_24_MTU_S); qp_attr->path_mig_state = IB_MIG_ARMED; @@ -4475,20 +4466,20 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr, if (hr_qp->ibqp.qp_type == IB_QPT_UD) qp_attr->qkey = V2_QKEY_VAL; - qp_attr->rq_psn = roce_get_field(context->byte_108_rx_reqepsn, + qp_attr->rq_psn = roce_get_field(context.byte_108_rx_reqepsn, V2_QPC_BYTE_108_RX_REQ_EPSN_M, V2_QPC_BYTE_108_RX_REQ_EPSN_S); - qp_attr->sq_psn = (u32)roce_get_field(context->byte_172_sq_psn, + qp_attr->sq_psn = (u32)roce_get_field(context.byte_172_sq_psn, V2_QPC_BYTE_172_SQ_CUR_PSN_M, V2_QPC_BYTE_172_SQ_CUR_PSN_S); - qp_attr->dest_qp_num = (u8)roce_get_field(context->byte_56_dqpn_err, + qp_attr->dest_qp_num = (u8)roce_get_field(context.byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M, V2_QPC_BYTE_56_DQPN_S); - qp_attr->qp_access_flags = ((roce_get_bit(context->byte_76_srqn_op_en, + qp_attr->qp_access_flags = ((roce_get_bit(context.byte_76_srqn_op_en, V2_QPC_BYTE_76_RRE_S)) << V2_QP_RWE_S) | - ((roce_get_bit(context->byte_76_srqn_op_en, + ((roce_get_bit(context.byte_76_srqn_op_en, V2_QPC_BYTE_76_RWE_S)) << V2_QP_RRE_S) | - ((roce_get_bit(context->byte_76_srqn_op_en, + ((roce_get_bit(context.byte_76_srqn_op_en, V2_QPC_BYTE_76_ATE_S)) << V2_QP_ATE_S); if (hr_qp->ibqp.qp_type == IB_QPT_RC || @@ -4497,43 +4488,43 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr, rdma_ah_retrieve_grh(&qp_attr->ah_attr); 
rdma_ah_set_sl(&qp_attr->ah_attr, - roce_get_field(context->byte_28_at_fl, + roce_get_field(context.byte_28_at_fl, V2_QPC_BYTE_28_SL_M, V2_QPC_BYTE_28_SL_S)); - grh->flow_label = roce_get_field(context->byte_28_at_fl, + grh->flow_label = roce_get_field(context.byte_28_at_fl, V2_QPC_BYTE_28_FL_M, V2_QPC_BYTE_28_FL_S); - grh->sgid_index = roce_get_field(context->byte_20_smac_sgid_idx, + grh->sgid_index = roce_get_field(context.byte_20_smac_sgid_idx, V2_QPC_BYTE_20_SGID_IDX_M, V2_QPC_BYTE_20_SGID_IDX_S); - grh->hop_limit = roce_get_field(context->byte_24_mtu_tc, + grh->hop_limit = roce_get_field(context.byte_24_mtu_tc, V2_QPC_BYTE_24_HOP_LIMIT_M, V2_QPC_BYTE_24_HOP_LIMIT_S); - grh->traffic_class = roce_get_field(context->byte_24_mtu_tc, + grh->traffic_class = roce_get_field(context.byte_24_mtu_tc, V2_QPC_BYTE_24_TC_M, V2_QPC_BYTE_24_TC_S); - memcpy(grh->dgid.raw, context->dgid, sizeof(grh->dgid.raw)); + memcpy(grh->dgid.raw, context.dgid, sizeof(grh->dgid.raw)); } qp_attr->port_num = hr_qp->port + 1; qp_attr->sq_draining = 0; - qp_attr->max_rd_atomic = 1 << roce_get_field(context->byte_208_irrl, + qp_attr->max_rd_atomic = 1 << roce_get_field(context.byte_208_irrl, V2_QPC_BYTE_208_SR_MAX_M, V2_QPC_BYTE_208_SR_MAX_S); - qp_attr->max_dest_rd_atomic = 1 << roce_get_field(context->byte_140_raq, + qp_attr->max_dest_rd_atomic = 1 << roce_get_field(context.byte_140_raq, V2_QPC_BYTE_140_RR_MAX_M, V2_QPC_BYTE_140_RR_MAX_S); - qp_attr->min_rnr_timer = (u8)roce_get_field(context->byte_80_rnr_rx_cqn, + qp_attr->min_rnr_timer = (u8)roce_get_field(context.byte_80_rnr_rx_cqn, V2_QPC_BYTE_80_MIN_RNR_TIME_M, V2_QPC_BYTE_80_MIN_RNR_TIME_S); - qp_attr->timeout = (u8)roce_get_field(context->byte_28_at_fl, + qp_attr->timeout = (u8)roce_get_field(context.byte_28_at_fl, V2_QPC_BYTE_28_AT_M, V2_QPC_BYTE_28_AT_S); - qp_attr->retry_cnt = roce_get_field(context->byte_212_lsn, + qp_attr->retry_cnt = roce_get_field(context.byte_212_lsn, V2_QPC_BYTE_212_RETRY_CNT_M, V2_QPC_BYTE_212_RETRY_CNT_S); - qp_attr->rnr_retry = context->rq_rnr_timer; + qp_attr->rnr_retry = context.rq_rnr_timer; done: qp_attr->cur_qp_state = qp_attr->qp_state; @@ -4552,7 +4543,6 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr, out: mutex_unlock(&hr_qp->mutex); - kfree(context); return ret; } From patchwork Thu Aug 8 14:53:51 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lijun Ou X-Patchwork-Id: 11084337 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 22DB814DB for ; Thu, 8 Aug 2019 14:58:23 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0FEA028AA0 for ; Thu, 8 Aug 2019 14:58:23 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 0403128B18; Thu, 8 Aug 2019 14:58:23 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8FF0928B20 for ; Thu, 8 Aug 2019 14:58:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1733174AbfHHO6V (ORCPT ); Thu, 8 Aug 2019 10:58:21 -0400 
Received: from szxga06-in.huawei.com ([45.249.212.32]:50864 "EHLO huawei.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1733180AbfHHO6U (ORCPT ); Thu, 8 Aug 2019 10:58:20 -0400 Received: from DGGEMS403-HUB.china.huawei.com (unknown [172.30.72.58]) by Forcepoint Email with ESMTP id 74D8E1FFDC7BBA440213; Thu, 8 Aug 2019 22:58:17 +0800 (CST) Received: from linux-ioko.site (10.71.200.31) by DGGEMS403-HUB.china.huawei.com (10.3.19.203) with Microsoft SMTP Server id 14.3.439.0; Thu, 8 Aug 2019 22:58:07 +0800 From: Lijun Ou To: , CC: , , Subject: [PATCH V4 for-next 11/14] RDMA/hns: Refactor hns_roce_v2_set_hem for hip08 Date: Thu, 8 Aug 2019 22:53:51 +0800 Message-ID: <1565276034-97329-12-git-send-email-oulijun@huawei.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1565276034-97329-1-git-send-email-oulijun@huawei.com> References: <1565276034-97329-1-git-send-email-oulijun@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.71.200.31] X-CFilter-Loop: Reflected Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Yangyang Li In order to reduce the complexity of hns_roce_v2_set_hem, extract the implementation of op as a function. Signed-off-by: Yangyang Li --- drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 75 +++++++++++++++++------------- 1 file changed, 42 insertions(+), 33 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c index de02612..c7f1585 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c @@ -2903,11 +2903,49 @@ static int hns_roce_v2_poll_cq(struct ib_cq *ibcq, int num_entries, return npolled; } +static int get_op_for_set_hem(struct hns_roce_dev *hr_dev, u32 type, + int step_idx) +{ + int op; + + if (type == HEM_TYPE_SCCC && step_idx) + return -EINVAL; + + switch (type) { + case HEM_TYPE_QPC: + op = HNS_ROCE_CMD_WRITE_QPC_BT0; + break; + case HEM_TYPE_MTPT: + op = HNS_ROCE_CMD_WRITE_MPT_BT0; + break; + case HEM_TYPE_CQC: + op = HNS_ROCE_CMD_WRITE_CQC_BT0; + break; + case HEM_TYPE_SRQC: + op = HNS_ROCE_CMD_WRITE_SRQC_BT0; + break; + case HEM_TYPE_SCCC: + op = HNS_ROCE_CMD_WRITE_SCCC_BT0; + break; + case HEM_TYPE_QPC_TIMER: + op = HNS_ROCE_CMD_WRITE_QPC_TIMER_BT0; + break; + case HEM_TYPE_CQC_TIMER: + op = HNS_ROCE_CMD_WRITE_CQC_TIMER_BT0; + break; + default: + dev_warn(hr_dev->dev, + "Table %d not to be written by mailbox!\n", type); + return -EINVAL; + } + + return op + step_idx; +} + static int hns_roce_v2_set_hem(struct hns_roce_dev *hr_dev, struct hns_roce_hem_table *table, int obj, int step_idx) { - struct device *dev = hr_dev->dev; struct hns_roce_cmd_mailbox *mailbox; struct hns_roce_hem_iter iter; struct hns_roce_hem_mhop mhop; @@ -2920,7 +2958,7 @@ static int hns_roce_v2_set_hem(struct hns_roce_dev *hr_dev, u64 bt_ba = 0; u32 chunk_ba_num; u32 hop_num; - u16 op = 0xff; + int op; if (!hns_roce_check_whether_mhop(hr_dev, table->type)) return 0; @@ -2942,38 +2980,9 @@ static int hns_roce_v2_set_hem(struct hns_roce_dev *hr_dev, hem_idx = i; } - switch (table->type) { - case HEM_TYPE_QPC: - op = HNS_ROCE_CMD_WRITE_QPC_BT0; - break; - case HEM_TYPE_MTPT: - op = HNS_ROCE_CMD_WRITE_MPT_BT0; - break; - case HEM_TYPE_CQC: - op = HNS_ROCE_CMD_WRITE_CQC_BT0; - break; - case HEM_TYPE_SRQC: - op = HNS_ROCE_CMD_WRITE_SRQC_BT0; - break; - case HEM_TYPE_SCCC: - op = HNS_ROCE_CMD_WRITE_SCCC_BT0; - break; - case HEM_TYPE_QPC_TIMER: - op = 
HNS_ROCE_CMD_WRITE_QPC_TIMER_BT0; - break; - case HEM_TYPE_CQC_TIMER: - op = HNS_ROCE_CMD_WRITE_CQC_TIMER_BT0; - break; - default: - dev_warn(dev, "Table %d not to be written by mailbox!\n", - table->type); + op = get_op_for_set_hem(hr_dev, table->type, step_idx); + if (op == -EINVAL) return 0; - } - - if (table->type == HEM_TYPE_SCCC && step_idx) - return 0; - - op += step_idx; mailbox = hns_roce_alloc_cmd_mailbox(hr_dev); if (IS_ERR(mailbox)) From patchwork Thu Aug 8 14:53:52 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lijun Ou X-Patchwork-Id: 11084331 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5155714DB for ; Thu, 8 Aug 2019 14:58:22 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3D1A628AA0 for ; Thu, 8 Aug 2019 14:58:22 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 317DD28B4A; Thu, 8 Aug 2019 14:58:22 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 40C2F28AA0 for ; Thu, 8 Aug 2019 14:58:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1733153AbfHHO6U (ORCPT ); Thu, 8 Aug 2019 10:58:20 -0400 Received: from szxga06-in.huawei.com ([45.249.212.32]:50788 "EHLO huawei.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1728120AbfHHO6T (ORCPT ); Thu, 8 Aug 2019 10:58:19 -0400 Received: from DGGEMS403-HUB.china.huawei.com (unknown [172.30.72.58]) by Forcepoint Email with ESMTP id 62AC587B3BF9282EF296; Thu, 8 Aug 2019 22:58:17 +0800 (CST) Received: from linux-ioko.site (10.71.200.31) by DGGEMS403-HUB.china.huawei.com (10.3.19.203) with Microsoft SMTP Server id 14.3.439.0; Thu, 8 Aug 2019 22:58:07 +0800 From: Lijun Ou To: , CC: , , Subject: [PATCH V4 for-next 12/14] RDMA/hns: Remove redundant print in hns_roce_v2_ceq_int() Date: Thu, 8 Aug 2019 22:53:52 +0800 Message-ID: <1565276034-97329-13-git-send-email-oulijun@huawei.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1565276034-97329-1-git-send-email-oulijun@huawei.com> References: <1565276034-97329-1-git-send-email-oulijun@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.71.200.31] X-CFilter-Loop: Reflected Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Weihang Li There is no need to tell users when eq->cons_index overflows; we just set it back to zero.
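The wrap-around behaviour can be illustrated with a minimal standalone C sketch (this is not the driver code; struct eq_state and eq_advance_ci are hypothetical names used only here). The consumer index runs over [0, 2 * entries - 1] so that a full queue can be told apart from an empty one, and wrapping back to zero is normal operation rather than an error:

#include <stdio.h>

/* Hypothetical stand-in for the driver's event queue state. */
struct eq_state {
	unsigned int cons_index;  /* consumer index, carries one wrap bit */
	unsigned int entries;     /* number of entries in the queue */
};

/*
 * Advance the consumer index after one event has been handled.  The index
 * runs over [0, 2 * entries - 1] so a full queue can be told apart from an
 * empty one; wrapping back to zero is normal operation, not an error, so
 * nothing is printed when it happens.
 */
static void eq_advance_ci(struct eq_state *eq)
{
	++eq->cons_index;
	if (eq->cons_index > 2 * eq->entries - 1)
		eq->cons_index = 0;
}

int main(void)
{
	struct eq_state eq = { .cons_index = 0, .entries = 4 };
	int i;

	for (i = 0; i < 10; i++) {
		eq_advance_ci(&eq);
		printf("cons_index = %u\n", eq.cons_index);
	}
	return 0;
}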
Signed-off-by: Weihang Li --- drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c index c7f1585..b949329 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c @@ -5009,10 +5009,9 @@ static int hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev, ++eq->cons_index; aeqe_found = 1; - if (eq->cons_index > (2 * eq->entries - 1)) { - dev_warn(dev, "cons_index overflow, set back to 0.\n"); + if (eq->cons_index > (2 * eq->entries - 1)) eq->cons_index = 0; - } + hns_roce_v2_init_irq_work(hr_dev, eq, qpn, cqn); aeqe = next_aeqe_sw_v2(eq); From patchwork Thu Aug 8 14:53:53 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lijun Ou X-Patchwork-Id: 11084343 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8EC7714DB for ; Thu, 8 Aug 2019 14:58:24 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 7DE2A28A89 for ; Thu, 8 Aug 2019 14:58:24 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 7262C28B20; Thu, 8 Aug 2019 14:58:24 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5149628A89 for ; Thu, 8 Aug 2019 14:58:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732760AbfHHO6W (ORCPT ); Thu, 8 Aug 2019 10:58:22 -0400 Received: from szxga06-in.huawei.com ([45.249.212.32]:50866 "EHLO huawei.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1732667AbfHHO6V (ORCPT ); Thu, 8 Aug 2019 10:58:21 -0400 Received: from DGGEMS403-HUB.china.huawei.com (unknown [172.30.72.58]) by Forcepoint Email with ESMTP id 78EEF39878E15D84ACF2; Thu, 8 Aug 2019 22:58:17 +0800 (CST) Received: from linux-ioko.site (10.71.200.31) by DGGEMS403-HUB.china.huawei.com (10.3.19.203) with Microsoft SMTP Server id 14.3.439.0; Thu, 8 Aug 2019 22:58:08 +0800 From: Lijun Ou To: , CC: , , Subject: [PATCH V4 for-next 13/14] RDMA/hns: Disable alw_lcl_lpbk of SSU Date: Thu, 8 Aug 2019 22:53:53 +0800 Message-ID: <1565276034-97329-14-git-send-email-oulijun@huawei.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1565276034-97329-1-git-send-email-oulijun@huawei.com> References: <1565276034-97329-1-git-send-email-oulijun@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.71.200.31] X-CFilter-Loop: Reflected Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Weihang Li If we enable alw_lcl_lpbk in promiscuous mode, packets whose source and destination MAC addresses are equal are handled by both the inner loopback and the outer loopback. This halves RoCE performance in promiscuous mode.
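As a rough sketch of the idea, the following standalone C program assembles a switch configuration word with the local-loopback bit cleared while the other loopback-related bits stay set. The macro names and bit positions here are invented for illustration and do not correspond to the real VF_SWITCH_DATA_CFG_* layout:

#include <stdint.h>
#include <stdio.h>

/* Invented bit positions for a VF switch config word (illustration only). */
#define CFG_ALW_LPBK_BIT	1	/* allow loopback */
#define CFG_ALW_LCL_LPBK_BIT	2	/* allow local (inner) loopback */
#define CFG_ALW_DST_OVRD_BIT	3	/* allow destination override */

static void set_cfg_bit(uint32_t *word, unsigned int bit, int val)
{
	if (val)
		*word |= (uint32_t)1 << bit;
	else
		*word &= ~((uint32_t)1 << bit);
}

int main(void)
{
	uint32_t cfg = 0;

	/*
	 * Keep generic loopback and destination override enabled, but clear
	 * the local-loopback bit so that a packet whose source MAC equals
	 * its destination MAC is handled only once in promiscuous mode.
	 */
	set_cfg_bit(&cfg, CFG_ALW_LPBK_BIT, 1);
	set_cfg_bit(&cfg, CFG_ALW_LCL_LPBK_BIT, 0);
	set_cfg_bit(&cfg, CFG_ALW_DST_OVRD_BIT, 1);

	printf("switch cfg = 0x%08x\n", (unsigned int)cfg);
	return 0;
}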
Signed-off-by: Weihang Li --- drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c index b949329..567be0a 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c @@ -1308,7 +1308,7 @@ static int hns_roce_set_vf_switch_param(struct hns_roce_dev *hr_dev, cpu_to_le16(HNS_ROCE_CMD_FLAG_NO_INTR | HNS_ROCE_CMD_FLAG_IN); desc.flag &= cpu_to_le16(~HNS_ROCE_CMD_FLAG_WR); roce_set_bit(swt->cfg, VF_SWITCH_DATA_CFG_ALW_LPBK_S, 1); - roce_set_bit(swt->cfg, VF_SWITCH_DATA_CFG_ALW_LCL_LPBK_S, 1); + roce_set_bit(swt->cfg, VF_SWITCH_DATA_CFG_ALW_LCL_LPBK_S, 0); roce_set_bit(swt->cfg, VF_SWITCH_DATA_CFG_ALW_DST_OVRD_S, 1); return hns_roce_cmq_send(hr_dev, &desc, 1); From patchwork Thu Aug 8 14:53:54 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lijun Ou X-Patchwork-Id: 11084339 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6EDDC1850 for ; Thu, 8 Aug 2019 14:58:23 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5F1E728B18 for ; Thu, 8 Aug 2019 14:58:23 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 529C228B20; Thu, 8 Aug 2019 14:58:23 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0D18228A89 for ; Thu, 8 Aug 2019 14:58:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1733151AbfHHO6T (ORCPT ); Thu, 8 Aug 2019 10:58:19 -0400 Received: from szxga06-in.huawei.com ([45.249.212.32]:50844 "EHLO huawei.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1733153AbfHHO6T (ORCPT ); Thu, 8 Aug 2019 10:58:19 -0400 Received: from DGGEMS403-HUB.china.huawei.com (unknown [172.30.72.58]) by Forcepoint Email with ESMTP id 6BC31CDFCD9BCC2D1291; Thu, 8 Aug 2019 22:58:17 +0800 (CST) Received: from linux-ioko.site (10.71.200.31) by DGGEMS403-HUB.china.huawei.com (10.3.19.203) with Microsoft SMTP Server id 14.3.439.0; Thu, 8 Aug 2019 22:58:08 +0800 From: Lijun Ou To: , CC: , , Subject: [PATCH V4 for-next 14/14] RDMA/hns: Use the new APIs for printing log Date: Thu, 8 Aug 2019 22:53:54 +0800 Message-ID: <1565276034-97329-15-git-send-email-oulijun@huawei.com> X-Mailer: git-send-email 1.9.1 In-Reply-To: <1565276034-97329-1-git-send-email-oulijun@huawei.com> References: <1565276034-97329-1-git-send-email-oulijun@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.71.200.31] X-CFilter-Loop: Reflected Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Replace the dev print interfaces with the new ibdev print APIs in some functions.
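The benefit of the ibdev_* helpers is that every message is prefixed with the IB device name, which a plain dev_* print does not provide. A small user-space mock illustrates the intended output format; struct fake_ibdev and fake_ibdev_err are made-up names, not the kernel API:

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical stand-in for struct ib_device; only the name matters here. */
struct fake_ibdev {
	char name[16];
};

/*
 * Mock of an ibdev_err()-style helper: the point of the real API is that
 * every message carries the IB device name, which a plain dev_err() on the
 * parent device does not include.
 */
static void fake_ibdev_err(const struct fake_ibdev *ibdev, const char *fmt, ...)
{
	va_list ap;

	fprintf(stderr, "%s: ", ibdev->name);
	va_start(ap, fmt);
	vfprintf(stderr, fmt, ap);
	va_end(ap);
}

int main(void)
{
	struct fake_ibdev dev = { .name = "hns_0" };

	fake_ibdev_err(&dev, "Create RC QP 0x%06lx failed(%d)\n", 0x8UL, -22);
	return 0;
}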
Signed-off-by: Lijun Ou --- drivers/infiniband/hw/hns/hns_roce_hw_v2.c | 8 +++--- drivers/infiniband/hw/hns/hns_roce_qp.c | 45 +++++++++++++++++------------- 2 files changed, 29 insertions(+), 24 deletions(-) diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c index 567be0a..87a1574 100644 --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c @@ -4560,7 +4560,7 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev, struct ib_udata *udata) { struct hns_roce_cq *send_cq, *recv_cq; - struct device *dev = hr_dev->dev; + struct ib_device *ibdev = &hr_dev->ib_dev; int ret; if (hr_qp->ibqp.qp_type == IB_QPT_RC && hr_qp->state != IB_QPS_RESET) { @@ -4568,7 +4568,7 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev, ret = hns_roce_v2_modify_qp(&hr_qp->ibqp, NULL, 0, hr_qp->state, IB_QPS_RESET); if (ret) { - dev_err(dev, "modify QP to Reset failed.\n"); + ibdev_err(ibdev, "modify QP to Reset failed.\n"); return ret; } } @@ -4637,8 +4637,8 @@ static int hns_roce_v2_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata) ret = hns_roce_v2_destroy_qp_common(hr_dev, hr_qp, udata); if (ret) { - dev_err(&hr_dev->dev, "Destroy qp 0x%06lx failed(%d)\n", - hr_qp->qpn, ret); + ibdev_err(&hr_dev->ib_dev, "Destroy qp 0x%06lx failed(%d)\n", + hr_qp->qpn, ret); return ret; } diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c index f803209..b729f8e 100644 --- a/drivers/infiniband/hw/hns/hns_roce_qp.c +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c @@ -335,13 +335,13 @@ static int check_sq_size_with_integrity(struct hns_roce_dev *hr_dev, if ((u32)(1 << ucmd->log_sq_bb_count) > hr_dev->caps.max_wqes || ucmd->log_sq_stride > max_sq_stride || ucmd->log_sq_stride < HNS_ROCE_IB_MIN_SQ_STRIDE) { - dev_err(hr_dev->dev, "check SQ size error!\n"); + ibdev_err(&hr_dev->ib_dev, "check SQ size error!\n"); return -EINVAL; } if (cap->max_send_sge > hr_dev->caps.max_sq_sg) { - dev_err(hr_dev->dev, "SQ sge error! max_send_sge=%d\n", - cap->max_send_sge); + ibdev_err(&hr_dev->ib_dev, "SQ sge error! 
max_send_sge=%d\n", + cap->max_send_sge); return -EINVAL; } @@ -988,7 +988,7 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *pd, struct ib_udata *udata) { struct hns_roce_dev *hr_dev = to_hr_dev(pd->device); - struct device *dev = hr_dev->dev; + struct ib_device *ibdev = &hr_dev->ib_dev; struct hns_roce_sqp *hr_sqp; struct hns_roce_qp *hr_qp; int ret; @@ -1002,8 +1002,8 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *pd, ret = hns_roce_create_qp_common(hr_dev, pd, init_attr, udata, 0, hr_qp); if (ret) { - dev_err(dev, "Create RC QP 0x%06lx failed(%d)\n", - hr_qp->qpn, ret); + ibdev_err(ibdev, "Create RC QP 0x%06lx failed(%d)\n", + hr_qp->qpn, ret); kfree(hr_qp); return ERR_PTR(ret); } @@ -1015,7 +1015,7 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *pd, case IB_QPT_GSI: { /* Userspace is not allowed to create special QPs: */ if (udata) { - dev_err(dev, "not support usr space GSI\n"); + ibdev_err(ibdev, "not support usr space GSI\n"); return ERR_PTR(-EINVAL); } @@ -1037,7 +1037,7 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *pd, ret = hns_roce_create_qp_common(hr_dev, pd, init_attr, udata, hr_qp->ibqp.qp_num, hr_qp); if (ret) { - dev_err(dev, "Create GSI QP failed!\n"); + ibdev_err(ibdev, "Create GSI QP failed!\n"); kfree(hr_sqp); return ERR_PTR(ret); } @@ -1045,7 +1045,8 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *pd, break; } default:{ - dev_err(dev, "not support QP type %d\n", init_attr->qp_type); + ibdev_err(ibdev, "not support QP type %d\n", + init_attr->qp_type); return ERR_PTR(-EINVAL); } } @@ -1075,7 +1076,6 @@ static int check_mtu_validate(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp, struct ib_qp_attr *attr, int attr_mask) { - struct device *dev = hr_dev->dev; enum ib_mtu active_mtu; int p; @@ -1085,7 +1085,8 @@ static int check_mtu_validate(struct hns_roce_dev *hr_dev, if ((hr_dev->caps.max_mtu >= IB_MTU_2048 && attr->path_mtu > hr_dev->caps.max_mtu) || attr->path_mtu < IB_MTU_256 || attr->path_mtu > active_mtu) { - dev_err(dev, "attr path_mtu(%d)invalid while modify qp", + ibdev_err(&hr_dev->ib_dev, + "attr path_mtu(%d)invalid while modify qp", attr->path_mtu); return -EINVAL; } @@ -1098,12 +1099,12 @@ static int hns_roce_check_qp_attr(struct ib_qp *ibqp, struct ib_qp_attr *attr, { struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device); struct hns_roce_qp *hr_qp = to_hr_qp(ibqp); - struct device *dev = hr_dev->dev; int p; if ((attr_mask & IB_QP_PORT) && (attr->port_num == 0 || attr->port_num > hr_dev->caps.num_ports)) { - dev_err(dev, "attr port_num invalid.attr->port_num=%d\n", + ibdev_err(&hr_dev->ib_dev, + "attr port_num invalid.attr->port_num=%d\n", attr->port_num); return -EINVAL; } @@ -1111,7 +1112,8 @@ static int hns_roce_check_qp_attr(struct ib_qp *ibqp, struct ib_qp_attr *attr, if (attr_mask & IB_QP_PKEY_INDEX) { p = attr_mask & IB_QP_PORT ? 
(attr->port_num - 1) : hr_qp->port; if (attr->pkey_index >= hr_dev->caps.pkey_table_len[p]) { - dev_err(dev, "attr pkey_index invalid.attr->pkey_index=%d\n", + ibdev_err(&hr_dev->ib_dev, + "attr pkey_index invalid.attr->pkey_index=%d\n", attr->pkey_index); return -EINVAL; } @@ -1119,14 +1121,16 @@ static int hns_roce_check_qp_attr(struct ib_qp *ibqp, struct ib_qp_attr *attr, if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC && attr->max_rd_atomic > hr_dev->caps.max_qp_init_rdma) { - dev_err(dev, "attr max_rd_atomic invalid.attr->max_rd_atomic=%d\n", + ibdev_err(&hr_dev->ib_dev, + "attr max_rd_atomic invalid.attr->max_rd_atomic=%d\n", attr->max_rd_atomic); return -EINVAL; } if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC && attr->max_dest_rd_atomic > hr_dev->caps.max_qp_dest_rdma) { - dev_err(dev, "attr max_dest_rd_atomic invalid.attr->max_dest_rd_atomic=%d\n", + ibdev_err(&hr_dev->ib_dev, + "attr max_dest_rd_atomic invalid.attr->max_dest_rd_atomic=%d\n", attr->max_dest_rd_atomic); return -EINVAL; } @@ -1143,7 +1147,6 @@ int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device); struct hns_roce_qp *hr_qp = to_hr_qp(ibqp); enum ib_qp_state cur_state, new_state; - struct device *dev = hr_dev->dev; int ret = -EINVAL; mutex_lock(&hr_qp->mutex); @@ -1160,14 +1163,15 @@ int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, if (hr_qp->rdb_en == 1) hr_qp->rq.head = *(int *)(hr_qp->rdb.virt_addr); } else { - dev_warn(dev, "flush cqe is not supported in userspace!\n"); + ibdev_warn(&hr_dev->ib_dev, + "flush cqe is not supported in userspace!\n"); goto out; } } if (!ib_modify_qp_is_ok(cur_state, new_state, ibqp->qp_type, attr_mask)) { - dev_err(dev, "ib_modify_qp_is_ok failed\n"); + ibdev_err(&hr_dev->ib_dev, "ib_modify_qp_is_ok failed\n"); goto out; } @@ -1178,7 +1182,8 @@ int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, if (cur_state == new_state && cur_state == IB_QPS_RESET) { if (hr_dev->caps.min_wqes) { ret = -EPERM; - dev_err(dev, "cur_state=%d new_state=%d\n", cur_state, + ibdev_err(&hr_dev->ib_dev, + "cur_state=%d new_state=%d\n", cur_state, new_state); } else { ret = 0;