From patchwork Thu Feb 7 17:33:15 2019
X-Patchwork-Submitter: Max Gurtovoy
X-Patchwork-Id: 10801643
From: Max Gurtovoy
To: linux-rdma@vger.kernel.org, sagi@grimberg.me, hch@lst.de, leon@kernel.org, jgg@mellanox.com, dledford@redhat.com, bvanassche@acm.org
Cc: maxg@mellanox.com, israelr@mellanox.com, idanb@mellanox.com, vladimirk@mellanox.com, shlomin@mellanox.com, oren@mellanox.com
Subject: [PATCH 01/17] RDMA/core: Introduce new header file for signature operations
Date: Thu, 7 Feb 2019 19:33:15 +0200
Message-Id: <1549560811-8655-2-git-send-email-maxg@mellanox.com>
In-Reply-To: <1549560811-8655-1-git-send-email-maxg@mellanox.com>
References: <1549560811-8655-1-git-send-email-maxg@mellanox.com>

Move the signature operation definitions out of the overloaded ib_verbs.h
file and into a dedicated header, to make the code more readable.
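For illustration, here is a minimal sketch (not part of the patch) of how a ULP might fill in the structures this header carries, following the kdoc in the diff below: a T10-DIF Type 1 layout with a CRC block guard on the wire and an unprotected memory domain. The helper name and its 'lba' parameter are hypothetical; kernel context with <rdma/signature.h> and <linux/string.h> is assumed.

	/*
	 * Hypothetical helper, for illustration only: describe a T10-DIF
	 * Type 1 protected transfer using the definitions moved into
	 * <rdma/signature.h> by this patch.
	 */
	static void example_set_t10dif_type1(struct ib_sig_attrs *attrs, u32 lba)
	{
		memset(attrs, 0, sizeof(*attrs));

		/* Wire domain: 512-byte blocks carry DIF with a CRC block guard. */
		attrs->wire.sig_type = IB_SIG_TYPE_T10_DIF;
		attrs->wire.sig.dif.bg_type = IB_T10DIF_CRC;
		attrs->wire.sig.dif.pi_interval = 512;
		attrs->wire.sig.dif.ref_tag = lba;	/* seeded from the start LBA */
		attrs->wire.sig.dif.ref_remap = true;	/* reftag increments per block */
		attrs->wire.sig.dif.app_escape = true;	/* skip check if apptag == 0xffff */

		/* Memory domain: the local buffer carries no protection information. */
		attrs->mem.sig_type = IB_SIG_TYPE_NONE;

		/* Verify guard and reftag; leave the application tag unchecked. */
		attrs->check_mask = IB_SIG_CHECK_GUARD | IB_SIG_CHECK_REFTAG;
	}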
Signed-off-by: Max Gurtovoy Signed-off-by: Israel Rukshin Reviewed-by: Leon Romanovsky Reviewed-by: Sagi Grimberg --- include/rdma/ib_verbs.h | 112 +------------------------------------------ include/rdma/signature.h | 120 +++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 121 insertions(+), 111 deletions(-) create mode 100644 include/rdma/signature.h diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 80debf5982ac..f89b521245ec 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -61,6 +61,7 @@ #include #include #include +#include #include #include @@ -240,17 +241,6 @@ enum ib_device_cap_flags { IB_DEVICE_PCI_WRITE_END_PADDING = (1ULL << 36), }; -enum ib_signature_prot_cap { - IB_PROT_T10DIF_TYPE_1 = 1, - IB_PROT_T10DIF_TYPE_2 = 1 << 1, - IB_PROT_T10DIF_TYPE_3 = 1 << 2, -}; - -enum ib_signature_guard_cap { - IB_GUARD_T10DIF_CRC = 1, - IB_GUARD_T10DIF_CSUM = 1 << 1, -}; - enum ib_atomic_cap { IB_ATOMIC_NONE, IB_ATOMIC_HCA, @@ -773,106 +763,6 @@ enum ib_mr_type { IB_MR_TYPE_SG_GAPS, }; -/** - * Signature types - * IB_SIG_TYPE_NONE: Unprotected. - * IB_SIG_TYPE_T10_DIF: Type T10-DIF - */ -enum ib_signature_type { - IB_SIG_TYPE_NONE, - IB_SIG_TYPE_T10_DIF, -}; - -/** - * Signature T10-DIF block-guard types - * IB_T10DIF_CRC: Corresponds to T10-PI mandated CRC checksum rules. - * IB_T10DIF_CSUM: Corresponds to IP checksum rules. - */ -enum ib_t10_dif_bg_type { - IB_T10DIF_CRC, - IB_T10DIF_CSUM -}; - -/** - * struct ib_t10_dif_domain - Parameters specific for T10-DIF - * domain. - * @bg_type: T10-DIF block guard type (CRC|CSUM) - * @pi_interval: protection information interval. - * @bg: seed of guard computation. - * @app_tag: application tag of guard block - * @ref_tag: initial guard block reference tag. - * @ref_remap: Indicate wethear the reftag increments each block - * @app_escape: Indicate to skip block check if apptag=0xffff - * @ref_escape: Indicate to skip block check if reftag=0xffffffff - * @apptag_check_mask: check bitmask of application tag. - */ -struct ib_t10_dif_domain { - enum ib_t10_dif_bg_type bg_type; - u16 pi_interval; - u16 bg; - u16 app_tag; - u32 ref_tag; - bool ref_remap; - bool app_escape; - bool ref_escape; - u16 apptag_check_mask; -}; - -/** - * struct ib_sig_domain - Parameters for signature domain - * @sig_type: specific signauture type - * @sig: union of all signature domain attributes that may - * be used to set domain layout. - */ -struct ib_sig_domain { - enum ib_signature_type sig_type; - union { - struct ib_t10_dif_domain dif; - } sig; -}; - -/** - * struct ib_sig_attrs - Parameters for signature handover operation - * @check_mask: bitmask for signature byte check (8 bytes) - * @mem: memory domain layout desciptor. - * @wire: wire domain layout desciptor. 
- */ -struct ib_sig_attrs { - u8 check_mask; - struct ib_sig_domain mem; - struct ib_sig_domain wire; -}; - -enum ib_sig_err_type { - IB_SIG_BAD_GUARD, - IB_SIG_BAD_REFTAG, - IB_SIG_BAD_APPTAG, -}; - -/** - * Signature check masks (8 bytes in total) according to the T10-PI standard: - * -------- -------- ------------ - * | GUARD | APPTAG | REFTAG | - * | 2B | 2B | 4B | - * -------- -------- ------------ - */ -enum { - IB_SIG_CHECK_GUARD = 0xc0, - IB_SIG_CHECK_APPTAG = 0x30, - IB_SIG_CHECK_REFTAG = 0x0f, -}; - -/** - * struct ib_sig_err - signature error descriptor - */ -struct ib_sig_err { - enum ib_sig_err_type err_type; - u32 expected; - u32 actual; - u64 sig_err_offset; - u32 key; -}; - enum ib_mr_status_check { IB_MR_CHECK_SIG_STATUS = 1, }; diff --git a/include/rdma/signature.h b/include/rdma/signature.h new file mode 100644 index 000000000000..9cecefd94bcf --- /dev/null +++ b/include/rdma/signature.h @@ -0,0 +1,120 @@ +/* SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB) */ +/* + * Copyright (c) 2017-2018 Mellanox Technologies. All rights reserved. + */ + +#ifndef _RDMA_SIGNATURE_H_ +#define _RDMA_SIGNATURE_H_ + +enum ib_signature_prot_cap { + IB_PROT_T10DIF_TYPE_1 = 1, + IB_PROT_T10DIF_TYPE_2 = 1 << 1, + IB_PROT_T10DIF_TYPE_3 = 1 << 2, +}; + +enum ib_signature_guard_cap { + IB_GUARD_T10DIF_CRC = 1, + IB_GUARD_T10DIF_CSUM = 1 << 1, +}; + +/** + * Signature types + * IB_SIG_TYPE_NONE: Unprotected. + * IB_SIG_TYPE_T10_DIF: Type T10-DIF + */ +enum ib_signature_type { + IB_SIG_TYPE_NONE, + IB_SIG_TYPE_T10_DIF, +}; + +/** + * Signature T10-DIF block-guard types + * IB_T10DIF_CRC: Corresponds to T10-PI mandated CRC checksum rules. + * IB_T10DIF_CSUM: Corresponds to IP checksum rules. + */ +enum ib_t10_dif_bg_type { + IB_T10DIF_CRC, + IB_T10DIF_CSUM, +}; + +/** + * struct ib_t10_dif_domain - Parameters specific for T10-DIF + * domain. + * @bg_type: T10-DIF block guard type (CRC|CSUM) + * @pi_interval: protection information interval. + * @bg: seed of guard computation. + * @app_tag: application tag of guard block + * @ref_tag: initial guard block reference tag. + * @ref_remap: Indicate wethear the reftag increments each block + * @app_escape: Indicate to skip block check if apptag=0xffff + * @ref_escape: Indicate to skip block check if reftag=0xffffffff + * @apptag_check_mask: check bitmask of application tag. + */ +struct ib_t10_dif_domain { + enum ib_t10_dif_bg_type bg_type; + u16 pi_interval; + u16 bg; + u16 app_tag; + u32 ref_tag; + bool ref_remap; + bool app_escape; + bool ref_escape; + u16 apptag_check_mask; +}; + +/** + * struct ib_sig_domain - Parameters for signature domain + * @sig_type: specific signauture type + * @sig: union of all signature domain attributes that may + * be used to set domain layout. + */ +struct ib_sig_domain { + enum ib_signature_type sig_type; + union { + struct ib_t10_dif_domain dif; + } sig; +}; + +/** + * struct ib_sig_attrs - Parameters for signature handover operation + * @check_mask: bitmask for signature byte check (8 bytes) + * @mem: memory domain layout descriptor. + * @wire: wire domain layout descriptor. 
+ */ +struct ib_sig_attrs { + u8 check_mask; + struct ib_sig_domain mem; + struct ib_sig_domain wire; +}; + +enum ib_sig_err_type { + IB_SIG_BAD_GUARD, + IB_SIG_BAD_REFTAG, + IB_SIG_BAD_APPTAG, +}; + +/** + * Signature check masks (8 bytes in total) according to the T10-PI standard: + * -------- -------- ------------ + * | GUARD | APPTAG | REFTAG | + * | 2B | 2B | 4B | + * -------- -------- ------------ + */ +enum { + IB_SIG_CHECK_GUARD = 0xc0, + IB_SIG_CHECK_APPTAG = 0x30, + IB_SIG_CHECK_REFTAG = 0x0f, +}; + +/** + * struct ib_sig_err - signature error descriptor + */ +struct ib_sig_err { + enum ib_sig_err_type err_type; + u32 expected; + u32 actual; + u64 sig_err_offset; + u32 key; +}; + +#endif /* _RDMA_SIGNATURE_H_ */

From patchwork Thu Feb 7 17:33:16 2019
X-Patchwork-Submitter: Max Gurtovoy
X-Patchwork-Id: 10801613
From: Max Gurtovoy
To: linux-rdma@vger.kernel.org, sagi@grimberg.me, hch@lst.de, leon@kernel.org, jgg@mellanox.com, dledford@redhat.com, bvanassche@acm.org
Cc: maxg@mellanox.com, israelr@mellanox.com, idanb@mellanox.com, vladimirk@mellanox.com, shlomin@mellanox.com, oren@mellanox.com
Subject: [PATCH 02/17] RDMA/core: Save the MR type in the ib_mr structure
Date: Thu, 7 Feb 2019 19:33:16 +0200
Message-Id: <1549560811-8655-3-git-send-email-maxg@mellanox.com>
In-Reply-To: <1549560811-8655-1-git-send-email-maxg@mellanox.com>
References: <1549560811-8655-1-git-send-email-maxg@mellanox.com>

This is a preparation for the signature verbs API change. It is needed because, in the upcoming patches, the MR type will determine whether internal resources must be allocated in the LLD (low-level driver) for signature handover operations.
It will also help to make sure that signature-related functions are called with an appropriate MR type, and fail otherwise.

Signed-off-by: Max Gurtovoy
Signed-off-by: Israel Rukshin
Reviewed-by: Sagi Grimberg
---
 drivers/infiniband/core/uverbs_cmd.c | 1 + drivers/infiniband/core/verbs.c | 1 + include/rdma/ib_verbs.h | 1 + 3 files changed, 3 insertions(+) diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c index 3317300ab036..72c5a8daf558 100644 --- a/drivers/infiniband/core/uverbs_cmd.c +++ b/drivers/infiniband/core/uverbs_cmd.c @@ -737,6 +737,7 @@ static int ib_uverbs_reg_mr(struct uverbs_attr_bundle *attrs) mr->device = pd->device; mr->pd = pd; + mr->type = IB_MR_TYPE_MEM_REG; mr->dm = NULL; mr->uobject = uobj; atomic_inc(&pd->usecnt); diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index ac011836bb54..4588b933d4b4 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -1992,6 +1992,7 @@ struct ib_mr *ib_alloc_mr(struct ib_pd *pd, mr->need_inval = false; mr->res.type = RDMA_RESTRACK_MR; rdma_restrack_kadd(&mr->res); + mr->type = mr_type; } return mr; diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index f89b521245ec..42a9e297c21f 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -1695,6 +1695,7 @@ struct ib_mr { u64 iova; u64 length; unsigned int page_size; + enum ib_mr_type type; bool need_inval; union { struct ib_uobject *uobject; /* user */
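The commit message above anticipates type checks in the signature path. A minimal sketch of such a check, assuming a hypothetical helper name (the actual checks arrive later in this series):

	/*
	 * Illustrative sketch only -- not from this patch. With mr->type
	 * recorded at allocation time, a signature path can reject MRs of
	 * the wrong kind instead of misprogramming the device.
	 */
	static int example_validate_sig_mr(struct ib_mr *mr)
	{
		if (mr->type != IB_MR_TYPE_SIGNATURE)
			return -EINVAL;	/* e.g. a plain IB_MR_TYPE_MEM_REG MR */

		return 0;
	}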
From patchwork Thu Feb 7 17:33:17 2019
X-Patchwork-Submitter: Max Gurtovoy
X-Patchwork-Id: 10801617
From: Max Gurtovoy
To: linux-rdma@vger.kernel.org, sagi@grimberg.me, hch@lst.de, leon@kernel.org, jgg@mellanox.com, dledford@redhat.com, bvanassche@acm.org
Cc: maxg@mellanox.com, israelr@mellanox.com, idanb@mellanox.com, vladimirk@mellanox.com, shlomin@mellanox.com, oren@mellanox.com
Subject: [PATCH 03/17] RDMA/core: Introduce IB_MR_TYPE_PI and ib_alloc_mr_integrity API
Date: Thu, 7 Feb 2019 19:33:17 +0200
Message-Id: <1549560811-8655-4-git-send-email-maxg@mellanox.com>
In-Reply-To: <1549560811-8655-1-git-send-email-maxg@mellanox.com>
References: <1549560811-8655-1-git-send-email-maxg@mellanox.com>

From: Israel Rukshin

This is a preparation for the signature verbs API redesign. In the new design, a single MR with the IB_MR_TYPE_PI type will be used to perform the mapping needed for PI (protection information) operations.

Signed-off-by: Israel Rukshin
Signed-off-by: Max Gurtovoy
Reviewed-by: Sagi Grimberg
---
 drivers/infiniband/core/device.c | 1 + drivers/infiniband/core/verbs.c | 46 ++++++++++++++++++++++++++++++++++++++++ include/rdma/ib_verbs.h | 11 ++++++++++ 3 files changed, 58 insertions(+) diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c index 238ec42778ef..bc4c7c5c305b 100644 --- a/drivers/infiniband/core/device.c +++ b/drivers/infiniband/core/device.c @@ -1244,6 +1244,7 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops) SET_DEVICE_OP(dev_ops, alloc_fmr); SET_DEVICE_OP(dev_ops, alloc_hw_stats); SET_DEVICE_OP(dev_ops, alloc_mr); + SET_DEVICE_OP(dev_ops, alloc_mr_integrity); SET_DEVICE_OP(dev_ops, alloc_mw); SET_DEVICE_OP(dev_ops, alloc_pd); SET_DEVICE_OP(dev_ops, alloc_rdma_netdev); diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index 4588b933d4b4..047a51bd3e79 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -1982,6 +1982,9 @@ struct ib_mr *ib_alloc_mr(struct ib_pd *pd, if (!pd->device->ops.alloc_mr) return ERR_PTR(-EOPNOTSUPP); + if (WARN_ON_ONCE(mr_type == IB_MR_TYPE_PI)) + return ERR_PTR(-EINVAL); + mr = pd->device->ops.alloc_mr(pd, mr_type, max_num_sg); if (!IS_ERR(mr)) { mr->device = pd->device; @@ -1999,6 +2002,49 @@ struct ib_mr *ib_alloc_mr(struct ib_pd *pd, } EXPORT_SYMBOL(ib_alloc_mr); +/** + * ib_alloc_mr_integrity() - Allocates an integrity memory region + * @pd: protection domain associated with the region + * @max_num_data_sg: maximum data sg entries available for registration + * @max_num_meta_sg: maximum metadata sg entries available for + * registration + * + * Notes: + * Memory registration page/sg lists must not exceed max_num_sg, + * also the integrity page/sg lists must not exceed max_num_meta_sg.
+ * + */ +struct ib_mr *ib_alloc_mr_integrity(struct ib_pd *pd, + u32 max_num_data_sg, + u32 max_num_meta_sg) +{ + struct ib_mr *mr; + + if (!pd->device->ops.alloc_mr_integrity) + return ERR_PTR(-EOPNOTSUPP); + + if (!max_num_meta_sg) + return ERR_PTR(-EINVAL); + + mr = pd->device->ops.alloc_mr_integrity(pd, max_num_data_sg, + max_num_meta_sg); + if (IS_ERR(mr)) + return mr; + + mr->device = pd->device; + mr->pd = pd; + mr->dm = NULL; + mr->uobject = NULL; + atomic_inc(&pd->usecnt); + mr->need_inval = false; + mr->res.type = RDMA_RESTRACK_MR; + rdma_restrack_kadd(&mr->res); + mr->type = IB_MR_TYPE_PI; + + return mr; +} +EXPORT_SYMBOL(ib_alloc_mr_integrity); + /* "Fast" memory regions */ struct ib_fmr *ib_alloc_fmr(struct ib_pd *pd, diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 42a9e297c21f..8edba47f272e 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -756,11 +756,15 @@ __attribute_const__ int ib_rate_to_mbps(enum ib_rate rate); * register any arbitrary sg lists (without * the normal mr constraints - see * ib_map_mr_sg) + * @IB_MR_TYPE_PI: memory region that is used for + * PI (protection information) operations + */ enum ib_mr_type { IB_MR_TYPE_MEM_REG, IB_MR_TYPE_SIGNATURE, IB_MR_TYPE_SG_GAPS, + IB_MR_TYPE_PI, }; enum ib_mr_status_check { @@ -2306,6 +2310,9 @@ struct ib_device_ops { int (*dereg_mr)(struct ib_mr *mr); struct ib_mr *(*alloc_mr)(struct ib_pd *pd, enum ib_mr_type mr_type, u32 max_num_sg); + struct ib_mr *(*alloc_mr_integrity)(struct ib_pd *pd, + u32 max_num_data_sg, + u32 max_num_meta_sg); int (*advise_mr)(struct ib_pd *pd, enum ib_uverbs_advise_mr_advice advice, u32 flags, struct ib_sge *sg_list, u32 num_sge, @@ -3677,6 +3684,10 @@ struct ib_mr *ib_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, u32 max_num_sg); +struct ib_mr *ib_alloc_mr_integrity(struct ib_pd *pd, + u32 max_num_data_sg, + u32 max_num_meta_sg); + /** * ib_update_fast_reg_key - updates the key portion of the fast_reg MR * R_Key and L_Key. 
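A minimal usage sketch of the API added above (illustrative only; the sg-entry counts are arbitrary and the helper name is hypothetical):

	/* Sketch, not from the patch: allocate a PI-capable MR via the new API. */
	static struct ib_mr *example_alloc_pi_mr(struct ib_pd *pd)
	{
		struct ib_mr *mr;

		/* A single MR now covers both the data and the metadata mapping. */
		mr = ib_alloc_mr_integrity(pd, 128 /* data SGEs */, 1 /* meta SGEs */);
		if (IS_ERR(mr))
			return mr;	/* -EOPNOTSUPP if the driver lacks the op */

		/*
		 * mr->type is IB_MR_TYPE_PI here; note that ib_alloc_mr()
		 * now refuses this type with a WARN_ON_ONCE().
		 */
		return mr;
	}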
From patchwork Thu Feb 7 17:33:18 2019
X-Patchwork-Submitter: Max Gurtovoy
X-Patchwork-Id: 10801637
From: Max Gurtovoy
To: linux-rdma@vger.kernel.org, sagi@grimberg.me, hch@lst.de, leon@kernel.org, jgg@mellanox.com, dledford@redhat.com, bvanassche@acm.org
Cc: maxg@mellanox.com, israelr@mellanox.com, idanb@mellanox.com, vladimirk@mellanox.com, shlomin@mellanox.com, oren@mellanox.com
Subject: [PATCH 04/17] RDMA/core: Introduce ib_scatterlist structure
Date: Thu, 7 Feb 2019 19:33:18 +0200
Message-Id: <1549560811-8655-5-git-send-email-maxg@mellanox.com>
In-Reply-To: <1549560811-8655-1-git-send-email-maxg@mellanox.com>
References: <1549560811-8655-1-git-send-email-maxg@mellanox.com>

From: Israel Rukshin

Instead of adding more arguments to the RDMA mapping functions for PI support, introduce a new data structure, 'ib_scatterlist', which holds the needed information about a mapped scatterlist. In the future, this will also be used by ULPs that define these attributes explicitly.
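A sketch of the calling convention this patch introduces, mirroring the conversions in the diff below; 'example_map' and its parameters are hypothetical stand-ins for values a ULP already has at hand:

	/* Sketch of the new convention: bundle the mapped sg list, the
	 * dma_map_sg() return value and a byte offset in one structure. */
	static int example_map(struct ib_mr *mr, struct scatterlist *sgl, int nents)
	{
		struct ib_scatterlist ib_sg = {
			.sg = sgl,		/* dma mapped scatterlist */
			.dma_nents = nents,	/* as returned by dma_map_sg() */
			.offset = 0,		/* byte offset into the first element */
		};

		/* Replaces ib_map_mr_sg(mr, sgl, nents, &offset, PAGE_SIZE). */
		return ib_map_mr_sg(mr, &ib_sg, PAGE_SIZE);
	}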
Signed-off-by: Israel Rukshin Reviewed-by: Max Gurtovoy --- drivers/infiniband/core/rw.c | 6 +++- drivers/infiniband/core/verbs.c | 39 +++++++++++------------ drivers/infiniband/hw/bnxt_re/ib_verbs.c | 5 ++- drivers/infiniband/hw/bnxt_re/ib_verbs.h | 3 +- drivers/infiniband/hw/cxgb3/iwch_provider.c | 5 ++- drivers/infiniband/hw/cxgb4/iw_cxgb4.h | 3 +- drivers/infiniband/hw/cxgb4/mem.c | 5 ++- drivers/infiniband/hw/hns/hns_roce_device.h | 3 +- drivers/infiniband/hw/hns/hns_roce_mr.c | 5 ++- drivers/infiniband/hw/i40iw/i40iw_verbs.c | 8 ++--- drivers/infiniband/hw/mlx4/mlx4_ib.h | 3 +- drivers/infiniband/hw/mlx4/mr.c | 5 ++- drivers/infiniband/hw/mlx5/mlx5_ib.h | 3 +- drivers/infiniband/hw/mlx5/mr.c | 21 +++++-------- drivers/infiniband/hw/nes/nes_verbs.c | 5 ++- drivers/infiniband/hw/ocrdma/ocrdma_verbs.c | 5 ++- drivers/infiniband/hw/ocrdma/ocrdma_verbs.h | 3 +- drivers/infiniband/hw/qedr/verbs.c | 5 ++- drivers/infiniband/hw/qedr/verbs.h | 3 +- drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c | 5 ++- drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h | 3 +- drivers/infiniband/sw/rdmavt/mr.c | 10 ++---- drivers/infiniband/sw/rdmavt/mr.h | 3 +- drivers/infiniband/sw/rxe/rxe_verbs.c | 5 ++- drivers/infiniband/ulp/iser/iser_memory.c | 13 ++++++-- drivers/infiniband/ulp/srp/ib_srp.c | 41 +++++++++++++------------ drivers/nvme/host/rdma.c | 6 +++- fs/cifs/smbdirect.c | 7 +++-- include/rdma/ib_verbs.h | 29 +++++++++++------ net/rds/ib_frmr.c | 9 ++++-- net/smc/smc_ib.c | 9 +++--- net/sunrpc/xprtrdma/frwr_ops.c | 6 +++- 32 files changed, 144 insertions(+), 137 deletions(-) diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c index d22c4a2ebac6..23dccb35b7bf 100644 --- a/drivers/infiniband/core/rw.c +++ b/drivers/infiniband/core/rw.c @@ -72,6 +72,7 @@ static int rdma_rw_init_one_mr(struct ib_qp *qp, u8 port_num, { u32 pages_per_mr = rdma_rw_fr_page_list_len(qp->pd->device); u32 nents = min(sg_cnt, pages_per_mr); + struct ib_scatterlist ib_sg; int count = 0, ret; reg->mr = ib_mr_pool_get(qp, &qp->rdma_mrs); @@ -87,7 +88,10 @@ static int rdma_rw_init_one_mr(struct ib_qp *qp, u8 port_num, reg->inv_wr.next = NULL; } - ret = ib_map_mr_sg(reg->mr, sg, nents, &offset, PAGE_SIZE); + ib_sg.sg = sg; + ib_sg.dma_nents = nents; + ib_sg.offset = offset; + ret = ib_map_mr_sg(reg->mr, &ib_sg, PAGE_SIZE); if (ret < 0 || ret < nents) { ib_mr_pool_put(qp, &qp->rdma_mrs, reg->mr); return -EINVAL; diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index 047a51bd3e79..08aa6715fa12 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -2406,9 +2406,7 @@ EXPORT_SYMBOL(ib_set_vf_guid); * ib_map_mr_sg() - Map the largest prefix of a dma mapped SG list * and set it the memory region. * @mr: memory region - * @sg: dma mapped scatterlist - * @sg_nents: number of entries in sg - * @sg_offset: offset in bytes into sg + * @ib_sg: dma mapped ib scatterlist * @page_size: page vector desired page size * * Constraints: @@ -2427,15 +2425,15 @@ EXPORT_SYMBOL(ib_set_vf_guid); * After this completes successfully, the memory region * is ready for registration. 
*/ -int ib_map_mr_sg(struct ib_mr *mr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset, unsigned int page_size) +int ib_map_mr_sg(struct ib_mr *mr, struct ib_scatterlist *ib_sg, + unsigned int page_size) { if (unlikely(!mr->device->ops.map_mr_sg)) return -EOPNOTSUPP; mr->page_size = page_size; - return mr->device->ops.map_mr_sg(mr, sg, sg_nents, sg_offset); + return mr->device->ops.map_mr_sg(mr, ib_sg); } EXPORT_SYMBOL(ib_map_mr_sg); @@ -2443,12 +2441,7 @@ EXPORT_SYMBOL(ib_map_mr_sg); * ib_sg_to_pages() - Convert the largest prefix of a sg list * to a page vector * @mr: memory region - * @sgl: dma mapped scatterlist - * @sg_nents: number of entries in sg - * @sg_offset_p: IN: start offset in bytes into sg - * OUT: offset in bytes for element n of the sg of the first - * byte that has not been processed where n is the return - * value of this function. + * @ib_sgl: dma mapped ib scatterlist * @set_page: driver page assignment function pointer * * Core service helper for drivers to convert the largest @@ -2456,26 +2449,32 @@ EXPORT_SYMBOL(ib_map_mr_sg); * prefix converted is the prefix that meet the requirements * of ib_map_mr_sg. * + * IN ib_sgl->offset: start offset in bytes into ib_sgl->sg + * OUT ib_sgl->offset: offset in bytes for element n of the sg of the first + * byte that has not been processed where n is the return + * value of this function. + * * Returns the number of sg elements that were assigned to * a page vector. */ -int ib_sg_to_pages(struct ib_mr *mr, struct scatterlist *sgl, int sg_nents, - unsigned int *sg_offset_p, int (*set_page)(struct ib_mr *, u64)) +int ib_sg_to_pages(struct ib_mr *mr, struct ib_scatterlist *ib_sgl, + int (*set_page)(struct ib_mr *, u64)) { struct scatterlist *sg; + struct scatterlist *sgl = ib_sgl->sg; u64 last_end_dma_addr = 0; - unsigned int sg_offset = sg_offset_p ? *sg_offset_p : 0; + unsigned int sg_offset = ib_sgl->offset; unsigned int last_page_off = 0; u64 page_mask = ~((u64)mr->page_size - 1); int i, ret; - if (unlikely(sg_nents <= 0 || sg_offset > sg_dma_len(&sgl[0]))) + if (unlikely(ib_sgl->dma_nents <= 0 || sg_offset > sg_dma_len(&sgl[0]))) return -EINVAL; mr->iova = sg_dma_address(&sgl[0]) + sg_offset; mr->length = 0; - for_each_sg(sgl, sg, sg_nents, i) { + for_each_sg(sgl, sg, ib_sgl->dma_nents, i) { u64 dma_addr = sg_dma_address(sg) + sg_offset; u64 prev_addr = dma_addr; unsigned int dma_len = sg_dma_len(sg) - sg_offset; @@ -2505,8 +2504,7 @@ int ib_sg_to_pages(struct ib_mr *mr, struct scatterlist *sgl, int sg_nents, if (unlikely(ret < 0)) { sg_offset = prev_addr - sg_dma_address(sg); mr->length += prev_addr - dma_addr; - if (sg_offset_p) - *sg_offset_p = sg_offset; + ib_sgl->offset = sg_offset; return i || sg_offset ? 
i : ret; } prev_addr = page_addr; @@ -2521,8 +2519,7 @@ int ib_sg_to_pages(struct ib_mr *mr, struct scatterlist *sgl, int sg_nents, sg_offset = 0; } - if (sg_offset_p) - *sg_offset_p = 0; + ib_sgl->offset = 0; return i; } EXPORT_SYMBOL(ib_sg_to_pages); diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c index 1e2515e2eb62..201f7ba3ebdd 100644 --- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c +++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c @@ -3400,13 +3400,12 @@ static int bnxt_re_set_page(struct ib_mr *ib_mr, u64 addr) return 0; } -int bnxt_re_map_mr_sg(struct ib_mr *ib_mr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset) +int bnxt_re_map_mr_sg(struct ib_mr *ib_mr, struct ib_scatterlist *ib_sg) { struct bnxt_re_mr *mr = container_of(ib_mr, struct bnxt_re_mr, ib_mr); mr->npages = 0; - return ib_sg_to_pages(ib_mr, sg, sg_nents, sg_offset, bnxt_re_set_page); + return ib_sg_to_pages(ib_mr, ib_sg, bnxt_re_set_page); } struct ib_mr *bnxt_re_alloc_mr(struct ib_pd *ib_pd, enum ib_mr_type type, diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.h b/drivers/infiniband/hw/bnxt_re/ib_verbs.h index c4af72604b4f..110bbc4f4fd3 100644 --- a/drivers/infiniband/hw/bnxt_re/ib_verbs.h +++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.h @@ -205,8 +205,7 @@ int bnxt_re_poll_cq(struct ib_cq *cq, int num_entries, struct ib_wc *wc); int bnxt_re_req_notify_cq(struct ib_cq *cq, enum ib_cq_notify_flags flags); struct ib_mr *bnxt_re_get_dma_mr(struct ib_pd *pd, int mr_access_flags); -int bnxt_re_map_mr_sg(struct ib_mr *ib_mr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset); +int bnxt_re_map_mr_sg(struct ib_mr *ib_mr, struct ib_scatterlist *ib_sg); struct ib_mr *bnxt_re_alloc_mr(struct ib_pd *ib_pd, enum ib_mr_type mr_type, u32 max_num_sg); int bnxt_re_dereg_mr(struct ib_mr *mr); diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c index b34b1a1bd94b..89fad3af8bea 100644 --- a/drivers/infiniband/hw/cxgb3/iwch_provider.c +++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c @@ -746,14 +746,13 @@ static int iwch_set_page(struct ib_mr *ibmr, u64 addr) return 0; } -static int iwch_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, - int sg_nents, unsigned int *sg_offset) +static int iwch_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg) { struct iwch_mr *mhp = to_iwch_mr(ibmr); mhp->npages = 0; - return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, iwch_set_page); + return ib_sg_to_pages(ibmr, ib_sg, iwch_set_page); } static int iwch_destroy_qp(struct ib_qp *ib_qp) diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h index f0fceadd0d12..1b46ff1a1b4e 100644 --- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h +++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h @@ -1051,8 +1051,7 @@ void c4iw_qp_rem_ref(struct ib_qp *qp); struct ib_mr *c4iw_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, u32 max_num_sg); -int c4iw_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset); +int c4iw_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg); int c4iw_dealloc_mw(struct ib_mw *mw); void c4iw_dealloc(struct uld_ctx *ctx); struct ib_mw *c4iw_alloc_mw(struct ib_pd *pd, enum ib_mw_type type, diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c index 7b76e6f81aeb..48b24f70b52c 100644 --- a/drivers/infiniband/hw/cxgb4/mem.c +++ b/drivers/infiniband/hw/cxgb4/mem.c @@ -782,14 +782,13 @@ static int 
c4iw_set_page(struct ib_mr *ibmr, u64 addr) return 0; } -int c4iw_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset) +int c4iw_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg) { struct c4iw_mr *mhp = to_c4iw_mr(ibmr); mhp->mpl_len = 0; - return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, c4iw_set_page); + return ib_sg_to_pages(ibmr, ib_sg, c4iw_set_page); } int c4iw_dereg_mr(struct ib_mr *ib_mr) diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h index 509e467843f6..ed6512a47459 100644 --- a/drivers/infiniband/hw/hns/hns_roce_device.h +++ b/drivers/infiniband/hw/hns/hns_roce_device.h @@ -1075,8 +1075,7 @@ int hns_roce_rereg_user_mr(struct ib_mr *mr, int flags, u64 start, u64 length, struct ib_udata *udata); struct ib_mr *hns_roce_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, u32 max_num_sg); -int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset); +int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg); int hns_roce_dereg_mr(struct ib_mr *ibmr); int hns_roce_hw2sw_mpt(struct hns_roce_dev *hr_dev, struct hns_roce_cmd_mailbox *mailbox, diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c index ee5991bd4171..042c50cbf498 100644 --- a/drivers/infiniband/hw/hns/hns_roce_mr.c +++ b/drivers/infiniband/hw/hns/hns_roce_mr.c @@ -1375,14 +1375,13 @@ static int hns_roce_set_page(struct ib_mr *ibmr, u64 addr) return 0; } -int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset) +int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg) { struct hns_roce_mr *mr = to_hr_mr(ibmr); mr->npages = 0; - return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, hns_roce_set_page); + return ib_sg_to_pages(ibmr, ib_sg, hns_roce_set_page); } static void hns_roce_mw_free(struct hns_roce_dev *hr_dev, diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c index 0b675b0742c2..17c9d5376c97 100644 --- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c +++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c @@ -1715,16 +1715,14 @@ static int i40iw_set_page(struct ib_mr *ibmr, u64 addr) /** * i40iw_map_mr_sg - map of sg list for fmr * @ibmr: ib mem to access iwarp mr pointer - * @sg: scatter gather list for fmr - * @sg_nents: number of sg pages + * @ib_sg: dma mapped ib scatter gather list for fmr */ -static int i40iw_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, - int sg_nents, unsigned int *sg_offset) +static int i40iw_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg) { struct i40iw_mr *iwmr = to_iwmr(ibmr); iwmr->npages = 0; - return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, i40iw_set_page); + return ib_sg_to_pages(ibmr, ib_sg, i40iw_set_page); } /** diff --git a/drivers/infiniband/hw/mlx4/mlx4_ib.h b/drivers/infiniband/hw/mlx4/mlx4_ib.h index e491f3eda6e7..659c40309a22 100644 --- a/drivers/infiniband/hw/mlx4/mlx4_ib.h +++ b/drivers/infiniband/hw/mlx4/mlx4_ib.h @@ -739,8 +739,7 @@ int mlx4_ib_dealloc_mw(struct ib_mw *mw); struct ib_mr *mlx4_ib_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, u32 max_num_sg); -int mlx4_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset); +int mlx4_ib_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg); int mlx4_ib_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period); int 
mlx4_ib_resize_cq(struct ib_cq *ibcq, int entries, struct ib_udata *udata); struct ib_cq *mlx4_ib_create_cq(struct ib_device *ibdev, diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c index c7c85c22e4e3..2ef2748b4f38 100644 --- a/drivers/infiniband/hw/mlx4/mr.c +++ b/drivers/infiniband/hw/mlx4/mr.c @@ -806,8 +806,7 @@ static int mlx4_set_page(struct ib_mr *ibmr, u64 addr) return 0; } -int mlx4_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset) +int mlx4_ib_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg) { struct mlx4_ib_mr *mr = to_mmr(ibmr); int rc; @@ -817,7 +816,7 @@ int mlx4_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, ib_dma_sync_single_for_cpu(ibmr->device, mr->page_map, mr->page_map_size, DMA_TO_DEVICE); - rc = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, mlx4_set_page); + rc = ib_sg_to_pages(ibmr, ib_sg, mlx4_set_page); ib_dma_sync_single_for_device(ibmr->device, mr->page_map, mr->page_map_size, DMA_TO_DEVICE); diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index b06d3b1efea8..33b0d042ef05 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -1107,8 +1107,7 @@ int mlx5_ib_dereg_mr(struct ib_mr *ibmr); struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, u32 max_num_sg); -int mlx5_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset); +int mlx5_ib_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg); int mlx5_ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num, const struct ib_wc *in_wc, const struct ib_grh *in_grh, const struct ib_mad_hdr *in, size_t in_mad_size, diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index fd6ea1f75085..659d39734523 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -1934,20 +1934,18 @@ int mlx5_ib_check_mr_status(struct ib_mr *ibmr, u32 check_mask, static int mlx5_ib_sg_to_klms(struct mlx5_ib_mr *mr, - struct scatterlist *sgl, - unsigned short sg_nents, - unsigned int *sg_offset_p) + struct ib_scatterlist *ib_sgl) { - struct scatterlist *sg = sgl; + struct scatterlist *sg = ib_sgl->sg; struct mlx5_klm *klms = mr->descs; - unsigned int sg_offset = sg_offset_p ? 
*sg_offset_p : 0; + unsigned int sg_offset = ib_sgl->offset; u32 lkey = mr->ibmr.pd->local_dma_lkey; int i; mr->ibmr.iova = sg_dma_address(sg) + sg_offset; mr->ibmr.length = 0; - for_each_sg(sgl, sg, sg_nents, i) { + for_each_sg(ib_sgl->sg, sg, ib_sgl->dma_nents, i) { if (unlikely(i >= mr->max_descs)) break; klms[i].va = cpu_to_be64(sg_dma_address(sg) + sg_offset); @@ -1959,8 +1957,7 @@ mlx5_ib_sg_to_klms(struct mlx5_ib_mr *mr, } mr->ndescs = i; - if (sg_offset_p) - *sg_offset_p = sg_offset; + ib_sgl->offset = sg_offset; return i; } @@ -1979,8 +1976,7 @@ static int mlx5_set_page(struct ib_mr *ibmr, u64 addr) return 0; } -int mlx5_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset) +int mlx5_ib_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg) { struct mlx5_ib_mr *mr = to_mmr(ibmr); int n; @@ -1992,10 +1988,9 @@ int mlx5_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, DMA_TO_DEVICE); if (mr->access_mode == MLX5_MKC_ACCESS_MODE_KLMS) - n = mlx5_ib_sg_to_klms(mr, sg, sg_nents, sg_offset); + n = mlx5_ib_sg_to_klms(mr, ib_sg); else - n = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, - mlx5_set_page); + n = ib_sg_to_pages(ibmr, ib_sg, mlx5_set_page); ib_dma_sync_single_for_device(ibmr->device, mr->desc_map, mr->desc_size * mr->max_descs, diff --git a/drivers/infiniband/hw/nes/nes_verbs.c b/drivers/infiniband/hw/nes/nes_verbs.c index 4e7f08ee1907..22883f901fb4 100644 --- a/drivers/infiniband/hw/nes/nes_verbs.c +++ b/drivers/infiniband/hw/nes/nes_verbs.c @@ -402,14 +402,13 @@ static int nes_set_page(struct ib_mr *ibmr, u64 addr) return 0; } -static int nes_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, - int sg_nents, unsigned int *sg_offset) +static int nes_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg) { struct nes_mr *nesmr = to_nesmr(ibmr); nesmr->npages = 0; - return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, nes_set_page); + return ib_sg_to_pages(ibmr, ib_sg, nes_set_page); } /** diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c index 287c332ff0e6..c23943140162 100644 --- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c +++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c @@ -3030,12 +3030,11 @@ static int ocrdma_set_page(struct ib_mr *ibmr, u64 addr) return 0; } -int ocrdma_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset) +int ocrdma_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg) { struct ocrdma_mr *mr = get_ocrdma_mr(ibmr); mr->npages = 0; - return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, ocrdma_set_page); + return ib_sg_to_pages(ibmr, ib_sg, ocrdma_set_page); } diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h index b69cfdce7970..f447a1afccfc 100644 --- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h +++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h @@ -110,7 +110,6 @@ struct ib_mr *ocrdma_reg_user_mr(struct ib_pd *, u64 start, u64 length, struct ib_mr *ocrdma_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, u32 max_num_sg); -int ocrdma_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset); +int ocrdma_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg); #endif /* __OCRDMA_VERBS_H__ */ diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c index e1ccf32b1c3d..083f7d4a54fd 100644 --- a/drivers/infiniband/hw/qedr/verbs.c +++ 
b/drivers/infiniband/hw/qedr/verbs.c @@ -2938,15 +2938,14 @@ static void handle_completed_mrs(struct qedr_dev *dev, struct mr_info *info) } } -int qedr_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, - int sg_nents, unsigned int *sg_offset) +int qedr_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg) { struct qedr_mr *mr = get_qedr_mr(ibmr); mr->npages = 0; handle_completed_mrs(mr->dev, &mr->info); - return ib_sg_to_pages(ibmr, sg, sg_nents, NULL, qedr_set_page); + return ib_sg_to_pages(ibmr, ib_sg, qedr_set_page); } struct ib_mr *qedr_get_dma_mr(struct ib_pd *ibpd, int acc) diff --git a/drivers/infiniband/hw/qedr/verbs.h b/drivers/infiniband/hw/qedr/verbs.h index 1852b7012bf4..67dbd105c38f 100644 --- a/drivers/infiniband/hw/qedr/verbs.h +++ b/drivers/infiniband/hw/qedr/verbs.h @@ -85,8 +85,7 @@ struct ib_mr *qedr_get_dma_mr(struct ib_pd *, int acc); struct ib_mr *qedr_reg_user_mr(struct ib_pd *, u64 start, u64 length, u64 virt, int acc, struct ib_udata *); -int qedr_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, - int sg_nents, unsigned int *sg_offset); +int qedr_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg); struct ib_mr *qedr_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, u32 max_num_sg); diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c index fa96fa4fb829..8c5d592a65c1 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c @@ -310,8 +310,7 @@ static int pvrdma_set_page(struct ib_mr *ibmr, u64 addr) return 0; } -int pvrdma_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset) +int pvrdma_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg) { struct pvrdma_user_mr *mr = to_vmr(ibmr); struct pvrdma_dev *dev = to_vdev(ibmr->device); @@ -319,7 +318,7 @@ int pvrdma_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, mr->npages = 0; - ret = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, pvrdma_set_page); + ret = ib_sg_to_pages(ibmr, ib_sg, pvrdma_set_page); if (ret < 0) dev_warn(&dev->pdev->dev, "could not map sg to pages\n"); diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h index f7f758d60110..215c288ec217 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h @@ -410,8 +410,7 @@ struct ib_mr *pvrdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, int pvrdma_dereg_mr(struct ib_mr *mr); struct ib_mr *pvrdma_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, u32 max_num_sg); -int pvrdma_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, - int sg_nents, unsigned int *sg_offset); +int pvrdma_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg); struct ib_cq *pvrdma_create_cq(struct ib_device *ibdev, const struct ib_cq_init_attr *attr, struct ib_ucontext *context, diff --git a/drivers/infiniband/sw/rdmavt/mr.c b/drivers/infiniband/sw/rdmavt/mr.c index 49c9541050d4..745e41e66f04 100644 --- a/drivers/infiniband/sw/rdmavt/mr.c +++ b/drivers/infiniband/sw/rdmavt/mr.c @@ -629,21 +629,17 @@ static int rvt_set_page(struct ib_mr *ibmr, u64 addr) /** * rvt_map_mr_sg - map sg list and set it the memory region * @ibmr: memory region - * @sg: dma mapped scatterlist - * @sg_nents: number of entries in sg - * @sg_offset: offset in bytes into sg + * @ib_sg: dma mapped ib scatterlist * * Return: number of sg elements mapped to the memory region */ -int 
rvt_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, - int sg_nents, unsigned int *sg_offset) +int rvt_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg) { struct rvt_mr *mr = to_imr(ibmr); mr->mr.length = 0; mr->mr.page_shift = PAGE_SHIFT; - return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, - rvt_set_page); + return ib_sg_to_pages(ibmr, ib_sg, rvt_set_page); } /** diff --git a/drivers/infiniband/sw/rdmavt/mr.h b/drivers/infiniband/sw/rdmavt/mr.h index 132800ee0205..878e93af3c46 100644 --- a/drivers/infiniband/sw/rdmavt/mr.h +++ b/drivers/infiniband/sw/rdmavt/mr.h @@ -82,8 +82,7 @@ int rvt_dereg_mr(struct ib_mr *ibmr); struct ib_mr *rvt_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, u32 max_num_sg); -int rvt_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, - int sg_nents, unsigned int *sg_offset); +int rvt_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg); struct ib_fmr *rvt_alloc_fmr(struct ib_pd *pd, int mr_access_flags, struct ib_fmr_attr *fmr_attr); int rvt_map_phys_fmr(struct ib_fmr *ibfmr, u64 *page_list, diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index b20e6e0415f5..3bec36b3ffeb 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -1080,15 +1080,14 @@ static int rxe_set_page(struct ib_mr *ibmr, u64 addr) return 0; } -static int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, - int sg_nents, unsigned int *sg_offset) +static int rxe_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg) { struct rxe_mem *mr = to_rmr(ibmr); int n; mr->nbuf = 0; - n = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rxe_set_page); + n = ib_sg_to_pages(ibmr, ib_sg, rxe_set_page); mr->va = ibmr->iova; mr->iova = ibmr->iova; diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c index 394d1b9c2ff7..da3f6f7cbc72 100644 --- a/drivers/infiniband/ulp/iser/iser_memory.c +++ b/drivers/infiniband/ulp/iser/iser_memory.c @@ -235,12 +235,15 @@ int iser_fast_reg_fmr(struct iscsi_iser_task *iser_task, struct iser_page_vec *page_vec = rsc->page_vec; struct ib_fmr_pool *fmr_pool = rsc->fmr_pool; struct ib_pool_fmr *fmr; + struct ib_scatterlist ib_sg; int ret, plen; page_vec->npages = 0; page_vec->fake_mr.page_size = SIZE_4K; - plen = ib_sg_to_pages(&page_vec->fake_mr, mem->sg, - mem->dma_nents, NULL, iser_set_page); + ib_sg.sg = mem->sg; + ib_sg.dma_nents = mem->dma_nents; + ib_sg.offset = 0; + plen = ib_sg_to_pages(&page_vec->fake_mr, &ib_sg, iser_set_page); if (unlikely(plen < mem->dma_nents)) { iser_err("page vec too short to hold this SG\n"); iser_data_buf_dump(mem, device->ib_device); @@ -441,6 +444,7 @@ static int iser_fast_reg_mr(struct iscsi_iser_task *iser_task, struct ib_cqe *cqe = &iser_task->iser_conn->ib_conn.reg_cqe; struct ib_mr *mr = rsc->mr; struct ib_reg_wr *wr; + struct ib_scatterlist ib_sg; int n; if (rsc->mr_valid) @@ -448,7 +452,10 @@ static int iser_fast_reg_mr(struct iscsi_iser_task *iser_task, ib_update_fast_reg_key(mr, ib_inc_rkey(mr->rkey)); - n = ib_map_mr_sg(mr, mem->sg, mem->dma_nents, NULL, SIZE_4K); + ib_sg.sg = mem->sg; + ib_sg.dma_nents = mem->dma_nents; + ib_sg.offset = 0; + n = ib_map_mr_sg(mr, &ib_sg, SIZE_4K); if (unlikely(n != mem->dma_nents)) { iser_err("failed to map sg (%d/%d)\n", n, mem->dma_nents); diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c index 31d91538bbf4..1ce1619990d5 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.c +++ 
b/drivers/infiniband/ulp/srp/ib_srp.c @@ -1511,15 +1511,15 @@ static void srp_reg_mr_err_done(struct ib_cq *cq, struct ib_wc *wc) } /* - * Map up to sg_nents elements of state->sg where *sg_offset_p is the offset - * where to start in the first element. If sg_offset_p != NULL then - * *sg_offset_p is updated to the offset in state->sg[retval] of the first + * Map up to ib_sg->dma_nents elements of state->sg where ib_sg->offset + * is the offset where to start in the first element. + * ib_sg->offset is updated to the offset in state->sg[retval] of the first * byte that has not yet been mapped. */ static int srp_map_finish_fr(struct srp_map_state *state, struct srp_request *req, - struct srp_rdma_ch *ch, int sg_nents, - unsigned int *sg_offset_p) + struct srp_rdma_ch *ch, + struct ib_scatterlist *ib_sg) { struct srp_target_port *target = ch->target; struct srp_device *dev = target->srp_host->srp_dev; @@ -1537,14 +1537,11 @@ static int srp_map_finish_fr(struct srp_map_state *state, WARN_ON_ONCE(!dev->use_fast_reg); - if (sg_nents == 1 && target->global_rkey) { - unsigned int sg_offset = sg_offset_p ? *sg_offset_p : 0; - - srp_map_desc(state, sg_dma_address(state->sg) + sg_offset, - sg_dma_len(state->sg) - sg_offset, + if (ib_sg->dma_nents == 1 && target->global_rkey) { + srp_map_desc(state, sg_dma_address(state->sg) + ib_sg->offset, + sg_dma_len(state->sg) - ib_sg->offset, target->global_rkey); - if (sg_offset_p) - *sg_offset_p = 0; + ib_sg->offset = 0; return 1; } @@ -1555,13 +1552,12 @@ static int srp_map_finish_fr(struct srp_map_state *state, rkey = ib_inc_rkey(desc->mr->rkey); ib_update_fast_reg_key(desc->mr, rkey); - n = ib_map_mr_sg(desc->mr, state->sg, sg_nents, sg_offset_p, - dev->mr_page_size); + n = ib_map_mr_sg(desc->mr, ib_sg, dev->mr_page_size); if (unlikely(n < 0)) { srp_fr_pool_put(ch->fr_pool, &desc, 1); pr_debug("%s: ib_map_mr_sg(%d, %d) returned %d.\n", - dev_name(&req->scmnd->device->sdev_gendev), sg_nents, - sg_offset_p ? 
*sg_offset_p : -1, n); + dev_name(&req->scmnd->device->sdev_gendev), + ib_sg->dma_nents, ib_sg->offset, n); return n; } @@ -1668,8 +1664,9 @@ static int srp_map_sg_fr(struct srp_map_state *state, struct srp_rdma_ch *ch, struct srp_request *req, struct scatterlist *scat, int count) { - unsigned int sg_offset = 0; + struct ib_scatterlist ib_sg; + ib_sg.offset = 0; state->fr.next = req->fr_list; state->fr.end = req->fr_list + ch->target->mr_per_cmd; state->sg = scat; @@ -1680,7 +1677,9 @@ static int srp_map_sg_fr(struct srp_map_state *state, struct srp_rdma_ch *ch, while (count) { int i, n; - n = srp_map_finish_fr(state, req, ch, count, &sg_offset); + ib_sg.sg = state->sg; + ib_sg.dma_nents = count; + n = srp_map_finish_fr(state, req, ch, &ib_sg); if (unlikely(n < 0)) return n; @@ -1727,6 +1726,7 @@ static int srp_map_idb(struct srp_rdma_ch *ch, struct srp_request *req, struct srp_direct_buf idb_desc; u64 idb_pages[1]; struct scatterlist idb_sg[1]; + struct ib_scatterlist ib_sg; int ret; memset(&state, 0, sizeof(state)); @@ -1744,7 +1744,10 @@ static int srp_map_idb(struct srp_rdma_ch *ch, struct srp_request *req, #ifdef CONFIG_NEED_SG_DMA_LENGTH idb_sg->dma_length = idb_sg->length; /* hack^2 */ #endif - ret = srp_map_finish_fr(&state, req, ch, 1, NULL); + ib_sg.sg = state.sg; + ib_sg.dma_nents = 1; + ib_sg.offset = 0; + ret = srp_map_finish_fr(&state, req, ch, &ib_sg); if (ret < 0) return ret; WARN_ON_ONCE(ret < 1); diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c index 52abc3a6de12..f92bf69d2e2c 100644 --- a/drivers/nvme/host/rdma.c +++ b/drivers/nvme/host/rdma.c @@ -1227,6 +1227,7 @@ static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue, int count) { struct nvme_keyed_sgl_desc *sg = &c->common.dptr.ksgl; + struct ib_scatterlist ib_sg; int nr; req->mr = ib_mr_pool_get(queue->qp, &queue->qp->rdma_mrs); @@ -1237,7 +1238,10 @@ static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue, * Align the MR to a 4K page size to match the ctrl page size and * the block virtual boundary. 
*/ - nr = ib_map_mr_sg(req->mr, req->sg_table.sgl, count, NULL, SZ_4K); + ib_sg.sg = req->sg_table.sgl; + ib_sg.dma_nents = count; + ib_sg.offset = 0; + nr = ib_map_mr_sg(req->mr, &ib_sg, SZ_4K); if (unlikely(nr < count)) { ib_mr_pool_put(queue->qp, &queue->qp->rdma_mrs, req->mr); req->mr = NULL; diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c index a568dac7b3a1..2b00a4309fcf 100644 --- a/fs/cifs/smbdirect.c +++ b/fs/cifs/smbdirect.c @@ -2489,6 +2489,7 @@ struct smbd_mr *smbd_register_mr( int rc, i; enum dma_data_direction dir; struct ib_reg_wr *reg_wr; + struct ib_scatterlist ib_sg; if (num_pages > info->max_frmr_depth) { log_rdma_mr(ERR, "num_pages=%d max_frmr_depth=%d\n", @@ -2534,8 +2535,10 @@ struct smbd_mr *smbd_register_mr( goto dma_map_error; } - rc = ib_map_mr_sg(smbdirect_mr->mr, smbdirect_mr->sgl, num_pages, - NULL, PAGE_SIZE); + ib_sg.sg = smbdirect_mr->sgl; + ib_sg.dma_nents = num_pages; + ib_sg.offset = 0; + rc = ib_map_mr_sg(smbdirect_mr->mr, &ib_sg, PAGE_SIZE); if (rc != num_pages) { log_rdma_mr(ERR, "ib_map_mr_sg failed rc = %d num_pages = %x\n", diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 8edba47f272e..b8081ca07e63 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -2157,6 +2157,18 @@ struct ib_counters_read_attr { u32 flags; /* use enum ib_read_counters_flags */ }; +/* + * struct ib_scatterlist - Mapped scatterlist for RDMA operations + * @sg: dma mapped sg list + * @dma_nents: returned by dma_map_sg + * @offset: start offset in bytes into the first sg element + */ +struct ib_scatterlist { + struct scatterlist *sg; + int dma_nents; + unsigned int offset; +}; + struct uverbs_attr_bundle; /** @@ -2317,8 +2329,7 @@ struct ib_device_ops { enum ib_uverbs_advise_mr_advice advice, u32 flags, struct ib_sge *sg_list, u32 num_sge, struct uverbs_attr_bundle *attrs); - int (*map_mr_sg)(struct ib_mr *mr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset); + int (*map_mr_sg)(struct ib_mr *mr, struct ib_scatterlist *ib_sg); int (*check_mr_status)(struct ib_mr *mr, u32 check_mask, struct ib_mr_status *mr_status); struct ib_mw *(*alloc_mw)(struct ib_pd *pd, enum ib_mw_type type, @@ -3861,23 +3872,23 @@ struct ib_rwq_ind_table *ib_create_rwq_ind_table(struct ib_device *device, wq_ind_table_init_attr); int ib_destroy_rwq_ind_table(struct ib_rwq_ind_table *wq_ind_table); -int ib_map_mr_sg(struct ib_mr *mr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset, unsigned int page_size); +int ib_map_mr_sg(struct ib_mr *mr, struct ib_scatterlist *ib_sg, + unsigned int page_size); static inline int -ib_map_mr_sg_zbva(struct ib_mr *mr, struct scatterlist *sg, int sg_nents, - unsigned int *sg_offset, unsigned int page_size) +ib_map_mr_sg_zbva(struct ib_mr *mr, struct ib_scatterlist *ib_sg, + unsigned int page_size) { int n; - n = ib_map_mr_sg(mr, sg, sg_nents, sg_offset, page_size); + n = ib_map_mr_sg(mr, ib_sg, page_size); mr->iova = 0; return n; } -int ib_sg_to_pages(struct ib_mr *mr, struct scatterlist *sgl, int sg_nents, - unsigned int *sg_offset, int (*set_page)(struct ib_mr *, u64)); +int ib_sg_to_pages(struct ib_mr *mr, struct ib_scatterlist *ib_sgl, + int (*set_page)(struct ib_mr *, u64)); void ib_drain_rq(struct ib_qp *qp); void ib_drain_sq(struct ib_qp *qp); diff --git a/net/rds/ib_frmr.c b/net/rds/ib_frmr.c index 6431a023ac89..abf6705617df 100644 --- a/net/rds/ib_frmr.c +++ b/net/rds/ib_frmr.c @@ -104,15 +104,18 @@ static int rds_ib_post_reg_frmr(struct rds_ib_mr *ibmr) { struct rds_ib_frmr *frmr = &ibmr->u.frmr; 
struct ib_reg_wr reg_wr; - int ret, off = 0; + struct ib_scatterlist ib_sg; + int ret; while (atomic_dec_return(&ibmr->ic->i_fastreg_wrs) <= 0) { atomic_inc(&ibmr->ic->i_fastreg_wrs); cpu_relax(); } - ret = ib_map_mr_sg_zbva(frmr->mr, ibmr->sg, ibmr->sg_len, - &off, PAGE_SIZE); + ib_sg.sg = ibmr->sg; + ib_sg.dma_nents = ibmr->sg_len; + ib_sg.offset = 0; + ret = ib_map_mr_sg_zbva(frmr->mr, &ib_sg, PAGE_SIZE); if (unlikely(ret != ibmr->sg_len)) return ret < 0 ? ret : -EINVAL; diff --git a/net/smc/smc_ib.c b/net/smc/smc_ib.c index e519ef29c0ff..256449f8254e 100644 --- a/net/smc/smc_ib.c +++ b/net/smc/smc_ib.c @@ -353,14 +353,15 @@ void smc_ib_put_memory_region(struct ib_mr *mr) static int smc_ib_map_mr_sg(struct smc_buf_desc *buf_slot) { - unsigned int offset = 0; int sg_num; + struct ib_scatterlist ib_sg; + ib_sg.sg = buf_slot->sgt[SMC_SINGLE_LINK].sgl; + ib_sg.dma_nents = buf_slot->sgt[SMC_SINGLE_LINK].orig_nents; + ib_sg.offset = 0; /* map the largest prefix of a dma mapped SG list */ sg_num = ib_map_mr_sg(buf_slot->mr_rx[SMC_SINGLE_LINK], - buf_slot->sgt[SMC_SINGLE_LINK].sgl, - buf_slot->sgt[SMC_SINGLE_LINK].orig_nents, - &offset, PAGE_SIZE); + &ib_sg, PAGE_SIZE); return sg_num; } diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c index 6a561056b538..954fd929c17b 100644 --- a/net/sunrpc/xprtrdma/frwr_ops.c +++ b/net/sunrpc/xprtrdma/frwr_ops.c @@ -400,6 +400,7 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt *r_xprt, struct rpcrdma_mr *mr; struct ib_mr *ibmr; struct ib_reg_wr *reg_wr; + struct ib_scatterlist ib_sg; int i, n; u8 key; @@ -441,7 +442,10 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt *r_xprt, goto out_dmamap_err; ibmr = frwr->fr_mr; - n = ib_map_mr_sg(ibmr, mr->mr_sg, mr->mr_nents, NULL, PAGE_SIZE); + ib_sg.sg = mr->mr_sg; + ib_sg.dma_nents = mr->mr_nents; + ib_sg.offset = 0; + n = ib_map_mr_sg(ibmr, &ib_sg, PAGE_SIZE); if (unlikely(n != mr->mr_nents)) goto out_mapmr_err;
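When one MR cannot cover the whole list, ib_map_mr_sg() now records the resume point in ib_sg.offset, per the ib_sg_to_pages() comment earlier in this patch. A sketch of the chunked-mapping pattern, modeled loosely on the srp_map_sg_fr() conversion above; all names are hypothetical:

	/*
	 * Sketch only: map a long sg list with several MRs. After each call,
	 * ib_sg.offset holds the offset of the first unmapped byte in element
	 * n (the return value), so the next iteration resumes exactly there.
	 */
	static int example_map_in_chunks(struct ib_mr *mrs[], int nr_mrs,
					 struct scatterlist *sgl, int count)
	{
		struct ib_scatterlist ib_sg = { .offset = 0 };
		struct scatterlist *sg = sgl;
		int i, j, n;

		for (i = 0; i < nr_mrs && count > 0; i++) {
			ib_sg.sg = sg;
			ib_sg.dma_nents = count;
			n = ib_map_mr_sg(mrs[i], &ib_sg, PAGE_SIZE);
			if (n < 0)
				return n;
			for (j = 0; j < n; j++)	/* skip fully mapped elements */
				sg = sg_next(sg);
			count -= n;
		}
		return count > 0 ? -ENOMEM : 0;	/* ran out of MRs */
	}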
2019 19:33:32 +0200 Received: from r-vnc08.mtr.labs.mlnx (r-vnc08.mtr.labs.mlnx [10.208.0.121]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id x17HXVKY025778; Thu, 7 Feb 2019 19:33:32 +0200 From: Max Gurtovoy To: linux-rdma@vger.kernel.org, sagi@grimberg.me, hch@lst.de, leon@kernel.org, jgg@mellanox.com, dledford@redhat.com, bvanassche@acm.org Cc: maxg@mellanox.com, israelr@mellanox.com, idanb@mellanox.com, vladimirk@mellanox.com, shlomin@mellanox.com, oren@mellanox.com Subject: [PATCH 05/17] IB/iser: Embed ib_scatterlist into iser_data_buf Date: Thu, 7 Feb 2019 19:33:19 +0200 Message-Id: <1549560811-8655-6-git-send-email-maxg@mellanox.com> X-Mailer: git-send-email 1.7.8.2 In-Reply-To: <1549560811-8655-1-git-send-email-maxg@mellanox.com> References: <1549560811-8655-1-git-send-email-maxg@mellanox.com> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Israel Rukshin Use it to save the details needed to describe the RDMA operation. Signed-off-by: Israel Rukshin Reviewed-by: Max Gurtovoy --- drivers/infiniband/ulp/iser/iscsi_iser.h | 14 +++++------- drivers/infiniband/ulp/iser/iser_initiator.c | 6 +++-- drivers/infiniband/ulp/iser/iser_memory.c | 33 +++++++++++----------------- 3 files changed, 23 insertions(+), 30 deletions(-) diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.h b/drivers/infiniband/ulp/iser/iscsi_iser.h index 120b40829560..2fc79ea897a3 100644 --- a/drivers/infiniband/ulp/iser/iscsi_iser.h +++ b/drivers/infiniband/ulp/iser/iscsi_iser.h @@ -188,16 +188,14 @@ enum iser_data_dir { /** * struct iser_data_buf - iSER data buffer * - * @sg: pointer to the sg list - * @size: num entries of this sg - * @data_len: total beffer byte len - * @dma_nents: returned by dma_map_sg + * @ib_sg: ib scatterlist + * @size: num entries of the unmapped sg list + * @data_len: total buffer byte len */ struct iser_data_buf { - struct scatterlist *sg; - int size; - unsigned long data_len; - unsigned int dma_nents; + struct ib_scatterlist ib_sg; + int size; + unsigned long data_len; }; /* fwd declarations */ diff --git a/drivers/infiniband/ulp/iser/iser_initiator.c b/drivers/infiniband/ulp/iser/iser_initiator.c index 96af06cfe0af..9556ec55dec2 100644 --- a/drivers/infiniband/ulp/iser/iser_initiator.c +++ b/drivers/infiniband/ulp/iser/iser_initiator.c @@ -388,16 +388,18 @@ int iser_send_command(struct iscsi_conn *conn, } if (scsi_sg_count(sc)) { /* using a scatter list */ - data_buf->sg = scsi_sglist(sc); + data_buf->ib_sg.sg = scsi_sglist(sc); data_buf->size = scsi_sg_count(sc); } data_buf->data_len = scsi_bufflen(sc); + data_buf->ib_sg.offset = 0; if (scsi_prot_sg_count(sc)) { - prot_buf->sg = scsi_prot_sglist(sc); + prot_buf->ib_sg.sg = scsi_prot_sglist(sc); prot_buf->size = scsi_prot_sg_count(sc); prot_buf->data_len = (data_buf->data_len >> ilog2(sc->device->sector_size)) * 8; + prot_buf->ib_sg.offset = 0; } if (hdr->flags & ISCSI_FLAG_CMD_READ) { diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c index da3f6f7cbc72..8fd3df94ec4b 100644 --- a/drivers/infiniband/ulp/iser/iser_memory.c +++ b/drivers/infiniband/ulp/iser/iser_memory.c @@ -142,7 +142,7 @@ static void iser_data_buf_dump(struct iser_data_buf *data, struct scatterlist *sg; int i; - for_each_sg(data->sg, sg, data->dma_nents, i) + for_each_sg(data->ib_sg.sg, sg, data->ib_sg.dma_nents, i) iser_dbg("sg[%d] dma_addr:0x%lX page:0x%p " "off:0x%x sz:0x%x dma_len:0x%x\n", i, (unsigned
long)ib_sg_dma_address(ibdev, sg), @@ -170,8 +170,9 @@ int iser_dma_map_task_data(struct iscsi_iser_task *iser_task, iser_task->dir[iser_dir] = 1; dev = iser_task->iser_conn->ib_conn.device->ib_device; - data->dma_nents = ib_dma_map_sg(dev, data->sg, data->size, dma_dir); - if (data->dma_nents == 0) { + data->ib_sg.dma_nents = ib_dma_map_sg(dev, data->ib_sg.sg, data->size, + dma_dir); + if (data->ib_sg.dma_nents == 0) { iser_err("dma_map_sg failed!!!\n"); return -EINVAL; } @@ -185,14 +186,14 @@ void iser_dma_unmap_task_data(struct iscsi_iser_task *iser_task, struct ib_device *dev; dev = iser_task->iser_conn->ib_conn.device->ib_device; - ib_dma_unmap_sg(dev, data->sg, data->size, dir); + ib_dma_unmap_sg(dev, data->ib_sg.sg, data->size, dir); } static int iser_reg_dma(struct iser_device *device, struct iser_data_buf *mem, struct iser_mem_reg *reg) { - struct scatterlist *sg = mem->sg; + struct scatterlist *sg = mem->ib_sg.sg; reg->sge.lkey = device->pd->local_dma_lkey; /* @@ -235,16 +236,12 @@ int iser_fast_reg_fmr(struct iscsi_iser_task *iser_task, struct iser_page_vec *page_vec = rsc->page_vec; struct ib_fmr_pool *fmr_pool = rsc->fmr_pool; struct ib_pool_fmr *fmr; - struct ib_scatterlist ib_sg; int ret, plen; page_vec->npages = 0; page_vec->fake_mr.page_size = SIZE_4K; - ib_sg.sg = mem->sg; - ib_sg.dma_nents = mem->dma_nents; - ib_sg.offset = 0; - plen = ib_sg_to_pages(&page_vec->fake_mr, &ib_sg, iser_set_page); - if (unlikely(plen < mem->dma_nents)) { + plen = ib_sg_to_pages(&page_vec->fake_mr, &mem->ib_sg, iser_set_page); + if (unlikely(plen < mem->ib_sg.dma_nents)) { iser_err("page vec too short to hold this SG\n"); iser_data_buf_dump(mem, device->ib_device); iser_dump_page_vec(page_vec); @@ -444,7 +441,6 @@ static int iser_fast_reg_mr(struct iscsi_iser_task *iser_task, struct ib_cqe *cqe = &iser_task->iser_conn->ib_conn.reg_cqe; struct ib_mr *mr = rsc->mr; struct ib_reg_wr *wr; - struct ib_scatterlist ib_sg; int n; if (rsc->mr_valid) @@ -452,13 +448,9 @@ static int iser_fast_reg_mr(struct iscsi_iser_task *iser_task, ib_update_fast_reg_key(mr, ib_inc_rkey(mr->rkey)); - ib_sg.sg = mem->sg; - ib_sg.dma_nents = mem->dma_nents; - ib_sg.offset = 0; - n = ib_map_mr_sg(mr, &ib_sg, SIZE_4K); - if (unlikely(n != mem->dma_nents)) { - iser_err("failed to map sg (%d/%d)\n", - n, mem->dma_nents); + n = ib_map_mr_sg(mr, &mem->ib_sg, SIZE_4K); + if (unlikely(n != mem->ib_sg.dma_nents)) { + iser_err("failed to map sg (%d/%d)\n", n, mem->ib_sg.dma_nents); return n < 0 ? 
n : -EINVAL; } @@ -529,7 +521,8 @@ int iser_reg_rdma_mem(struct iscsi_iser_task *task, bool use_dma_key; int err; - use_dma_key = mem->dma_nents == 1 && (all_imm || !iser_always_reg) && + use_dma_key = mem->ib_sg.dma_nents == 1 && + (all_imm || !iser_always_reg) && scsi_get_prot_op(task->sc) == SCSI_PROT_NORMAL; if (!use_dma_key) { From patchwork Thu Feb 7 17:33:20 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Max Gurtovoy X-Patchwork-Id: 10801641 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8727114E1 for ; Thu, 7 Feb 2019 17:33:49 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 728002E34D for ; Thu, 7 Feb 2019 17:33:49 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 66B2E2E38D; Thu, 7 Feb 2019 17:33:49 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI,UNPARSEABLE_RELAY autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 84ACB2E34D for ; Thu, 7 Feb 2019 17:33:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726747AbfBGRdk (ORCPT ); Thu, 7 Feb 2019 12:33:40 -0500 Received: from mail-il-dmz.mellanox.com ([193.47.165.129]:49094 "EHLO mellanox.co.il" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726832AbfBGRdi (ORCPT ); Thu, 7 Feb 2019 12:33:38 -0500 Received: from Internal Mail-Server by MTLPINE1 (envelope-from maxg@mellanox.com) with ESMTPS (AES256-SHA encrypted); 7 Feb 2019 19:33:32 +0200 Received: from r-vnc08.mtr.labs.mlnx (r-vnc08.mtr.labs.mlnx [10.208.0.121]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id x17HXVKZ025778; Thu, 7 Feb 2019 19:33:32 +0200 From: Max Gurtovoy To: linux-rdma@vger.kernel.org, sagi@grimberg.me, hch@lst.de, leon@kernel.org, jgg@mellanox.com, dledford@redhat.com, bvanassche@acm.org Cc: maxg@mellanox.com, israelr@mellanox.com, idanb@mellanox.com, vladimirk@mellanox.com, shlomin@mellanox.com, oren@mellanox.com Subject: [PATCH 06/17] IB/srp: Embed ib_scatterlist into srp_map_state struct Date: Thu, 7 Feb 2019 19:33:20 +0200 Message-Id: <1549560811-8655-7-git-send-email-maxg@mellanox.com> X-Mailer: git-send-email 1.7.8.2 In-Reply-To: <1549560811-8655-1-git-send-email-maxg@mellanox.com> References: <1549560811-8655-1-git-send-email-maxg@mellanox.com> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Israel Rukshin Use it to save the details needed to describe the RDMA operation.
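For review context, a minimal before/after sketch of the calling convention this series converges on; "sgl" and "nents" are illustrative names, not taken from this patch:

	/* before this series */
	n = ib_map_mr_sg(mr, sgl, nents, NULL, PAGE_SIZE);

	/* after: the DMA-mapped list travels as a single descriptor */
	struct ib_scatterlist ib_sg = {
		.sg = sgl,		/* already DMA mapped */
		.dma_nents = nents,	/* as returned by dma_map_sg() */
		.offset = 0,
	};

	n = ib_map_mr_sg(mr, &ib_sg, PAGE_SIZE);

Embedding the descriptor in per-request state, as done here for srp_map_state, avoids rebuilding it on every registration.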
Signed-off-by: Israel Rukshin Reviewed-by: Max Gurtovoy --- drivers/infiniband/ulp/srp/ib_srp.c | 39 ++++++++++++++++--------------------- drivers/infiniband/ulp/srp/ib_srp.h | 2 +- 2 files changed, 18 insertions(+), 23 deletions(-) diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c index 1ce1619990d5..9c7092daaa2d 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.c +++ b/drivers/infiniband/ulp/srp/ib_srp.c @@ -1511,20 +1511,20 @@ static void srp_reg_mr_err_done(struct ib_cq *cq, struct ib_wc *wc) } /* - * Map up to ib_sg->dma_nents elements of state->sg where ib_sg->offset - * is the offset where to start in the first element. - * ib_sg->offset is updated to the offset in state->sg[retval] of the first - * byte that has not yet been mapped. + * Map up to state->ib_sg.dma_nents elements of state->ib_sg.sg where + * state->ib_sg.offset is the offset where to start in the first element. + * state->ib_sg.offset is updated to the offset in state->ib_sg.sg[retval] of + * the first byte that has not yet been mapped. */ static int srp_map_finish_fr(struct srp_map_state *state, struct srp_request *req, - struct srp_rdma_ch *ch, - struct ib_scatterlist *ib_sg) + struct srp_rdma_ch *ch) { struct srp_target_port *target = ch->target; struct srp_device *dev = target->srp_host->srp_dev; struct ib_reg_wr wr; struct srp_fr_desc *desc; + struct ib_scatterlist *ib_sg = &state->ib_sg; u32 rkey; int n, err; @@ -1538,8 +1538,8 @@ static int srp_map_finish_fr(struct srp_map_state *state, WARN_ON_ONCE(!dev->use_fast_reg); if (ib_sg->dma_nents == 1 && target->global_rkey) { - srp_map_desc(state, sg_dma_address(state->sg) + ib_sg->offset, - sg_dma_len(state->sg) - ib_sg->offset, + srp_map_desc(state, sg_dma_address(ib_sg->sg) + ib_sg->offset, + sg_dma_len(ib_sg->sg) - ib_sg->offset, target->global_rkey); ib_sg->offset = 0; return 1; @@ -1664,12 +1664,10 @@ static int srp_map_sg_fr(struct srp_map_state *state, struct srp_rdma_ch *ch, struct srp_request *req, struct scatterlist *scat, int count) { - struct ib_scatterlist ib_sg; - - ib_sg.offset = 0; state->fr.next = req->fr_list; state->fr.end = req->fr_list + ch->target->mr_per_cmd; - state->sg = scat; + state->ib_sg.sg = scat; + state->ib_sg.offset = 0; if (count == 0) return 0; @@ -1677,15 +1675,14 @@ static int srp_map_sg_fr(struct srp_map_state *state, struct srp_rdma_ch *ch, while (count) { int i, n; - ib_sg.sg = state->sg; - ib_sg.dma_nents = count; - n = srp_map_finish_fr(state, req, ch, &ib_sg); + state->ib_sg.dma_nents = count; + n = srp_map_finish_fr(state, req, ch); if (unlikely(n < 0)) return n; count -= n; for (i = 0; i < n; i++) - state->sg = sg_next(state->sg); + state->ib_sg.sg = sg_next(state->ib_sg.sg); } return 0; @@ -1726,7 +1723,6 @@ static int srp_map_idb(struct srp_rdma_ch *ch, struct srp_request *req, struct srp_direct_buf idb_desc; u64 idb_pages[1]; struct scatterlist idb_sg[1]; - struct ib_scatterlist ib_sg; int ret; memset(&state, 0, sizeof(state)); @@ -1738,16 +1734,15 @@ static int srp_map_idb(struct srp_rdma_ch *ch, struct srp_request *req, state.dma_len = idb_len; if (dev->use_fast_reg) { - state.sg = idb_sg; + state.ib_sg.sg = idb_sg; sg_init_one(idb_sg, req->indirect_desc, idb_len); idb_sg->dma_address = req->indirect_dma_addr; /* hack! 
*/ #ifdef CONFIG_NEED_SG_DMA_LENGTH idb_sg->dma_length = idb_sg->length; /* hack^2 */ #endif - ib_sg.sg = state.sg; - ib_sg.dma_nents = 1; - ib_sg.offset = 0; - ret = srp_map_finish_fr(&state, req, ch, &ib_sg); + state.ib_sg.dma_nents = 1; + state.ib_sg.offset = 0; + ret = srp_map_finish_fr(&state, req, ch); if (ret < 0) return ret; WARN_ON_ONCE(ret < 1); diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h index b2861cd2087a..30f6174dc23e 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.h +++ b/drivers/infiniband/ulp/srp/ib_srp.h @@ -340,7 +340,7 @@ struct srp_map_state { struct srp_direct_buf *desc; union { u64 *pages; - struct scatterlist *sg; + struct ib_scatterlist ib_sg; }; dma_addr_t base_dma_addr; u32 dma_len; From patchwork Thu Feb 7 17:33:21 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Max Gurtovoy X-Patchwork-Id: 10801623 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id ED9B617FB for ; Thu, 7 Feb 2019 17:33:42 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DAA002E34D for ; Thu, 7 Feb 2019 17:33:42 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id CF0372E38B; Thu, 7 Feb 2019 17:33:42 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI,UNPARSEABLE_RELAY autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6287E2E34D for ; Thu, 7 Feb 2019 17:33:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726897AbfBGRdl (ORCPT ); Thu, 7 Feb 2019 12:33:41 -0500 Received: from mail-il-dmz.mellanox.com ([193.47.165.129]:49129 "EHLO mellanox.co.il" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726850AbfBGRdi (ORCPT ); Thu, 7 Feb 2019 12:33:38 -0500 Received: from Internal Mail-Server by MTLPINE1 (envelope-from maxg@mellanox.com) with ESMTPS (AES256-SHA encrypted); 7 Feb 2019 19:33:33 +0200 Received: from r-vnc08.mtr.labs.mlnx (r-vnc08.mtr.labs.mlnx [10.208.0.121]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id x17HXVKa025778; Thu, 7 Feb 2019 19:33:32 +0200 From: Max Gurtovoy To: linux-rdma@vger.kernel.org, sagi@grimberg.me, hch@lst.de, leon@kernel.org, jgg@mellanox.com, dledford@redhat.com, bvanassche@acm.org Cc: maxg@mellanox.com, israelr@mellanox.com, idanb@mellanox.com, vladimirk@mellanox.com, shlomin@mellanox.com, oren@mellanox.com Subject: [PATCH 07/17] RDMA/core: Introduce ib_map_mr_sg_pi to map data/protection sgl's Date: Thu, 7 Feb 2019 19:33:21 +0200 Message-Id: <1549560811-8655-8-git-send-email-maxg@mellanox.com> X-Mailer: git-send-email 1.7.8.2 In-Reply-To: <1549560811-8655-1-git-send-email-maxg@mellanox.com> References: <1549560811-8655-1-git-send-email-maxg@mellanox.com> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP This function will map the previously dma mapped SG lists for PI (protection information) and data to an appropriate memory region for future registration. The given MR must be allocated as IB_MR_TYPE_PI. 
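For review context, a minimal caller sketch under the new API. It assumes an MR allocated with ib_alloc_mr_integrity() and both SG lists already DMA mapped; the req->... field names and the SZ_4K page size are illustrative assumptions, not part of this patch:

	struct ib_scatterlist data_sg = {
		.sg = req->data_sgl,
		.dma_nents = req->data_nents,
		.offset = 0,
	};
	struct ib_scatterlist meta_sg = {
		.sg = req->prot_sgl,
		.dma_nents = req->prot_nents,
		.offset = 0,
	};
	int n;

	/* one call maps both data and protection into the IB_MR_TYPE_PI MR */
	n = ib_map_mr_sg_pi(mr, &data_sg, &meta_sg, SZ_4K);
	if (unlikely(n < req->data_nents + req->prot_nents))
		return n < 0 ? n : -EINVAL;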
Signed-off-by: Max Gurtovoy Signed-off-by: Israel Rukshin --- drivers/infiniband/core/device.c | 1 + drivers/infiniband/core/verbs.c | 32 +++++++++++++++++++++++++++++++- include/rdma/ib_verbs.h | 6 ++++++ 3 files changed, 38 insertions(+), 1 deletion(-) diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c index bc4c7c5c305b..4c70c2f5a95b 100644 --- a/drivers/infiniband/core/device.c +++ b/drivers/infiniband/core/device.c @@ -1292,6 +1292,7 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops) SET_DEVICE_OP(dev_ops, get_vf_config); SET_DEVICE_OP(dev_ops, get_vf_stats); SET_DEVICE_OP(dev_ops, map_mr_sg); + SET_DEVICE_OP(dev_ops, map_mr_sg_pi); SET_DEVICE_OP(dev_ops, map_phys_fmr); SET_DEVICE_OP(dev_ops, mmap); SET_DEVICE_OP(dev_ops, modify_ah); diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index 08aa6715fa12..8e36e451df05 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -2020,7 +2020,8 @@ struct ib_mr *ib_alloc_mr_integrity(struct ib_pd *pd, { struct ib_mr *mr; - if (!pd->device->ops.alloc_mr_integrity) + if (!pd->device->ops.alloc_mr_integrity || + !pd->device->ops.map_mr_sg_pi) return ERR_PTR(-EOPNOTSUPP); if (!max_num_meta_sg) @@ -2402,6 +2403,35 @@ int ib_set_vf_guid(struct ib_device *device, int vf, u8 port, u64 guid, } EXPORT_SYMBOL(ib_set_vf_guid); +/** + * ib_map_mr_sg_pi() - Map the dma mapped SG lists for PI (protection + * information) and set an appropriate memory region for registration. + * @mr: memory region + * @data_ib_sg: dma mapped scatterlist for data + * @meta_ib_sg: dma mapped scatterlist for metadata + * @page_size: page vector desired page size + * + * Constraints: + * - The MR must be allocated with type IB_MR_TYPE_PI. + * + * Returns the number of sg elements that were mapped to the memory region. + * + * After this completes successfully, the memory region + * is ready for registration. + */ +int ib_map_mr_sg_pi(struct ib_mr *mr, struct ib_scatterlist *data_ib_sg, + struct ib_scatterlist *meta_ib_sg, unsigned int page_size) +{ + if (unlikely(!mr->device->ops.map_mr_sg_pi || + WARN_ON_ONCE(mr->type != IB_MR_TYPE_PI))) + return -EOPNOTSUPP; + + mr->page_size = page_size; + + return mr->device->ops.map_mr_sg_pi(mr, data_ib_sg, meta_ib_sg); +} +EXPORT_SYMBOL(ib_map_mr_sg_pi); + /** * ib_map_mr_sg() - Map the largest prefix of a dma mapped SG list * and set it the memory region. diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index b8081ca07e63..134d13ab39e5 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -2393,6 +2393,10 @@ struct ib_device_ops { int (*read_counters)(struct ib_counters *counters, struct ib_counters_read_attr *counters_read_attr, struct uverbs_attr_bundle *attrs); + int (*map_mr_sg_pi)(struct ib_mr *mr, + struct ib_scatterlist *data_ib_sg, + struct ib_scatterlist *meta_ib_sg); + /** * alloc_hw_stats - Allocate a struct rdma_hw_stats and fill in the * driver initialized data. 
The struct is kfree()'ed by the sysfs @@ -3874,6 +3878,8 @@ int ib_destroy_rwq_ind_table(struct ib_rwq_ind_table *wq_ind_table); int ib_map_mr_sg(struct ib_mr *mr, struct ib_scatterlist *ib_sg, unsigned int page_size); +int ib_map_mr_sg_pi(struct ib_mr *mr, struct ib_scatterlist *data_ib_sg, + struct ib_scatterlist *meta_ib_sg, unsigned int page_size); static inline int ib_map_mr_sg_zbva(struct ib_mr *mr, struct ib_scatterlist *ib_sg, From patchwork Thu Feb 7 17:33:22 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Max Gurtovoy X-Patchwork-Id: 10801627 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C6E7517FB for ; Thu, 7 Feb 2019 17:33:43 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B2F122E34D for ; Thu, 7 Feb 2019 17:33:43 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A6A472E358; Thu, 7 Feb 2019 17:33:43 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI,UNPARSEABLE_RELAY autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 474AF2E38B for ; Thu, 7 Feb 2019 17:33:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726442AbfBGRdm (ORCPT ); Thu, 7 Feb 2019 12:33:42 -0500 Received: from mail-il-dmz.mellanox.com ([193.47.165.129]:49060 "EHLO mellanox.co.il" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726882AbfBGRdi (ORCPT ); Thu, 7 Feb 2019 12:33:38 -0500 Received: from Internal Mail-Server by MTLPINE1 (envelope-from maxg@mellanox.com) with ESMTPS (AES256-SHA encrypted); 7 Feb 2019 19:33:32 +0200 Received: from r-vnc08.mtr.labs.mlnx (r-vnc08.mtr.labs.mlnx [10.208.0.121]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id x17HXVKb025778; Thu, 7 Feb 2019 19:33:32 +0200 From: Max Gurtovoy To: linux-rdma@vger.kernel.org, sagi@grimberg.me, hch@lst.de, leon@kernel.org, jgg@mellanox.com, dledford@redhat.com, bvanassche@acm.org Cc: maxg@mellanox.com, israelr@mellanox.com, idanb@mellanox.com, vladimirk@mellanox.com, shlomin@mellanox.com, oren@mellanox.com Subject: [PATCH 08/17] RDMA/core: Add signature attrs element for ib_mr structure Date: Thu, 7 Feb 2019 19:33:22 +0200 Message-Id: <1549560811-8655-9-git-send-email-maxg@mellanox.com> X-Mailer: git-send-email 1.7.8.2 In-Reply-To: <1549560811-8655-1-git-send-email-maxg@mellanox.com> References: <1549560811-8655-1-git-send-email-maxg@mellanox.com> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP This element will describe the needed characteristics for the signature operation per signature enabled memory region (type IB_MR_TYPE_PI). 
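For review context, a hypothetical sketch of how a ULP might fill the new per-MR attributes after allocation; the DIF field values are illustrative assumptions, not mandated by this patch:

	struct ib_mr *mr;
	struct ib_sig_attrs *sig;

	mr = ib_alloc_mr_integrity(pd, max_data_sg, max_meta_sg);
	if (IS_ERR(mr))
		return PTR_ERR(mr);

	/* sig_attrs is allocated by the core and freed in ib_dereg_mr() */
	sig = mr->sig_attrs;
	sig->mem.sig_type = IB_SIG_TYPE_NONE;		/* no PI in local memory */
	sig->wire.sig_type = IB_SIG_TYPE_T10_DIF;	/* generate DIF on the wire */
	sig->wire.sig.dif.bg_type = IB_T10DIF_CRC;
	sig->wire.sig.dif.pi_interval = 512;
	sig->wire.sig.dif.ref_tag = 0;			/* e.g. starting LBA */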
Signed-off-by: Max Gurtovoy Signed-off-by: Israel Rukshin Reviewed-by: Sagi Grimberg --- drivers/infiniband/core/uverbs_cmd.c | 1 + drivers/infiniband/core/verbs.c | 13 ++++++++++++- include/rdma/ib_verbs.h | 2 +- 3 files changed, 14 insertions(+), 2 deletions(-) diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c index 72c5a8daf558..699fbcf68d06 100644 --- a/drivers/infiniband/core/uverbs_cmd.c +++ b/drivers/infiniband/core/uverbs_cmd.c @@ -739,6 +739,7 @@ static int ib_uverbs_reg_mr(struct uverbs_attr_bundle *attrs) mr->pd = pd; mr->type = IB_MR_TYPE_MEM_REG; mr->dm = NULL; + mr->sig_attrs = NULL; mr->uobject = uobj; atomic_inc(&pd->usecnt); mr->res.type = RDMA_RESTRACK_MR; diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index 8e36e451df05..1503e9dfc326 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -1947,6 +1947,7 @@ int ib_dereg_mr(struct ib_mr *mr) { struct ib_pd *pd = mr->pd; struct ib_dm *dm = mr->dm; + struct ib_sig_attrs *sig_attrs = mr->sig_attrs; int ret; rdma_restrack_del(&mr->res); @@ -1955,6 +1956,7 @@ int ib_dereg_mr(struct ib_mr *mr) atomic_dec(&pd->usecnt); if (dm) atomic_dec(&dm->usecnt); + kfree(sig_attrs); } return ret; @@ -1996,6 +1998,7 @@ struct ib_mr *ib_alloc_mr(struct ib_pd *pd, mr->res.type = RDMA_RESTRACK_MR; rdma_restrack_kadd(&mr->res); mr->type = mr_type; + mr->sig_attrs = NULL; } return mr; @@ -2019,6 +2022,7 @@ struct ib_mr *ib_alloc_mr_integrity(struct ib_pd *pd, u32 max_num_meta_sg) { struct ib_mr *mr; + struct ib_sig_attrs *sig_attrs; if (!pd->device->ops.alloc_mr_integrity || !pd->device->ops.map_mr_sg_pi) @@ -2027,10 +2031,16 @@ struct ib_mr *ib_alloc_mr_integrity(struct ib_pd *pd, if (!max_num_meta_sg) return ERR_PTR(-EINVAL); + sig_attrs = kzalloc(sizeof(struct ib_sig_attrs), GFP_KERNEL); + if (!sig_attrs) + return ERR_PTR(-ENOMEM); + mr = pd->device->ops.alloc_mr_integrity(pd, max_num_data_sg, max_num_meta_sg); - if (IS_ERR(mr)) + if (IS_ERR(mr)) { + kfree(sig_attrs); return mr; + } mr->device = pd->device; mr->pd = pd; @@ -2041,6 +2051,7 @@ struct ib_mr *ib_alloc_mr_integrity(struct ib_pd *pd, mr->res.type = RDMA_RESTRACK_MR; rdma_restrack_kadd(&mr->res); mr->type = IB_MR_TYPE_PI; + mr->sig_attrs = sig_attrs; return mr; } diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 134d13ab39e5..1ec36490bc6f 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -1707,7 +1707,7 @@ struct ib_mr { }; struct ib_dm *dm; - + struct ib_sig_attrs *sig_attrs; /* only for IB_MR_TYPE_PI MRs */ /* * Implementation details of the RDMA core, don't use in drivers: */ From patchwork Thu Feb 7 17:33:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Max Gurtovoy X-Patchwork-Id: 10801639 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3DD3614E1 for ; Thu, 7 Feb 2019 17:33:48 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2780A2E358 for ; Thu, 7 Feb 2019 17:33:48 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 1B8432E38B; Thu, 7 Feb 2019 17:33:48 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 
tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI,UNPARSEABLE_RELAY autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 40C922E358 for ; Thu, 7 Feb 2019 17:33:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726880AbfBGRdq (ORCPT ); Thu, 7 Feb 2019 12:33:46 -0500 Received: from mail-il-dmz.mellanox.com ([193.47.165.129]:49103 "EHLO mellanox.co.il" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726844AbfBGRdl (ORCPT ); Thu, 7 Feb 2019 12:33:41 -0500 Received: from Internal Mail-Server by MTLPINE1 (envelope-from maxg@mellanox.com) with ESMTPS (AES256-SHA encrypted); 7 Feb 2019 19:33:33 +0200 Received: from r-vnc08.mtr.labs.mlnx (r-vnc08.mtr.labs.mlnx [10.208.0.121]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id x17HXVKc025778; Thu, 7 Feb 2019 19:33:32 +0200 From: Max Gurtovoy To: linux-rdma@vger.kernel.org, sagi@grimberg.me, hch@lst.de, leon@kernel.org, jgg@mellanox.com, dledford@redhat.com, bvanassche@acm.org Cc: maxg@mellanox.com, israelr@mellanox.com, idanb@mellanox.com, vladimirk@mellanox.com, shlomin@mellanox.com, oren@mellanox.com Subject: [PATCH 09/17] RDMA/mlx5: Implement mlx5_ib_map_mr_sg_pi and mlx5_ib_alloc_mr_integrity Date: Thu, 7 Feb 2019 19:33:23 +0200 Message-Id: <1549560811-8655-10-git-send-email-maxg@mellanox.com> X-Mailer: git-send-email 1.7.8.2 In-Reply-To: <1549560811-8655-1-git-send-email-maxg@mellanox.com> References: <1549560811-8655-1-git-send-email-maxg@mellanox.com> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP mlx5_ib_map_mr_sg_pi() will map the PI and data dma mapped SG lists to the mlx5 memory region prior to the registration operation. In the new API, the mlx5 driver will allocate an internal memory region for the UMR operation to register both PI and data SG lists. The internal MR will use KLM mode in order to map 2 (possibly non-contiguous/non-aligned) SG lists using 1 memory key. In the new API, each ULP will use 1 memory region for the signature operation (instead of 3 in the old API). This memory region will have a key that will be exposed to the remote server to perform the RDMA operation. The internal memory key that maps the SG lists will stay private.
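For review convenience, a schematic of the resulting layout; this only restates the mapping described above, it is not code from the patch:

	/*
	 * pi_mr KLM table, one lkey, zero-based iova:
	 *
	 *   [0, data_length)                   -> data SG entries
	 *   [data_length, data + meta length)  -> protection SG entries
	 *
	 * so the signature setup can later address the protection
	 * blocks at (iova + data_length) through the same private key.
	 */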
Signed-off-by: Max Gurtovoy Signed-off-by: Israel Rukshin --- drivers/infiniband/hw/mlx5/main.c | 2 + drivers/infiniband/hw/mlx5/mlx5_ib.h | 9 ++ drivers/infiniband/hw/mlx5/mr.c | 183 ++++++++++++++++++++++++++++++++--- 3 files changed, 179 insertions(+), 15 deletions(-) diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c index 94fe253d4956..c4b2d9db5d07 100644 --- a/drivers/infiniband/hw/mlx5/main.c +++ b/drivers/infiniband/hw/mlx5/main.c @@ -5837,6 +5837,7 @@ static void mlx5_ib_stage_flow_db_cleanup(struct mlx5_ib_dev *dev) static const struct ib_device_ops mlx5_ib_dev_ops = { .add_gid = mlx5_ib_add_gid, .alloc_mr = mlx5_ib_alloc_mr, + .alloc_mr_integrity = mlx5_ib_alloc_mr_integrity, .alloc_pd = mlx5_ib_alloc_pd, .alloc_ucontext = mlx5_ib_alloc_ucontext, .attach_mcast = mlx5_ib_mcg_attach, @@ -5866,6 +5867,7 @@ static const struct ib_device_ops mlx5_ib_dev_ops = { .get_dma_mr = mlx5_ib_get_dma_mr, .get_link_layer = mlx5_ib_port_link_layer, .map_mr_sg = mlx5_ib_map_mr_sg, + .map_mr_sg_pi = mlx5_ib_map_mr_sg_pi, .mmap = mlx5_ib_mmap, .modify_cq = mlx5_ib_modify_cq, .modify_device = mlx5_ib_modify_device, diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index 33b0d042ef05..7ef1c0a3c886 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -567,6 +567,9 @@ struct mlx5_ib_mr { void *descs; dma_addr_t desc_map; int ndescs; + int data_length; + int meta_ndescs; + int meta_length; int max_descs; int desc_size; int access_mode; @@ -585,6 +588,7 @@ struct mlx5_ib_mr { int access_flags; /* Needed for rereg MR */ struct mlx5_ib_mr *parent; + struct mlx5_ib_mr *pi_mr; /* Needed for IB_MR_TYPE_PI type */ atomic_t num_leaf_free; wait_queue_head_t q_leaf_free; }; @@ -1107,7 +1111,12 @@ int mlx5_ib_dereg_mr(struct ib_mr *ibmr); struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, u32 max_num_sg); +struct ib_mr *mlx5_ib_alloc_mr_integrity(struct ib_pd *pd, + u32 max_num_sg, + u32 max_num_meta_sg); int mlx5_ib_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg); +int mlx5_ib_map_mr_sg_pi(struct ib_mr *ibmr, struct ib_scatterlist *data_ib_sg, + struct ib_scatterlist *meta_ib_sg); int mlx5_ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num, const struct ib_wc *in_wc, const struct ib_grh *in_grh, const struct ib_mad_hdr *in, size_t in_mad_size, diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index 659d39734523..b17e7078bdc4 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -1684,17 +1684,22 @@ static void dereg_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr) int mlx5_ib_dereg_mr(struct ib_mr *ibmr) { - dereg_mr(to_mdev(ibmr->device), to_mmr(ibmr)); + struct mlx5_ib_mr *mmr = to_mmr(ibmr); + + if (ibmr->type == IB_MR_TYPE_PI) + dereg_mr(to_mdev(mmr->pi_mr->ibmr.device), mmr->pi_mr); + + dereg_mr(to_mdev(ibmr->device), mmr); + return 0; } -struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd, - enum ib_mr_type mr_type, - u32 max_num_sg) +static struct mlx5_ib_mr *mlx5_ib_alloc_pi_mr(struct ib_pd *pd, + u32 max_num_sg, u32 max_num_meta_sg) { struct mlx5_ib_dev *dev = to_mdev(pd->device); int inlen = MLX5_ST_SZ_BYTES(create_mkey_in); - int ndescs = ALIGN(max_num_sg, 4); + int ndescs = ALIGN(max_num_sg + max_num_meta_sg, 4); struct mlx5_ib_mr *mr; void *mkc; u32 *in; @@ -1716,8 +1721,72 @@ struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd, MLX5_SET(mkc, mkc, qpn, 0xffffff); MLX5_SET(mkc, mkc, 
pd, to_mpd(pd)->pdn); + mr->access_mode = MLX5_MKC_ACCESS_MODE_KLMS; + + err = mlx5_alloc_priv_descs(pd->device, mr, + ndescs, sizeof(struct mlx5_klm)); + if (err) + goto err_free_in; + mr->desc_size = sizeof(struct mlx5_klm); + mr->max_descs = ndescs; + + MLX5_SET(mkc, mkc, access_mode_1_0, mr->access_mode & 0x3); + MLX5_SET(mkc, mkc, access_mode_4_2, (mr->access_mode >> 2) & 0x7); + MLX5_SET(mkc, mkc, umr_en, 1); + + mr->ibmr.pd = pd; + mr->ibmr.device = pd->device; + err = mlx5_core_create_mkey(dev->mdev, &mr->mmkey, in, inlen); + if (err) + goto err_priv_descs; + + mr->mmkey.type = MLX5_MKEY_MR; + mr->ibmr.lkey = mr->mmkey.key; + mr->ibmr.rkey = mr->mmkey.key; + mr->umem = NULL; + kfree(in); + + return mr; + +err_priv_descs: + mlx5_free_priv_descs(mr); +err_free_in: + kfree(in); +err_free: + kfree(mr); + return ERR_PTR(err); +} + +static struct ib_mr *__mlx5_ib_alloc_mr(struct ib_pd *pd, + enum ib_mr_type mr_type, + u32 max_num_sg, u32 max_num_meta_sg) +{ + struct mlx5_ib_dev *dev = to_mdev(pd->device); + int inlen = MLX5_ST_SZ_BYTES(create_mkey_in); + int ndescs = ALIGN(max_num_sg, 4); + struct mlx5_ib_mr *mr; + void *mkc; + u32 *in; + int err; + + mr = kzalloc(sizeof(*mr), GFP_KERNEL); + if (!mr) + return ERR_PTR(-ENOMEM); + + in = kzalloc(inlen, GFP_KERNEL); + if (!in) { + err = -ENOMEM; + goto err_free; + } + + mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry); + MLX5_SET(mkc, mkc, free, 1); + MLX5_SET(mkc, mkc, qpn, 0xffffff); + MLX5_SET(mkc, mkc, pd, to_mpd(pd)->pdn); + if (mr_type == IB_MR_TYPE_MEM_REG) { mr->access_mode = MLX5_MKC_ACCESS_MODE_MTT; + MLX5_SET(mkc, mkc, translations_octword_size, ndescs); MLX5_SET(mkc, mkc, log_page_size, PAGE_SHIFT); err = mlx5_alloc_priv_descs(pd->device, mr, ndescs, sizeof(struct mlx5_mtt)); @@ -1728,6 +1797,7 @@ struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd, mr->max_descs = ndescs; } else if (mr_type == IB_MR_TYPE_SG_GAPS) { mr->access_mode = MLX5_MKC_ACCESS_MODE_KLMS; + MLX5_SET(mkc, mkc, translations_octword_size, ndescs); err = mlx5_alloc_priv_descs(pd->device, mr, ndescs, sizeof(struct mlx5_klm)); @@ -1735,11 +1805,13 @@ struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd, goto err_free_in; mr->desc_size = sizeof(struct mlx5_klm); mr->max_descs = ndescs; - } else if (mr_type == IB_MR_TYPE_SIGNATURE) { + } else if (mr_type == IB_MR_TYPE_SIGNATURE || + mr_type == IB_MR_TYPE_PI) { u32 psv_index[2]; MLX5_SET(mkc, mkc, bsf_en, 1); MLX5_SET(mkc, mkc, bsf_octword_size, MLX5_MKEY_BSF_OCTO_SIZE); + MLX5_SET(mkc, mkc, translations_octword_size, 4); mr->sig = kzalloc(sizeof(*mr->sig), GFP_KERNEL); if (!mr->sig) { err = -ENOMEM; @@ -1760,6 +1832,14 @@ struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd, mr->sig->sig_err_exists = false; /* Next UMR, Arm SIGERR */ ++mr->sig->sigerr_count; + if (mr_type == IB_MR_TYPE_PI) { + mr->pi_mr = mlx5_ib_alloc_pi_mr(pd, max_num_sg, + max_num_meta_sg); + if (IS_ERR(mr->pi_mr)) { + err = PTR_ERR(mr->pi_mr); + goto err_destroy_psv; + } + } } else { mlx5_ib_warn(dev, "Invalid mr type %d\n", mr_type); err = -EINVAL; @@ -1773,7 +1853,7 @@ struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd, mr->ibmr.device = pd->device; err = mlx5_core_create_mkey(dev->mdev, &mr->mmkey, in, inlen); if (err) - goto err_destroy_psv; + goto err_free_pi_mr; mr->mmkey.type = MLX5_MKEY_MR; mr->ibmr.lkey = mr->mmkey.key; @@ -1783,6 +1863,11 @@ struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd, return &mr->ibmr; +err_free_pi_mr: + if (mr->pi_mr) { + dereg_mr(to_mdev(mr->pi_mr->ibmr.device), mr->pi_mr); + mr->pi_mr = NULL; + } err_destroy_psv: 
if (mr->sig) { if (mlx5_core_destroy_psv(dev->mdev, @@ -1804,6 +1889,20 @@ struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd, return ERR_PTR(err); } +struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd, + enum ib_mr_type mr_type, + u32 max_num_sg) +{ + return __mlx5_ib_alloc_mr(pd, mr_type, max_num_sg, 0); +} + +struct ib_mr *mlx5_ib_alloc_mr_integrity(struct ib_pd *pd, + u32 max_num_sg, u32 max_num_meta_sg) +{ + return __mlx5_ib_alloc_mr(pd, IB_MR_TYPE_PI, max_num_sg, + max_num_meta_sg); +} + struct ib_mw *mlx5_ib_alloc_mw(struct ib_pd *pd, enum ib_mw_type type, struct ib_udata *udata) { @@ -1934,18 +2033,19 @@ int mlx5_ib_check_mr_status(struct ib_mr *ibmr, u32 check_mask, static int mlx5_ib_sg_to_klms(struct mlx5_ib_mr *mr, - struct ib_scatterlist *ib_sgl) + struct ib_scatterlist *data_ib_sgl, + struct ib_scatterlist *meta_ib_sgl) { - struct scatterlist *sg = ib_sgl->sg; + struct scatterlist *sg = data_ib_sgl->sg; struct mlx5_klm *klms = mr->descs; - unsigned int sg_offset = ib_sgl->offset; + unsigned int sg_offset = data_ib_sgl->offset; u32 lkey = mr->ibmr.pd->local_dma_lkey; - int i; + int i, j = 0; mr->ibmr.iova = sg_dma_address(sg) + sg_offset; mr->ibmr.length = 0; - for_each_sg(ib_sgl->sg, sg, ib_sgl->dma_nents, i) { + for_each_sg(data_ib_sgl->sg, sg, data_ib_sgl->dma_nents, i) { if (unlikely(i >= mr->max_descs)) break; klms[i].va = cpu_to_be64(sg_dma_address(sg) + sg_offset); @@ -1955,11 +2055,34 @@ mlx5_ib_sg_to_klms(struct mlx5_ib_mr *mr, sg_offset = 0; } + + data_ib_sgl->offset = sg_offset; + mr->ndescs = i; + mr->data_length = mr->ibmr.length; + + if (meta_ib_sgl && meta_ib_sgl->dma_nents) { + sg = meta_ib_sgl->sg; + sg_offset = meta_ib_sgl->offset; + for_each_sg(meta_ib_sgl->sg, sg, meta_ib_sgl->dma_nents, j) { + if (unlikely(i + j >= mr->max_descs)) + break; + klms[i + j].va = cpu_to_be64(sg_dma_address(sg) + + sg_offset); + klms[i + j].bcount = cpu_to_be32(sg_dma_len(sg) - + sg_offset); + klms[i + j].key = cpu_to_be32(lkey); + mr->ibmr.length += sg_dma_len(sg) - sg_offset; + + sg_offset = 0; + } + meta_ib_sgl->offset = sg_offset; - ib_sgl->offset = sg_offset; + mr->meta_ndescs = j; + mr->meta_length = mr->ibmr.length - mr->data_length; + } - return i; + return i + j; } static int mlx5_set_page(struct ib_mr *ibmr, u64 addr) @@ -1976,6 +2099,36 @@ static int mlx5_set_page(struct ib_mr *ibmr, u64 addr) return 0; } +int mlx5_ib_map_mr_sg_pi(struct ib_mr *ibmr, struct ib_scatterlist *data_ib_sg, + struct ib_scatterlist *meta_ib_sg) +{ + struct mlx5_ib_mr *mr = to_mmr(ibmr); + struct mlx5_ib_mr *pi_mr = mr->pi_mr; + int n; + + WARN_ON(ibmr->type != IB_MR_TYPE_PI); + + pi_mr->ndescs = 0; + pi_mr->meta_ndescs = 0; + + ib_dma_sync_single_for_cpu(ibmr->device, pi_mr->desc_map, + pi_mr->desc_size * pi_mr->max_descs, + DMA_TO_DEVICE); + + n = mlx5_ib_sg_to_klms(pi_mr, data_ib_sg, meta_ib_sg); + + /* This is zero-based memory region */ + pi_mr->ibmr.iova = 0; + ibmr->length = pi_mr->ibmr.length; + ibmr->iova = pi_mr->ibmr.iova; + + ib_dma_sync_single_for_device(ibmr->device, pi_mr->desc_map, + pi_mr->desc_size * pi_mr->max_descs, + DMA_TO_DEVICE); + + return n; +} + int mlx5_ib_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg) { struct mlx5_ib_mr *mr = to_mmr(ibmr); @@ -1988,7 +2141,7 @@ int mlx5_ib_map_mr_sg(struct ib_mr *ibmr, struct ib_scatterlist *ib_sg) DMA_TO_DEVICE); if (mr->access_mode == MLX5_MKC_ACCESS_MODE_KLMS) - n = mlx5_ib_sg_to_klms(mr, ib_sg); + n = mlx5_ib_sg_to_klms(mr, ib_sg, NULL); else n = ib_sg_to_pages(ibmr, ib_sg, mlx5_set_page); From patchwork Thu Feb 7 
17:33:24 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Max Gurtovoy X-Patchwork-Id: 10801625 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 904861575 for ; Thu, 7 Feb 2019 17:33:43 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 7E3BE2E34D for ; Thu, 7 Feb 2019 17:33:43 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 730AB2E38E; Thu, 7 Feb 2019 17:33:43 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI,UNPARSEABLE_RELAY autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 269032E34D for ; Thu, 7 Feb 2019 17:33:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726850AbfBGRdl (ORCPT ); Thu, 7 Feb 2019 12:33:41 -0500 Received: from mail-il-dmz.mellanox.com ([193.47.165.129]:49133 "EHLO mellanox.co.il" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726442AbfBGRdi (ORCPT ); Thu, 7 Feb 2019 12:33:38 -0500 Received: from Internal Mail-Server by MTLPINE1 (envelope-from maxg@mellanox.com) with ESMTPS (AES256-SHA encrypted); 7 Feb 2019 19:33:33 +0200 Received: from r-vnc08.mtr.labs.mlnx (r-vnc08.mtr.labs.mlnx [10.208.0.121]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id x17HXVKd025778; Thu, 7 Feb 2019 19:33:33 +0200 From: Max Gurtovoy To: linux-rdma@vger.kernel.org, sagi@grimberg.me, hch@lst.de, leon@kernel.org, jgg@mellanox.com, dledford@redhat.com, bvanassche@acm.org Cc: maxg@mellanox.com, israelr@mellanox.com, idanb@mellanox.com, vladimirk@mellanox.com, shlomin@mellanox.com, oren@mellanox.com Subject: [PATCH 10/17] RDMA/mlx5: Add attr for max number page list length for PI operation Date: Thu, 7 Feb 2019 19:33:24 +0200 Message-Id: <1549560811-8655-11-git-send-email-maxg@mellanox.com> X-Mailer: git-send-email 1.7.8.2 In-Reply-To: <1549560811-8655-1-git-send-email-maxg@mellanox.com> References: <1549560811-8655-1-git-send-email-maxg@mellanox.com> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP PI (protection information) offload is a feature that each RDMA provider can implement differently. Thus, introduce a new device attribute to define the maximal length of the page list for the PI fast registration operation. For example, the mlx5 driver uses a single internal MR to map both data and protection SGLs, so its value is max_fast_reg_page_list_len / 2.
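For review context, a sketch of how a ULP might cap its SG table size with the new attribute; "pi_capable" and "queue_sg_size" are illustrative assumptions, not part of this patch:

	u32 max_sge;

	if (pi_capable)
		max_sge = min_t(u32, queue_sg_size,
				dev->attrs.max_pi_fast_reg_page_list_len);
	else
		max_sge = min_t(u32, queue_sg_size,
				dev->attrs.max_fast_reg_page_list_len);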
Signed-off-by: Max Gurtovoy Reviewed-by: Leon Romanovsky --- drivers/infiniband/hw/mlx5/main.c | 2 ++ include/rdma/ib_verbs.h | 1 + 2 files changed, 3 insertions(+) diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c index c4b2d9db5d07..3460c764e341 100644 --- a/drivers/infiniband/hw/mlx5/main.c +++ b/drivers/infiniband/hw/mlx5/main.c @@ -912,6 +912,8 @@ static int mlx5_ib_query_device(struct ib_device *ibdev, props->max_srq_sge = max_rq_sg - 1; props->max_fast_reg_page_list_len = 1 << MLX5_CAP_GEN(mdev, log_max_klm_list_size); + props->max_pi_fast_reg_page_list_len = + props->max_fast_reg_page_list_len / 2; get_atomic_caps_qp(dev, props); props->masked_atomic_cap = IB_ATOMIC_NONE; props->max_mcast_grp = 1 << MLX5_CAP_GEN(mdev, log_max_mcg); diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 1ec36490bc6f..465009e4837b 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -364,6 +364,7 @@ struct ib_device_attr { int max_srq_wr; int max_srq_sge; unsigned int max_fast_reg_page_list_len; + unsigned int max_pi_fast_reg_page_list_len; u16 max_pkeys; u8 local_ca_ack_delay; int sig_prot_cap; From patchwork Thu Feb 7 17:33:25 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Max Gurtovoy X-Patchwork-Id: 10801633 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9706814E1 for ; Thu, 7 Feb 2019 17:33:46 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 84CC92E34D for ; Thu, 7 Feb 2019 17:33:46 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 7957A2E38B; Thu, 7 Feb 2019 17:33:46 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI,UNPARSEABLE_RELAY autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 036062E358 for ; Thu, 7 Feb 2019 17:33:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726882AbfBGRdm (ORCPT ); Thu, 7 Feb 2019 12:33:42 -0500 Received: from mail-il-dmz.mellanox.com ([193.47.165.129]:49144 "EHLO mellanox.co.il" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726821AbfBGRdi (ORCPT ); Thu, 7 Feb 2019 12:33:38 -0500 Received: from Internal Mail-Server by MTLPINE1 (envelope-from maxg@mellanox.com) with ESMTPS (AES256-SHA encrypted); 7 Feb 2019 19:33:33 +0200 Received: from r-vnc08.mtr.labs.mlnx (r-vnc08.mtr.labs.mlnx [10.208.0.121]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id x17HXVKe025778; Thu, 7 Feb 2019 19:33:33 +0200 From: Max Gurtovoy To: linux-rdma@vger.kernel.org, sagi@grimberg.me, hch@lst.de, leon@kernel.org, jgg@mellanox.com, dledford@redhat.com, bvanassche@acm.org Cc: maxg@mellanox.com, israelr@mellanox.com, idanb@mellanox.com, vladimirk@mellanox.com, shlomin@mellanox.com, oren@mellanox.com Subject: [PATCH 11/17] RDMA/mlx5: Pass UMR segment flags instead of boolean Date: Thu, 7 Feb 2019 19:33:25 +0200 Message-Id: <1549560811-8655-12-git-send-email-maxg@mellanox.com> X-Mailer: git-send-email 1.7.8.2 In-Reply-To: <1549560811-8655-1-git-send-email-maxg@mellanox.com> References: 
<1549560811-8655-1-git-send-email-maxg@mellanox.com> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP UMR ctrl segment flags can vary between UMR operations. For example, using inline UMR or adding free/not-free checks for a memory key. This is a preparation commit before adding the new signature API, which will not need the not-free check for the internal memory key during the UMR operation. Signed-off-by: Max Gurtovoy Reviewed-by: Leon Romanovsky Reviewed-by: Sagi Grimberg --- drivers/infiniband/hw/mlx5/qp.c | 19 ++++++++++++------- 1 file changed, 12 insertions(+), 7 deletions(-) diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c index 7db778d96ef5..670f7fd8de8e 100644 --- a/drivers/infiniband/hw/mlx5/qp.c +++ b/drivers/infiniband/hw/mlx5/qp.c @@ -3984,15 +3984,13 @@ static __be64 sig_mkey_mask(void) } static void set_reg_umr_seg(struct mlx5_wqe_umr_ctrl_seg *umr, - struct mlx5_ib_mr *mr, bool umr_inline) + struct mlx5_ib_mr *mr, u8 flags) { int size = mr->ndescs * mr->desc_size; memset(umr, 0, sizeof(*umr)); - umr->flags = MLX5_UMR_CHECK_NOT_FREE; - if (umr_inline) - umr->flags |= MLX5_UMR_INLINE; + umr->flags = flags; umr->xlt_octowords = cpu_to_be16(get_xlt_octo(size)); umr->mkey_mask = frwr_mkey_mask(); } @@ -4573,12 +4571,14 @@ static int set_psv_wr(struct ib_sig_domain *domain, static int set_reg_wr(struct mlx5_ib_qp *qp, const struct ib_reg_wr *wr, - void **seg, int *size, void **cur_edge) + void **seg, int *size, void **cur_edge, + bool check_not_free) { struct mlx5_ib_mr *mr = to_mmr(wr->mr); struct mlx5_ib_pd *pd = to_mpd(qp->ibqp.pd); size_t mr_list_size = mr->ndescs * mr->desc_size; bool umr_inline = mr_list_size <= MLX5_IB_SQ_UMR_INLINE_THRESHOLD; + u8 flags = 0; if (unlikely(wr->wr.send_flags & IB_SEND_INLINE)) { mlx5_ib_warn(to_mdev(qp->ibqp.device), @@ -4586,7 +4586,12 @@ static int set_reg_wr(struct mlx5_ib_qp *qp, return -EINVAL; } - set_reg_umr_seg(*seg, mr, umr_inline); + if (check_not_free) + flags |= MLX5_UMR_CHECK_NOT_FREE; + if (umr_inline) + flags |= MLX5_UMR_INLINE; + + set_reg_umr_seg(*seg, mr, flags); *seg += sizeof(struct mlx5_wqe_umr_ctrl_seg); *size += sizeof(struct mlx5_wqe_umr_ctrl_seg) / 16; handle_post_send_edge(&qp->sq, seg, *size, cur_edge); @@ -4818,7 +4823,7 @@ static int _mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, qp->sq.wr_data[idx] = IB_WR_REG_MR; ctrl->imm = cpu_to_be32(reg_wr(wr)->key); err = set_reg_wr(qp, reg_wr(wr), &seg, &size, - &cur_edge); + &cur_edge, true); if (err) { *bad_wr = wr; goto out; From patchwork Thu Feb 7 17:33:26 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Max Gurtovoy X-Patchwork-Id: 10801621 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7B2CF14E1 for ; Thu, 7 Feb 2019 17:33:42 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 66C762E358 for ; Thu, 7 Feb 2019 17:33:42 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 5B0F42E37D; Thu, 7 Feb 2019 17:33:42 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI,
RCVD_IN_DNSWL_HI,UNPARSEABLE_RELAY autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 062D42E358 for ; Thu, 7 Feb 2019 17:33:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726401AbfBGRdl (ORCPT ); Thu, 7 Feb 2019 12:33:41 -0500 Received: from mail-il-dmz.mellanox.com ([193.47.165.129]:49169 "EHLO mellanox.co.il" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726897AbfBGRdi (ORCPT ); Thu, 7 Feb 2019 12:33:38 -0500 Received: from Internal Mail-Server by MTLPINE1 (envelope-from maxg@mellanox.com) with ESMTPS (AES256-SHA encrypted); 7 Feb 2019 19:33:33 +0200 Received: from r-vnc08.mtr.labs.mlnx (r-vnc08.mtr.labs.mlnx [10.208.0.121]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id x17HXVKf025778; Thu, 7 Feb 2019 19:33:33 +0200 From: Max Gurtovoy To: linux-rdma@vger.kernel.org, sagi@grimberg.me, hch@lst.de, leon@kernel.org, jgg@mellanox.com, dledford@redhat.com, bvanassche@acm.org Cc: maxg@mellanox.com, israelr@mellanox.com, idanb@mellanox.com, vladimirk@mellanox.com, shlomin@mellanox.com, oren@mellanox.com Subject: [PATCH 12/17] RDMA/mlx5: Update set_sig_data_segment attribute for new signature API Date: Thu, 7 Feb 2019 19:33:26 +0200 Message-Id: <1549560811-8655-13-git-send-email-maxg@mellanox.com> X-Mailer: git-send-email 1.7.8.2 In-Reply-To: <1549560811-8655-1-git-send-email-maxg@mellanox.com> References: <1549560811-8655-1-git-send-email-maxg@mellanox.com> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Explicitly pass the sig_mr and the access flags for the mkey segment configuration. This function will also be used in the new signature API, so modify it so that it can serve both APIs. This is a preparation commit before adding the new signature API.
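For review convenience, the two call shapes this refactor enables, as they appear in this patch and in patch 13 respectively:

	/* old signature path: attributes come from the sig handover WR */
	set_sig_mkey_segment(*seg, wr->sig_mr, wr->access_flags, xlt_size,
			     region_len, pdn);

	/* new signature path: attributes come from the reg WR */
	set_sig_mkey_segment(*seg, wr->mr, wr->access, xlt_size, region_len,
			     pdn);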
Signed-off-by: Max Gurtovoy Reviewed-by: Sagi Grimberg --- drivers/infiniband/hw/mlx5/qp.c | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c index 670f7fd8de8e..d145794f2a23 100644 --- a/drivers/infiniband/hw/mlx5/qp.c +++ b/drivers/infiniband/hw/mlx5/qp.c @@ -4462,17 +4462,15 @@ static int set_sig_data_segment(const struct ib_sig_handover_wr *wr, } static void set_sig_mkey_segment(struct mlx5_mkey_seg *seg, - const struct ib_sig_handover_wr *wr, u32 size, - u32 length, u32 pdn) + struct ib_mr *sig_mr, int access_flags, + u32 size, u32 length, u32 pdn) { - struct ib_mr *sig_mr = wr->sig_mr; u32 sig_key = sig_mr->rkey; u8 sigerr = to_mmr(sig_mr)->sig->sigerr_count & 1; memset(seg, 0, sizeof(*seg)); - seg->flags = get_umr_flags(wr->access_flags) | - MLX5_MKC_ACCESS_MODE_KLMS; + seg->flags = get_umr_flags(access_flags) | MLX5_MKC_ACCESS_MODE_KLMS; seg->qpn_mkey7_0 = cpu_to_be32((sig_key & 0xff) | 0xffffff00); seg->flags_pd = cpu_to_be32(MLX5_MKEY_REMOTE_INVAL | sigerr << 26 | MLX5_MKEY_BSF_EN | pdn); @@ -4529,7 +4527,8 @@ static int set_sig_umr_wr(const struct ib_send_wr *send_wr, *size += sizeof(struct mlx5_wqe_umr_ctrl_seg) / 16; handle_post_send_edge(&qp->sq, seg, *size, cur_edge); - set_sig_mkey_segment(*seg, wr, xlt_size, region_len, pdn); + set_sig_mkey_segment(*seg, wr->sig_mr, wr->access_flags, xlt_size, + region_len, pdn); *seg += sizeof(struct mlx5_mkey_seg); *size += sizeof(struct mlx5_mkey_seg) / 16; handle_post_send_edge(&qp->sq, seg, *size, cur_edge); From patchwork Thu Feb 7 17:33:27 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Max Gurtovoy X-Patchwork-Id: 10801645 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 24C871575 for ; Thu, 7 Feb 2019 17:33:55 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0CEAF28647 for ; Thu, 7 Feb 2019 17:33:55 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id F1C93286DF; Thu, 7 Feb 2019 17:33:54 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI,UNPARSEABLE_RELAY autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 095BE28647 for ; Thu, 7 Feb 2019 17:33:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726990AbfBGRds (ORCPT ); Thu, 7 Feb 2019 12:33:48 -0500 Received: from mail-il-dmz.mellanox.com ([193.47.165.129]:49157 "EHLO mellanox.co.il" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726707AbfBGRdj (ORCPT ); Thu, 7 Feb 2019 12:33:39 -0500 Received: from Internal Mail-Server by MTLPINE1 (envelope-from maxg@mellanox.com) with ESMTPS (AES256-SHA encrypted); 7 Feb 2019 19:33:33 +0200 Received: from r-vnc08.mtr.labs.mlnx (r-vnc08.mtr.labs.mlnx [10.208.0.121]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id x17HXVKg025778; Thu, 7 Feb 2019 19:33:33 +0200 From: Max Gurtovoy To: linux-rdma@vger.kernel.org, sagi@grimberg.me, hch@lst.de, leon@kernel.org, jgg@mellanox.com, dledford@redhat.com, bvanassche@acm.org Cc: maxg@mellanox.com, 
israelr@mellanox.com, idanb@mellanox.com, vladimirk@mellanox.com, shlomin@mellanox.com, oren@mellanox.com Subject: [PATCH 13/17] RDMA/mlx5: Introduce and implement new IB_WR_REG_PI_MR work request Date: Thu, 7 Feb 2019 19:33:27 +0200 Message-Id: <1549560811-8655-14-git-send-email-maxg@mellanox.com> X-Mailer: git-send-email 1.7.8.2 In-Reply-To: <1549560811-8655-1-git-send-email-maxg@mellanox.com> References: <1549560811-8655-1-git-send-email-maxg@mellanox.com> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP This new WR will be used to perform PI (protection information) handover using the new API. Using the new API, the user will post a single WR that internally performs all the actions needed to complete the PI operation. This new WR will use a memory region that was allocated as IB_MR_TYPE_PI and was mapped using ib_map_mr_sg_pi to perform the registration. In the old API, in order to perform a signature handover operation, each ULP should perform the following: 1. Map and register the data buffers. 2. Map and register the protection buffers. 3. Post a special reg WR to configure the signature handover operation layout. 4. Invalidate the signature memory key. 5. Invalidate the protection buffers memory key. 6. Invalidate the data buffers memory key. In the new API, the mapping of both data and protection buffers is performed using a single call to the ib_map_mr_sg_pi function. The registration of the buffers and the configuration of the signature operation layout are done by a single new work request called IB_WR_REG_PI_MR. This patch implements this operation for mlx5 devices that are capable of offloading data integrity generation/validation while performing the actual buffer transfer. This patch will not remove the old signature API that is used by the iSER initiator and target drivers. This will be done in the future. In the internal implementation, for each IB_WR_REG_PI_MR work request, we use a single UMR operation to register both data and protection buffers using KLMs. Afterwards, another UMR operation will describe the strided block format. These will be followed by 2 SET_PSV operations to set the initial signature parameters of the memory/wire domains passed by the user. At the end of the whole transaction, only the signature memory key (the one that is exposed for the RDMA operation) will be invalidated.
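For review context, a minimal posting sketch under the new API; the req->reg_cqe field and the access mask are illustrative assumptions, not part of this patch:

	struct ib_reg_wr wr = {};

	/* mr is IB_MR_TYPE_PI and was mapped with ib_map_mr_sg_pi() */
	wr.wr.opcode = IB_WR_REG_PI_MR;
	wr.wr.wr_cqe = &req->reg_cqe;
	wr.wr.num_sge = 0;	/* set_pi_umr_wr() rejects num_sge != 0 */
	wr.mr = mr;
	wr.key = mr->rkey;
	wr.access = IB_ACCESS_LOCAL_WRITE |
		    IB_ACCESS_REMOTE_READ |
		    IB_ACCESS_REMOTE_WRITE;	/* no IB_ACCESS_REMOTE_ATOMIC */

	ret = ib_post_send(qp, &wr.wr, NULL);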
Signed-off-by: Max Gurtovoy Signed-off-by: Israel Rukshin Reviewed-by: Sagi Grimberg --- drivers/infiniband/hw/mlx5/qp.c | 218 ++++++++++++++++++++++++++++++++++++---- include/linux/mlx5/qp.h | 3 +- include/rdma/ib_verbs.h | 1 + 3 files changed, 201 insertions(+), 21 deletions(-) diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c index d145794f2a23..c1150b693f2a 100644 --- a/drivers/infiniband/hw/mlx5/qp.c +++ b/drivers/infiniband/hw/mlx5/qp.c @@ -3986,7 +3986,7 @@ static __be64 sig_mkey_mask(void) static void set_reg_umr_seg(struct mlx5_wqe_umr_ctrl_seg *umr, struct mlx5_ib_mr *mr, u8 flags) { - int size = mr->ndescs * mr->desc_size; + int size = (mr->ndescs + mr->meta_ndescs) * mr->desc_size; memset(umr, 0, sizeof(*umr)); @@ -4117,7 +4117,7 @@ static void set_reg_mkey_seg(struct mlx5_mkey_seg *seg, struct mlx5_ib_mr *mr, u32 key, int access) { - int ndescs = ALIGN(mr->ndescs, 8) >> 1; + int ndescs = ALIGN(mr->ndescs + mr->meta_ndescs, 8) >> 1; memset(seg, 0, sizeof(*seg)); @@ -4168,7 +4168,7 @@ static void set_reg_data_seg(struct mlx5_wqe_data_seg *dseg, struct mlx5_ib_mr *mr, struct mlx5_ib_pd *pd) { - int bcount = mr->desc_size * mr->ndescs; + int bcount = mr->desc_size * (mr->ndescs + mr->meta_ndescs); dseg->addr = cpu_to_be64(mr->desc_map); dseg->byte_count = cpu_to_be32(ALIGN(bcount, 64)); @@ -4361,23 +4361,52 @@ static int mlx5_set_bsf(struct ib_mr *sig_mr, return 0; } -static int set_sig_data_segment(const struct ib_sig_handover_wr *wr, - struct mlx5_ib_qp *qp, void **seg, - int *size, void **cur_edge) +static int set_sig_data_segment(const struct ib_send_wr *send_wr, + struct ib_mr *sig_mr, + struct ib_sig_attrs *sig_attrs, + struct mlx5_ib_qp *qp, void **seg, int *size, + void **cur_edge) { - struct ib_sig_attrs *sig_attrs = wr->sig_attrs; - struct ib_mr *sig_mr = wr->sig_mr; struct mlx5_bsf *bsf; - u32 data_len = wr->wr.sg_list->length; - u32 data_key = wr->wr.sg_list->lkey; - u64 data_va = wr->wr.sg_list->addr; + u32 data_len; + u32 data_key; + u64 data_va; + u32 prot_len = 0; + u32 prot_key = 0; + u64 prot_va = 0; + bool prot = false; int ret; int wqe_size; - if (!wr->prot || - (data_key == wr->prot->lkey && - data_va == wr->prot->addr && - data_len == wr->prot->length)) { + if (send_wr->opcode == IB_WR_REG_SIG_MR) { + const struct ib_sig_handover_wr *wr = sig_handover_wr(send_wr); + + data_len = wr->wr.sg_list->length; + data_key = wr->wr.sg_list->lkey; + data_va = wr->wr.sg_list->addr; + if (wr->prot) { + prot_len = wr->prot->length; + prot_key = wr->prot->lkey; + prot_va = wr->prot->addr; + prot = true; + } + } else { + struct mlx5_ib_mr *mr = to_mmr(sig_mr); + struct mlx5_ib_mr *pi_mr = mr->pi_mr; + + data_len = pi_mr->data_length; + data_key = pi_mr->ibmr.lkey; + data_va = pi_mr->ibmr.iova; + if (pi_mr->meta_ndescs) { + prot_len = pi_mr->meta_length; + prot_key = pi_mr->ibmr.lkey; + prot_va = pi_mr->ibmr.iova + data_len; + prot = true; + } + } + + if (!prot || (data_key == prot_key && data_va == prot_va && + data_len == prot_len)) { /** * Source domain doesn't contain signature information * or data and protection are interleaved in memory. 
@@ -4411,8 +4440,6 @@ static int set_sig_data_segment(const struct ib_sig_handover_wr *wr, struct mlx5_stride_block_ctrl_seg *sblock_ctrl; struct mlx5_stride_block_entry *data_sentry; struct mlx5_stride_block_entry *prot_sentry; - u32 prot_key = wr->prot->lkey; - u64 prot_va = wr->prot->addr; u16 block_size = sig_attrs->mem.sig.dif.pi_interval; int prot_size; @@ -4490,6 +4517,56 @@ static void set_sig_umr_segment(struct mlx5_wqe_umr_ctrl_seg *umr, umr->mkey_mask = sig_mkey_mask(); } +static int set_pi_umr_wr(const struct ib_send_wr *send_wr, + struct mlx5_ib_qp *qp, void **seg, int *size, + void **cur_edge) +{ + const struct ib_reg_wr *wr = reg_wr(send_wr); + struct mlx5_ib_mr *sig_mr = to_mmr(wr->mr); + struct mlx5_ib_mr *pi_mr = sig_mr->pi_mr; + struct ib_sig_attrs *sig_attrs = sig_mr->ibmr.sig_attrs; + u32 pdn = get_pd(qp)->pdn; + u32 xlt_size; + int region_len, ret; + + if (unlikely(send_wr->num_sge != 0) || + unlikely(wr->access & IB_ACCESS_REMOTE_ATOMIC) || + unlikely(!sig_mr->sig) || unlikely(!qp->signature_en) || + unlikely(!sig_mr->sig->sig_status_checked)) + return -EINVAL; + + /* length of the protected region, data + protection */ + region_len = pi_mr->ibmr.length; + + /** + * KLM octoword size - if protection was provided + * then we use strided block format (3 octowords), + * else we use single KLM (1 octoword) + **/ + if (sig_attrs->mem.sig_type != IB_SIG_TYPE_NONE) + xlt_size = 0x30; + else + xlt_size = sizeof(struct mlx5_klm); + + set_sig_umr_segment(*seg, xlt_size); + *seg += sizeof(struct mlx5_wqe_umr_ctrl_seg); + *size += sizeof(struct mlx5_wqe_umr_ctrl_seg) / 16; + handle_post_send_edge(&qp->sq, seg, *size, cur_edge); + + set_sig_mkey_segment(*seg, wr->mr, wr->access, xlt_size, region_len, + pdn); + *seg += sizeof(struct mlx5_mkey_seg); + *size += sizeof(struct mlx5_mkey_seg) / 16; + handle_post_send_edge(&qp->sq, seg, *size, cur_edge); + + ret = set_sig_data_segment(send_wr, wr->mr, sig_attrs, qp, seg, size, + cur_edge); + if (ret) + return ret; + + sig_mr->sig->sig_status_checked = false; + return 0; +} static int set_sig_umr_wr(const struct ib_send_wr *send_wr, struct mlx5_ib_qp *qp, void **seg, int *size, @@ -4533,7 +4610,8 @@ static int set_sig_umr_wr(const struct ib_send_wr *send_wr, *size += sizeof(struct mlx5_mkey_seg) / 16; handle_post_send_edge(&qp->sq, seg, *size, cur_edge); - ret = set_sig_data_segment(wr, qp, seg, size, cur_edge); + ret = set_sig_data_segment(send_wr, wr->sig_mr, wr->sig_attrs, qp, seg, + size, cur_edge); if (ret) return ret; @@ -4575,7 +4653,7 @@ static int set_reg_wr(struct mlx5_ib_qp *qp, { struct mlx5_ib_mr *mr = to_mmr(wr->mr); struct mlx5_ib_pd *pd = to_mpd(qp->ibqp.pd); - size_t mr_list_size = mr->ndescs * mr->desc_size; + int mr_list_size = (mr->ndescs + mr->meta_ndescs) * mr->desc_size; bool umr_inline = mr_list_size <= MLX5_IB_SQ_UMR_INLINE_THRESHOLD; u8 flags = 0; @@ -4717,8 +4795,11 @@ static int _mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, struct mlx5_wqe_ctrl_seg *ctrl = NULL; /* compiler warning */ struct mlx5_ib_dev *dev = to_mdev(ibqp->device); struct mlx5_core_dev *mdev = dev->mdev; + struct ib_reg_wr reg_pi_wr; struct mlx5_ib_qp *qp; struct mlx5_ib_mr *mr; + struct mlx5_ib_mr *pi_mr; + struct ib_sig_attrs *sig_attrs; struct mlx5_wqe_xrc_seg *xrc; struct mlx5_bf *bf; void *cur_edge; @@ -4772,7 +4853,8 @@ static int _mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, goto out; } - if (wr->opcode == IB_WR_REG_MR) { + if (wr->opcode == IB_WR_REG_MR || + wr->opcode == IB_WR_REG_PI_MR) { 
fence = dev->umr_fence; next_fence = MLX5_FENCE_MODE_INITIATOR_SMALL; } else { @@ -4830,6 +4912,102 @@ static int _mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, num_sge = 0; break; + case IB_WR_REG_PI_MR: + memset(®_pi_wr, 0, sizeof(struct ib_reg_wr)); + + mr = to_mmr(reg_wr(wr)->mr); + pi_mr = mr->pi_mr; + + reg_pi_wr.mr = &pi_mr->ibmr; + reg_pi_wr.access = reg_wr(wr)->access; + reg_pi_wr.key = pi_mr->ibmr.rkey; + + qp->sq.wr_data[idx] = IB_WR_REG_PI_MR; + ctrl->imm = cpu_to_be32(reg_pi_wr.key); + /* UMR for data + protection registration */ + err = set_reg_wr(qp, ®_pi_wr, &seg, &size, + &cur_edge, false); + if (err) { + *bad_wr = wr; + goto out; + } + finish_wqe(qp, ctrl, seg, size, cur_edge, idx, + wr->wr_id, nreq, fence, + MLX5_OPCODE_UMR); + + err = begin_wqe(qp, &seg, &ctrl, wr, &idx, + &size, &cur_edge, nreq); + if (err) { + mlx5_ib_warn(dev, "\n"); + err = -ENOMEM; + *bad_wr = wr; + goto out; + } + ctrl->imm = cpu_to_be32(mr->ibmr.rkey); + /* UMR for sig MR */ + err = set_pi_umr_wr(wr, qp, &seg, &size, + &cur_edge); + if (err) { + mlx5_ib_warn(dev, "\n"); + *bad_wr = wr; + goto out; + } + finish_wqe(qp, ctrl, seg, size, cur_edge, idx, + wr->wr_id, nreq, fence, + MLX5_OPCODE_UMR); + + /* + * SET_PSV WQEs are not signaled and solicited + * on error + */ + sig_attrs = mr->ibmr.sig_attrs; + err = __begin_wqe(qp, &seg, &ctrl, wr, &idx, + &size, &cur_edge, nreq, false, + true); + if (err) { + mlx5_ib_warn(dev, "\n"); + err = -ENOMEM; + *bad_wr = wr; + goto out; + } + err = set_psv_wr(&sig_attrs->mem, + mr->sig->psv_memory.psv_idx, + &seg, &size); + if (err) { + mlx5_ib_warn(dev, "\n"); + *bad_wr = wr; + goto out; + } + finish_wqe(qp, ctrl, seg, size, cur_edge, idx, + wr->wr_id, nreq, next_fence, + MLX5_OPCODE_SET_PSV); + + err = __begin_wqe(qp, &seg, &ctrl, wr, &idx, + &size, &cur_edge, nreq, false, + true); + if (err) { + mlx5_ib_warn(dev, "\n"); + err = -ENOMEM; + *bad_wr = wr; + goto out; + } + err = set_psv_wr(&sig_attrs->wire, + mr->sig->psv_wire.psv_idx, + &seg, &size); + if (err) { + mlx5_ib_warn(dev, "\n"); + *bad_wr = wr; + goto out; + } + finish_wqe(qp, ctrl, seg, size, cur_edge, idx, + wr->wr_id, nreq, next_fence, + MLX5_OPCODE_SET_PSV); + + qp->next_fence = + MLX5_FENCE_MODE_INITIATOR_SMALL; + num_sge = 0; + goto skip_psv; + case IB_WR_REG_SIG_MR: qp->sq.wr_data[idx] = IB_WR_REG_SIG_MR; mr = to_mmr(sig_handover_wr(wr)->sig_mr); diff --git a/include/linux/mlx5/qp.h b/include/linux/mlx5/qp.h index b26ea9077384..4342c3f1b2ee 100644 --- a/include/linux/mlx5/qp.h +++ b/include/linux/mlx5/qp.h @@ -37,7 +37,8 @@ #include #define MLX5_INVALID_LKEY 0x100 -#define MLX5_SIG_WQE_SIZE (MLX5_SEND_WQE_BB * 5) +/* UMR (3 WQE_BB's) + SIG (3 WQE_BB's) + PSV (mem) + PSV (wire) */ +#define MLX5_SIG_WQE_SIZE (MLX5_SEND_WQE_BB * 8) #define MLX5_DIF_SIZE 8 #define MLX5_STRIDE_BLOCK_OP 0x400 #define MLX5_CPY_GRD_MASK 0xc0 diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 465009e4837b..85b8306dd372 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -1202,6 +1202,7 @@ enum ib_wr_opcode { /* These are kernel only and can not be issued by userspace */ IB_WR_REG_MR = 0x20, IB_WR_REG_SIG_MR, + IB_WR_REG_PI_MR, /* reserve values for low level drivers' internal use. * These values will not be used at all in the ib core layer. 
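As a sanity check on the MLX5_SIG_WQE_SIZE change above, the new worst case adds up exactly as the comment suggests (sizes in MLX5_SEND_WQE_BB, i.e. 64-byte basic blocks):

    UMR registering data + protection buffers    3 BBs
    UMR configuring the signature mkey (BSF)     3 BBs
    SET_PSV for the memory domain                1 BB
    SET_PSV for the wire domain                  1 BB
    total                                        8 BBs -> MLX5_SEND_WQE_BB * 8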
From patchwork Thu Feb 7 17:33:28 2019 X-Patchwork-Submitter: Max Gurtovoy X-Patchwork-Id: 10801615 From: Max Gurtovoy To: linux-rdma@vger.kernel.org, sagi@grimberg.me, hch@lst.de, leon@kernel.org, jgg@mellanox.com, dledford@redhat.com, bvanassche@acm.org Cc: maxg@mellanox.com, israelr@mellanox.com, idanb@mellanox.com, vladimirk@mellanox.com, shlomin@mellanox.com, oren@mellanox.com Subject: [PATCH 14/17] IB/iser: Refactor iscsi_iser_check_protection function Date: Thu, 7 Feb 2019 19:33:28 +0200 Message-Id: <1549560811-8655-15-git-send-email-maxg@mellanox.com> X-Mailer: git-send-email 1.7.8.2 In-Reply-To: <1549560811-8655-1-git-send-email-maxg@mellanox.com> References: <1549560811-8655-1-git-send-email-maxg@mellanox.com>

From: Israel Rukshin

Reduce lines of code by using a local variable.

Signed-off-by: Israel Rukshin Reviewed-by: Max Gurtovoy Reviewed-by: Sagi Grimberg --- drivers/infiniband/ulp/iser/iscsi_iser.c | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-) diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.c b/drivers/infiniband/ulp/iser/iscsi_iser.c index 8c707accd148..a78e219a2297 100644 --- a/drivers/infiniband/ulp/iser/iscsi_iser.c +++ b/drivers/infiniband/ulp/iser/iscsi_iser.c @@ -406,13 +406,10 @@ static u8 iscsi_iser_check_protection(struct iscsi_task *task, sector_t *sector) { struct iscsi_iser_task *iser_task = task->dd_data; + enum iser_data_dir dir = iser_task->dir[ISER_DIR_IN] ?
+ ISER_DIR_IN : ISER_DIR_OUT; - if (iser_task->dir[ISER_DIR_IN]) - return iser_check_task_pi_status(iser_task, ISER_DIR_IN, - sector); - else - return iser_check_task_pi_status(iser_task, ISER_DIR_OUT, - sector); + return iser_check_task_pi_status(iser_task, dir, sector); } /**

From patchwork Thu Feb 7 17:33:29 2019 X-Patchwork-Submitter: Max Gurtovoy X-Patchwork-Id: 10801631 From: Max Gurtovoy To: linux-rdma@vger.kernel.org, sagi@grimberg.me, hch@lst.de, leon@kernel.org, jgg@mellanox.com, dledford@redhat.com, bvanassche@acm.org Cc: maxg@mellanox.com, israelr@mellanox.com, idanb@mellanox.com, vladimirk@mellanox.com, shlomin@mellanox.com, oren@mellanox.com Subject: [PATCH 15/17] IB/iser: Use IB_WR_REG_PI_MR for PI handover Date: Thu, 7 Feb 2019 19:33:29 +0200 Message-Id: <1549560811-8655-16-git-send-email-maxg@mellanox.com> X-Mailer: git-send-email 1.7.8.2 In-Reply-To: <1549560811-8655-1-git-send-email-maxg@mellanox.com> References: <1549560811-8655-1-git-send-email-maxg@mellanox.com>

From: Israel Rukshin

Using this new API reduces iSER code complexity. It also reduces the maximum number of work requests per task and the need to deal with multiple MRs (and their registrations and invalidations) per task. This is done by using a single WR and a special MR type (IB_MR_TYPE_PI) for the PI operation.
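For context, the per-task worst case behind the ISER_MAX_WRS change in the hunk below works out as follows (a restatement of the patch's own comment):

    old API: (local invalidate + fast registration) for each of the
             data, protection and signature regions, plus the PDU send
             = 2 * 3 + 1 = 7 WRs
    new API: (local invalidate + fast registration) for one PI memory
             region, plus the PDU send
             = 2 * 1 + 1 = 3 WRs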
The setup of the tested benchmark:
- 2 servers with 24 cores (1 initiator and 1 target)
- 24 target sessions with 1 LUN each
- ramdisk backstore
- PI active

Performance results running fio (24 jobs, 128 iodepth) using write_generate=0 and read_verify=0 (with/without the patch):

bs     IOPS(read)         IOPS(write)
----   ----------         -----------
512    1236.6K/1164.3K    1357.2K/1332.8K
1k     1196.5K/1163.8K    1348.4K/1262.7K
2k     1016.7K/921950     1003.7K/931230
4k     662728/600545      595423/501513
8k     385954/384345      333775/277090
16k    222864/222820      170317/170671
32k    116869/114896      82331/82244
64k    55205/54931        40264/40021

Using write_generate=1 and read_verify=1 (with/without the patch):

bs     IOPS(read)         IOPS(write)
----   ----------         -----------
512    1090.1K/1030.9K    1303.9K/1101.4K
1k     1057.7K/904583     1318.4K/988085
2k     965226/638799      1008.6K/692514
4k     555479/410151      542414/414517
8k     298675/224964      264729/237508
16k    133485/122481      164625/138647
32k    74329/67615        80143/78743
64k    35716/35519        39294/37334

We get a performance improvement at all block sizes; the most significant gain is for 4k writes (almost 30% more IOPS).

Signed-off-by: Israel Rukshin Reviewed-by: Max Gurtovoy --- drivers/infiniband/ulp/iser/iscsi_iser.h | 34 ++-------- drivers/infiniband/ulp/iser/iser_initiator.c | 8 ++- drivers/infiniband/ulp/iser/iser_memory.c | 99 ++++++++++------------------ drivers/infiniband/ulp/iser/iser_verbs.c | 91 ++++++++----------------- 4 files changed, 75 insertions(+), 157 deletions(-) diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.h b/drivers/infiniband/ulp/iser/iscsi_iser.h index 2fc79ea897a3..76736befca70 100644 --- a/drivers/infiniband/ulp/iser/iscsi_iser.h +++ b/drivers/infiniband/ulp/iser/iscsi_iser.h @@ -224,12 +224,10 @@ enum iser_desc_type { }; /* Maximum number of work requests per task: - * Data memory region local invalidate + fast registration - * Protection memory region local invalidate + fast registration - * Signature memory region local invalidate + fast registration + * Memory region local invalidate + fast registration * PDU send */ -#define ISER_MAX_WRS 7 +#define ISER_MAX_WRS 3 /** * struct iser_tx_desc - iSER TX descriptor @@ -245,9 +243,6 @@ enum iser_desc_type { * @mapped: Is the task header mapped * @wr_idx: Current WR index * @wrs: Array of WRs per task - * @data_reg: Data buffer registration details - * @prot_reg: Protection buffer registration details - * @sig_attrs: Signature attributes */ struct iser_tx_desc { struct iser_ctrl iser_header; @@ -262,11 +257,7 @@ struct iser_tx_desc { union iser_wr { struct ib_send_wr send; struct ib_reg_wr fast_reg; - struct ib_sig_handover_wr sig; } wrs[ISER_MAX_WRS]; - struct iser_mem_reg data_reg; - struct iser_mem_reg prot_reg; - struct ib_sig_attrs sig_attrs; }; #define ISER_RX_PAD_SIZE (256 - (ISER_RX_PAYLOAD_SIZE + \ @@ -386,6 +377,7 @@ struct iser_device { * * @mr: memory region * @fmr_pool: pool of fmrs + * @sig_mr: signature memory region * @page_vec: fast reg page list used by fmr pool * @mr_valid: is mr valid indicator */ @@ -394,36 +386,22 @@ struct iser_reg_resources { struct ib_mr *mr; struct ib_fmr_pool *fmr_pool; }; + struct ib_mr *sig_mr; struct iser_page_vec *page_vec; u8 mr_valid:1; }; -/** - * struct iser_pi_context - Protection information context - * - * @rsc: protection buffer registration resources - * @sig_mr: signature enable memory region - * @sig_mr_valid: is sig_mr valid indicator - * @sig_protected: is region protected indicator - */ -struct iser_pi_context { - struct iser_reg_resources rsc; - struct ib_mr *sig_mr; - u8 sig_mr_valid:1; - u8 sig_protected:1; -};
/** * struct iser_fr_desc - Fast registration descriptor * * @list: entry in connection fastreg pool * @rsc: data buffer registration resources - * @pi_ctx: protection information context + * @sig_protected: is region protected indicator */ struct iser_fr_desc { struct list_head list; struct iser_reg_resources rsc; - struct iser_pi_context *pi_ctx; + bool sig_protected; struct list_head all_list; }; diff --git a/drivers/infiniband/ulp/iser/iser_initiator.c b/drivers/infiniband/ulp/iser/iser_initiator.c index 9556ec55dec2..ae8993a7a34c 100644 --- a/drivers/infiniband/ulp/iser/iser_initiator.c +++ b/drivers/infiniband/ulp/iser/iser_initiator.c @@ -594,10 +594,9 @@ void iser_login_rsp(struct ib_cq *cq, struct ib_wc *wc) static inline int iser_inv_desc(struct iser_fr_desc *desc, u32 rkey) { - if (likely(rkey == desc->rsc.mr->rkey)) { + if (likely(rkey == desc->rsc.mr->rkey || + (desc->rsc.sig_mr && rkey == desc->rsc.sig_mr->rkey))) { desc->rsc.mr_valid = 0; - } else if (likely(desc->pi_ctx && rkey == desc->pi_ctx->sig_mr->rkey)) { - desc->pi_ctx->sig_mr_valid = 0; } else { iser_err("Bogus remote invalidation for rkey %#x\n", rkey); return -EINVAL; @@ -752,6 +751,9 @@ void iser_task_rdma_init(struct iscsi_iser_task *iser_task) iser_task->prot[ISER_DIR_IN].data_len = 0; iser_task->prot[ISER_DIR_OUT].data_len = 0; + iser_task->prot[ISER_DIR_IN].ib_sg.dma_nents = 0; + iser_task->prot[ISER_DIR_OUT].ib_sg.dma_nents = 0; + memset(&iser_task->rdma_reg[ISER_DIR_IN], 0, sizeof(struct iser_mem_reg)); memset(&iser_task->rdma_reg[ISER_DIR_OUT], 0, diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c index 8fd3df94ec4b..1e8473e6e375 100644 --- a/drivers/infiniband/ulp/iser/iser_memory.c +++ b/drivers/infiniband/ulp/iser/iser_memory.c @@ -378,17 +378,17 @@ iser_inv_rkey(struct ib_send_wr *inv_wr, static int iser_reg_sig_mr(struct iscsi_iser_task *iser_task, - struct iser_pi_context *pi_ctx, - struct iser_mem_reg *data_reg, - struct iser_mem_reg *prot_reg, + struct iser_data_buf *mem, + struct iser_data_buf *sig_mem, + struct iser_reg_resources *rsc, struct iser_mem_reg *sig_reg) { struct iser_tx_desc *tx_desc = &iser_task->desc; - struct ib_sig_attrs *sig_attrs = &tx_desc->sig_attrs; struct ib_cqe *cqe = &iser_task->iser_conn->ib_conn.reg_cqe; - struct ib_sig_handover_wr *wr; - struct ib_mr *mr = pi_ctx->sig_mr; - int ret; + struct ib_mr *mr = rsc->sig_mr; + struct ib_sig_attrs *sig_attrs = mr->sig_attrs; + struct ib_reg_wr *wr; + int ret, n; memset(sig_attrs, 0, sizeof(*sig_attrs)); ret = iser_set_sig_attrs(iser_task->sc, sig_attrs); @@ -397,33 +397,35 @@ iser_reg_sig_mr(struct iscsi_iser_task *iser_task, iser_set_prot_checks(iser_task->sc, &sig_attrs->check_mask); - if (pi_ctx->sig_mr_valid) + if (rsc->mr_valid) iser_inv_rkey(iser_tx_next_wr(tx_desc), mr, cqe); ib_update_fast_reg_key(mr, ib_inc_rkey(mr->rkey)); - wr = container_of(iser_tx_next_wr(tx_desc), struct ib_sig_handover_wr, - wr); - wr->wr.opcode = IB_WR_REG_SIG_MR; + n = ib_map_mr_sg_pi(mr, &mem->ib_sg, &sig_mem->ib_sg, SZ_4K); + if (unlikely(n != mem->ib_sg.dma_nents + sig_mem->ib_sg.dma_nents)) { + iser_err("failed to map sg (%d/%d)\n", + n, mem->ib_sg.dma_nents + sig_mem->ib_sg.dma_nents); + return n < 0 ? 
n : -EINVAL; + } + + wr = container_of(iser_tx_next_wr(tx_desc), struct ib_reg_wr, wr); + memset(wr, 0, sizeof(*wr)); + wr->wr.opcode = IB_WR_REG_PI_MR; wr->wr.wr_cqe = cqe; - wr->wr.sg_list = &data_reg->sge; - wr->wr.num_sge = 1; + wr->wr.num_sge = 0; wr->wr.send_flags = 0; - wr->sig_attrs = sig_attrs; - wr->sig_mr = mr; - if (scsi_prot_sg_count(iser_task->sc)) - wr->prot = &prot_reg->sge; - else - wr->prot = NULL; - wr->access_flags = IB_ACCESS_LOCAL_WRITE | - IB_ACCESS_REMOTE_READ | - IB_ACCESS_REMOTE_WRITE; - pi_ctx->sig_mr_valid = 1; + wr->mr = mr; + wr->key = mr->rkey; + wr->access = IB_ACCESS_LOCAL_WRITE | + IB_ACCESS_REMOTE_READ | + IB_ACCESS_REMOTE_WRITE; + rsc->mr_valid = 1; sig_reg->sge.lkey = mr->lkey; sig_reg->rkey = mr->rkey; - sig_reg->sge.addr = 0; - sig_reg->sge.length = scsi_transfer_length(iser_task->sc); + sig_reg->sge.addr = mr->iova; + sig_reg->sge.length = mr->length; iser_dbg("lkey=0x%x rkey=0x%x addr=0x%llx length=%u\n", sig_reg->sge.lkey, sig_reg->rkey, sig_reg->sge.addr, @@ -478,21 +480,6 @@ static int iser_fast_reg_mr(struct iscsi_iser_task *iser_task, return 0; } -static int -iser_reg_prot_sg(struct iscsi_iser_task *task, - struct iser_data_buf *mem, - struct iser_fr_desc *desc, - bool use_dma_key, - struct iser_mem_reg *reg) -{ - struct iser_device *device = task->iser_conn->ib_conn.device; - - if (use_dma_key) - return iser_reg_dma(device, mem, reg); - - return device->reg_ops->reg_mem(task, mem, &desc->pi_ctx->rsc, reg); -} - static int iser_reg_data_sg(struct iscsi_iser_task *task, struct iser_data_buf *mem, @@ -516,7 +503,6 @@ int iser_reg_rdma_mem(struct iscsi_iser_task *task, struct iser_device *device = ib_conn->device; struct iser_data_buf *mem = &task->data[dir]; struct iser_mem_reg *reg = &task->rdma_reg[dir]; - struct iser_mem_reg *data_reg; struct iser_fr_desc *desc = NULL; bool use_dma_key; int err; @@ -530,32 +516,17 @@ int iser_reg_rdma_mem(struct iscsi_iser_task *task, reg->mem_h = desc; } - if (scsi_get_prot_op(task->sc) == SCSI_PROT_NORMAL) - data_reg = reg; - else - data_reg = &task->desc.data_reg; - - err = iser_reg_data_sg(task, mem, desc, use_dma_key, data_reg); - if (unlikely(err)) - goto err_reg; - - if (scsi_get_prot_op(task->sc) != SCSI_PROT_NORMAL) { - struct iser_mem_reg *prot_reg = &task->desc.prot_reg; - - if (scsi_prot_sg_count(task->sc)) { - mem = &task->prot[dir]; - err = iser_reg_prot_sg(task, mem, desc, - use_dma_key, prot_reg); - if (unlikely(err)) - goto err_reg; - } - - err = iser_reg_sig_mr(task, desc->pi_ctx, data_reg, - prot_reg, reg); + if (scsi_get_prot_op(task->sc) == SCSI_PROT_NORMAL) { + err = iser_reg_data_sg(task, mem, desc, use_dma_key, reg); + if (unlikely(err)) + goto err_reg; + } else { + err = iser_reg_sig_mr(task, mem, &task->prot[dir], + &desc->rsc, reg); if (unlikely(err)) goto err_reg; - desc->pi_ctx->sig_protected = 1; + desc->sig_protected = 1; } return 0; diff --git a/drivers/infiniband/ulp/iser/iser_verbs.c b/drivers/infiniband/ulp/iser/iser_verbs.c index 4ff3d98fa6a4..f7502c131b08 100644 --- a/drivers/infiniband/ulp/iser/iser_verbs.c +++ b/drivers/infiniband/ulp/iser/iser_verbs.c @@ -237,7 +237,8 @@ static int iser_alloc_reg_res(struct iser_device *device, struct ib_pd *pd, struct iser_reg_resources *res, - unsigned int size) + unsigned int size, + bool pi_enable) { struct ib_device *ib_dev = device->ib_device; enum ib_mr_type mr_type; @@ -254,62 +255,32 @@ iser_alloc_reg_res(struct iser_device *device, iser_err("Failed to allocate ib_fast_reg_mr err=%d\n", ret); return ret; } - res->mr_valid = 0; - 
- return 0; -} - -static void -iser_free_reg_res(struct iser_reg_resources *rsc) -{ - ib_dereg_mr(rsc->mr); -} - -static int -iser_alloc_pi_ctx(struct iser_device *device, - struct ib_pd *pd, - struct iser_fr_desc *desc, - unsigned int size) -{ - struct iser_pi_context *pi_ctx = NULL; - int ret; - - desc->pi_ctx = kzalloc(sizeof(*desc->pi_ctx), GFP_KERNEL); - if (!desc->pi_ctx) - return -ENOMEM; - - pi_ctx = desc->pi_ctx; - ret = iser_alloc_reg_res(device, pd, &pi_ctx->rsc, size); - if (ret) { - iser_err("failed to allocate reg_resources\n"); - goto alloc_reg_res_err; - } - - pi_ctx->sig_mr = ib_alloc_mr(pd, IB_MR_TYPE_SIGNATURE, 2); - if (IS_ERR(pi_ctx->sig_mr)) { - ret = PTR_ERR(pi_ctx->sig_mr); - goto sig_mr_failure; + if (pi_enable) { + res->sig_mr = ib_alloc_mr_integrity(pd, size, size); + if (IS_ERR(res->sig_mr)) { + ret = PTR_ERR(res->sig_mr); + iser_err("Failed to allocate sig_mr err=%d\n", ret); + goto err; + } } - pi_ctx->sig_mr_valid = 0; - desc->pi_ctx->sig_protected = 0; + res->mr_valid = 0; return 0; -sig_mr_failure: - iser_free_reg_res(&pi_ctx->rsc); -alloc_reg_res_err: - kfree(desc->pi_ctx); - +err: + ib_dereg_mr(res->mr); return ret; } static void -iser_free_pi_ctx(struct iser_pi_context *pi_ctx) +iser_free_reg_res(struct iser_reg_resources *res) { - iser_free_reg_res(&pi_ctx->rsc); - ib_dereg_mr(pi_ctx->sig_mr); - kfree(pi_ctx); + ib_dereg_mr(res->mr); + if (res->sig_mr) { + ib_dereg_mr(res->sig_mr); + res->sig_mr = NULL; + } } static struct iser_fr_desc * @@ -325,20 +296,12 @@ iser_create_fastreg_desc(struct iser_device *device, if (!desc) return ERR_PTR(-ENOMEM); - ret = iser_alloc_reg_res(device, pd, &desc->rsc, size); + ret = iser_alloc_reg_res(device, pd, &desc->rsc, size, pi_enable); if (ret) goto reg_res_alloc_failure; - if (pi_enable) { - ret = iser_alloc_pi_ctx(device, pd, desc, size); - if (ret) - goto pi_ctx_alloc_failure; - } - return desc; -pi_ctx_alloc_failure: - iser_free_reg_res(&desc->rsc); reg_res_alloc_failure: kfree(desc); @@ -400,8 +363,6 @@ void iser_free_fastreg_pool(struct ib_conn *ib_conn) list_for_each_entry_safe(desc, tmp, &fr_pool->all_list, all_list) { list_del(&desc->all_list); iser_free_reg_res(&desc->rsc); - if (desc->pi_ctx) - iser_free_pi_ctx(desc->pi_ctx); kfree(desc); ++i; } @@ -707,6 +668,7 @@ iser_calc_scsi_params(struct iser_conn *iser_conn, struct ib_device_attr *attr = &device->ib_device->attrs; unsigned short sg_tablesize, sup_sg_tablesize; unsigned short reserved_mr_pages; + u32 max_num_sg; /* * FRs without SG_GAPS or FMRs can only map up to a (device) page per @@ -720,12 +682,17 @@ iser_calc_scsi_params(struct iser_conn *iser_conn, else reserved_mr_pages = 1; + if (iser_conn->ib_conn.pi_support) + max_num_sg = attr->max_pi_fast_reg_page_list_len; + else + max_num_sg = attr->max_fast_reg_page_list_len; + sg_tablesize = DIV_ROUND_UP(max_sectors * 512, SIZE_4K); if (attr->device_cap_flags & IB_DEVICE_MEM_MGT_EXTENSIONS) sup_sg_tablesize = min_t( uint, ISCSI_ISER_MAX_SG_TABLESIZE, - attr->max_fast_reg_page_list_len - reserved_mr_pages); + max_num_sg - reserved_mr_pages); else sup_sg_tablesize = ISCSI_ISER_MAX_SG_TABLESIZE; @@ -1118,9 +1085,9 @@ u8 iser_check_task_pi_status(struct iscsi_iser_task *iser_task, struct ib_mr_status mr_status; int ret; - if (desc && desc->pi_ctx->sig_protected) { - desc->pi_ctx->sig_protected = 0; - ret = ib_check_mr_status(desc->pi_ctx->sig_mr, + if (desc && desc->sig_protected) { + desc->sig_protected = 0; + ret = ib_check_mr_status(desc->rsc.sig_mr, IB_MR_CHECK_SIG_STATUS, &mr_status); if (ret) { 
pr_err("ib_check_mr_status failed, ret %d\n", ret); From patchwork Thu Feb 7 17:33:30 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Max Gurtovoy X-Patchwork-Id: 10801619 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id F32AA1575 for ; Thu, 7 Feb 2019 17:33:41 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DF1192E358 for ; Thu, 7 Feb 2019 17:33:41 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id D388C2E37D; Thu, 7 Feb 2019 17:33:41 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI,UNPARSEABLE_RELAY autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 771A32E38C for ; Thu, 7 Feb 2019 17:33:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726915AbfBGRdk (ORCPT ); Thu, 7 Feb 2019 12:33:40 -0500 Received: from mail-il-dmz.mellanox.com ([193.47.165.129]:49207 "EHLO mellanox.co.il" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726401AbfBGRdi (ORCPT ); Thu, 7 Feb 2019 12:33:38 -0500 Received: from Internal Mail-Server by MTLPINE1 (envelope-from maxg@mellanox.com) with ESMTPS (AES256-SHA encrypted); 7 Feb 2019 19:33:34 +0200 Received: from r-vnc08.mtr.labs.mlnx (r-vnc08.mtr.labs.mlnx [10.208.0.121]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id x17HXVKj025778; Thu, 7 Feb 2019 19:33:34 +0200 From: Max Gurtovoy To: linux-rdma@vger.kernel.org, sagi@grimberg.me, hch@lst.de, leon@kernel.org, jgg@mellanox.com, dledford@redhat.com, bvanassche@acm.org Cc: maxg@mellanox.com, israelr@mellanox.com, idanb@mellanox.com, vladimirk@mellanox.com, shlomin@mellanox.com, oren@mellanox.com Subject: [PATCH 16/17] IB/iser: Remove unused sig_attrs argument Date: Thu, 7 Feb 2019 19:33:30 +0200 Message-Id: <1549560811-8655-17-git-send-email-maxg@mellanox.com> X-Mailer: git-send-email 1.7.8.2 In-Reply-To: <1549560811-8655-1-git-send-email-maxg@mellanox.com> References: <1549560811-8655-1-git-send-email-maxg@mellanox.com> Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Israel Rukshin Signed-off-by: Israel Rukshin Reviewed-by: Max Gurtovoy --- drivers/infiniband/ulp/iser/iser_memory.c | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c index 1e8473e6e375..57440354f262 100644 --- a/drivers/infiniband/ulp/iser/iser_memory.c +++ b/drivers/infiniband/ulp/iser/iser_memory.c @@ -303,8 +303,7 @@ void iser_unreg_mem_fastreg(struct iscsi_iser_task *iser_task, } static void -iser_set_dif_domain(struct scsi_cmnd *sc, struct ib_sig_attrs *sig_attrs, - struct ib_sig_domain *domain) +iser_set_dif_domain(struct scsi_cmnd *sc, struct ib_sig_domain *domain) { domain->sig_type = IB_SIG_TYPE_T10_DIF; domain->sig.dif.pi_interval = scsi_prot_interval(sc); @@ -327,21 +326,21 @@ iser_set_sig_attrs(struct scsi_cmnd *sc, struct ib_sig_attrs *sig_attrs) case SCSI_PROT_WRITE_INSERT: case SCSI_PROT_READ_STRIP: 
sig_attrs->mem.sig_type = IB_SIG_TYPE_NONE; - iser_set_dif_domain(sc, sig_attrs, &sig_attrs->wire); + iser_set_dif_domain(sc, &sig_attrs->wire); sig_attrs->wire.sig.dif.bg_type = IB_T10DIF_CRC; break; case SCSI_PROT_READ_INSERT: case SCSI_PROT_WRITE_STRIP: sig_attrs->wire.sig_type = IB_SIG_TYPE_NONE; - iser_set_dif_domain(sc, sig_attrs, &sig_attrs->mem); + iser_set_dif_domain(sc, &sig_attrs->mem); sig_attrs->mem.sig.dif.bg_type = sc->prot_flags & SCSI_PROT_IP_CHECKSUM ? IB_T10DIF_CSUM : IB_T10DIF_CRC; break; case SCSI_PROT_READ_PASS: case SCSI_PROT_WRITE_PASS: - iser_set_dif_domain(sc, sig_attrs, &sig_attrs->wire); + iser_set_dif_domain(sc, &sig_attrs->wire); sig_attrs->wire.sig.dif.bg_type = IB_T10DIF_CRC; - iser_set_dif_domain(sc, sig_attrs, &sig_attrs->mem); + iser_set_dif_domain(sc, &sig_attrs->mem); sig_attrs->mem.sig.dif.bg_type = sc->prot_flags & SCSI_PROT_IP_CHECKSUM ? IB_T10DIF_CSUM : IB_T10DIF_CRC; break;

From patchwork Thu Feb 7 17:33:31 2019 X-Patchwork-Submitter: Max Gurtovoy X-Patchwork-Id: 10801635 From: Max Gurtovoy To: linux-rdma@vger.kernel.org, sagi@grimberg.me, hch@lst.de, leon@kernel.org, jgg@mellanox.com, dledford@redhat.com, bvanassche@acm.org Cc: maxg@mellanox.com, israelr@mellanox.com, idanb@mellanox.com, vladimirk@mellanox.com, shlomin@mellanox.com, oren@mellanox.com Subject: [PATCH 17/17] IB/isert: Remove unused sig_attrs argument Date: Thu, 7 Feb 2019 19:33:31 +0200 Message-Id: <1549560811-8655-18-git-send-email-maxg@mellanox.com> X-Mailer: git-send-email 1.7.8.2 In-Reply-To: <1549560811-8655-1-git-send-email-maxg@mellanox.com> References: <1549560811-8655-1-git-send-email-maxg@mellanox.com>

From: Israel Rukshin

Signed-off-by: Israel Rukshin Reviewed-by: Max Gurtovoy --- drivers/infiniband/ulp/isert/ib_isert.c | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c index e3dd13798d79..fdc720b2c5cf 100644 --- a/drivers/infiniband/ulp/isert/ib_isert.c +++ b/drivers/infiniband/ulp/isert/ib_isert.c @@ -2067,8 +2067,7 @@ isert_put_text_rsp(struct iscsi_cmd *cmd, struct iscsi_conn *conn) } static inline void -isert_set_dif_domain(struct se_cmd *se_cmd, struct ib_sig_attrs *sig_attrs, - struct ib_sig_domain *domain) +isert_set_dif_domain(struct se_cmd *se_cmd, struct ib_sig_domain *domain) { domain->sig_type = IB_SIG_TYPE_T10_DIF; domain->sig.dif.bg_type = IB_T10DIF_CRC; @@ -2096,17 +2095,17 @@ isert_set_sig_attrs(struct se_cmd *se_cmd, struct ib_sig_attrs *sig_attrs) case TARGET_PROT_DIN_INSERT: case TARGET_PROT_DOUT_STRIP: sig_attrs->mem.sig_type = IB_SIG_TYPE_NONE; - isert_set_dif_domain(se_cmd, sig_attrs, &sig_attrs->wire); + isert_set_dif_domain(se_cmd, &sig_attrs->wire); break; case TARGET_PROT_DOUT_INSERT: case TARGET_PROT_DIN_STRIP: sig_attrs->wire.sig_type = IB_SIG_TYPE_NONE; - isert_set_dif_domain(se_cmd, sig_attrs, &sig_attrs->mem); + isert_set_dif_domain(se_cmd, &sig_attrs->mem); break; case TARGET_PROT_DIN_PASS: case TARGET_PROT_DOUT_PASS: - isert_set_dif_domain(se_cmd, sig_attrs, &sig_attrs->wire); + isert_set_dif_domain(se_cmd, &sig_attrs->wire); - isert_set_dif_domain(se_cmd, sig_attrs, &sig_attrs->mem); + isert_set_dif_domain(se_cmd, &sig_attrs->mem); break; default: isert_err("Unsupported PI operation %d\n", se_cmd->prot_op);