From patchwork Mon Apr 5 05:23:55 2021
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Subject: [PATCH rdma-next 01/10] RDMA: Add access flags to ib_alloc_mr() and ib_mr_pool_init()
Date: Mon, 5 Apr 2021 08:23:55 +0300
Message-Id: <20210405052404.213889-2-leon@kernel.org>
In-Reply-To: <20210405052404.213889-1-leon@kernel.org>

From: Avihai Horon

Add access flags parameter to ib_alloc_mr() and to ib_mr_pool_init(), and refactor relevant code. This parameter is used to pass MR access flags during MR allocation.
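For reference, the updated prototypes and a hypothetical caller look roughly like the sketch below. The helper, and the pd/qp/nr_mrs variables, are placeholders invented for illustration, not code from the patch; note that at this point in the series ib_alloc_mr() rejects any flag other than IB_ACCESS_RELAXED_ORDERING with -EINVAL.

    #include <rdma/ib_verbs.h>
    #include <rdma/mr_pool.h>

    /* Hypothetical helper, only to show the new trailing "access" argument. */
    static int example_alloc(struct ib_pd *pd, struct ib_qp *qp,
                             u32 max_num_sg, int nr_mrs)
    {
            struct ib_mr *mr;
            int ret;

            /* Pass 0 to keep the old behaviour, or IB_ACCESS_RELAXED_ORDERING
             * to opt in once the flag is honoured by the provider. */
            mr = ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG, max_num_sg, 0);
            if (IS_ERR(mr))
                    return PTR_ERR(mr);

            /* ib_mr_pool_init() gains the same trailing access argument. */
            ret = ib_mr_pool_init(qp, &qp->rdma_mrs, nr_mrs, IB_MR_TYPE_MEM_REG,
                                  max_num_sg, 0, IB_ACCESS_RELAXED_ORDERING);
            if (ret)
                    ib_dereg_mr(mr);
            return ret;
    }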
In the following patches, the new access flags parameter will be used to enable Relaxed Ordering for ib_alloc_mr() and ib_mr_pool_init() users. Signed-off-by: Avihai Horon Reviewed-by: Max Gurtovoy Reviewed-by: Michael Guralnik Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/mr_pool.c | 7 +- drivers/infiniband/core/rw.c | 12 ++-- drivers/infiniband/core/verbs.c | 23 +++++-- drivers/infiniband/hw/bnxt_re/ib_verbs.c | 2 +- drivers/infiniband/hw/bnxt_re/ib_verbs.h | 2 +- drivers/infiniband/hw/cxgb4/iw_cxgb4.h | 2 +- drivers/infiniband/hw/cxgb4/mem.c | 2 +- drivers/infiniband/hw/hns/hns_roce_device.h | 2 +- drivers/infiniband/hw/hns/hns_roce_mr.c | 2 +- drivers/infiniband/hw/i40iw/i40iw_verbs.c | 3 +- drivers/infiniband/hw/mlx4/mlx4_ib.h | 2 +- drivers/infiniband/hw/mlx4/mr.c | 2 +- drivers/infiniband/hw/mlx5/mlx5_ib.h | 12 ++-- drivers/infiniband/hw/mlx5/mr.c | 61 ++++++++-------- drivers/infiniband/hw/mlx5/wr.c | 69 ++++++++++++++----- drivers/infiniband/hw/ocrdma/ocrdma_verbs.c | 2 +- drivers/infiniband/hw/ocrdma/ocrdma_verbs.h | 2 +- drivers/infiniband/hw/qedr/verbs.c | 2 +- drivers/infiniband/hw/qedr/verbs.h | 2 +- drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c | 3 +- .../infiniband/hw/vmw_pvrdma/pvrdma_verbs.h | 2 +- drivers/infiniband/sw/rdmavt/mr.c | 3 +- drivers/infiniband/sw/rdmavt/mr.h | 2 +- drivers/infiniband/sw/rxe/rxe_verbs.c | 2 +- drivers/infiniband/sw/siw/siw_verbs.c | 2 +- drivers/infiniband/sw/siw/siw_verbs.h | 2 +- drivers/infiniband/ulp/iser/iser_verbs.c | 4 +- drivers/infiniband/ulp/rtrs/rtrs-clt.c | 2 +- drivers/infiniband/ulp/rtrs/rtrs-srv.c | 2 +- drivers/infiniband/ulp/srp/ib_srp.c | 2 +- drivers/nvme/host/rdma.c | 4 +- fs/cifs/smbdirect.c | 7 +- include/rdma/ib_verbs.h | 11 ++- include/rdma/mr_pool.h | 3 +- net/rds/ib_frmr.c | 2 +- net/smc/smc_ib.c | 2 +- net/sunrpc/xprtrdma/frwr_ops.c | 2 +- 37 files changed, 163 insertions(+), 105 deletions(-) diff --git a/drivers/infiniband/core/mr_pool.c b/drivers/infiniband/core/mr_pool.c index c0e2df128b34..b869c3487475 100644 --- a/drivers/infiniband/core/mr_pool.c +++ b/drivers/infiniband/core/mr_pool.c @@ -34,7 +34,8 @@ void ib_mr_pool_put(struct ib_qp *qp, struct list_head *list, struct ib_mr *mr) EXPORT_SYMBOL(ib_mr_pool_put); int ib_mr_pool_init(struct ib_qp *qp, struct list_head *list, int nr, - enum ib_mr_type type, u32 max_num_sg, u32 max_num_meta_sg) + enum ib_mr_type type, u32 max_num_sg, u32 max_num_meta_sg, + u32 access) { struct ib_mr *mr; unsigned long flags; @@ -43,9 +44,9 @@ int ib_mr_pool_init(struct ib_qp *qp, struct list_head *list, int nr, for (i = 0; i < nr; i++) { if (type == IB_MR_TYPE_INTEGRITY) mr = ib_alloc_mr_integrity(qp->pd, max_num_sg, - max_num_meta_sg); + max_num_meta_sg, access); else - mr = ib_alloc_mr(qp->pd, type, max_num_sg); + mr = ib_alloc_mr(qp->pd, type, max_num_sg, access); if (IS_ERR(mr)) { ret = PTR_ERR(mr); goto out; diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c index a588c2038479..d5a0038e82a4 100644 --- a/drivers/infiniband/core/rw.c +++ b/drivers/infiniband/core/rw.c @@ -110,7 +110,7 @@ static int rdma_rw_init_one_mr(struct ib_qp *qp, u32 port_num, reg->reg_wr.wr.opcode = IB_WR_REG_MR; reg->reg_wr.mr = reg->mr; - reg->reg_wr.access = IB_ACCESS_LOCAL_WRITE; + reg->reg_wr.access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_RELAXED_ORDERING; if (rdma_protocol_iwarp(qp->device, port_num)) reg->reg_wr.access |= IB_ACCESS_REMOTE_WRITE; count++; @@ -437,7 +437,8 @@ int rdma_rw_ctx_signature_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, 
ctx->reg->reg_wr.wr.wr_cqe = NULL; ctx->reg->reg_wr.wr.num_sge = 0; ctx->reg->reg_wr.wr.send_flags = 0; - ctx->reg->reg_wr.access = IB_ACCESS_LOCAL_WRITE; + ctx->reg->reg_wr.access = + IB_ACCESS_LOCAL_WRITE | IB_ACCESS_RELAXED_ORDERING; if (rdma_protocol_iwarp(qp->device, port_num)) ctx->reg->reg_wr.access |= IB_ACCESS_REMOTE_WRITE; ctx->reg->reg_wr.mr = ctx->reg->mr; @@ -711,8 +712,8 @@ int rdma_rw_init_mrs(struct ib_qp *qp, struct ib_qp_init_attr *attr) if (nr_mrs) { ret = ib_mr_pool_init(qp, &qp->rdma_mrs, nr_mrs, - IB_MR_TYPE_MEM_REG, - max_num_sg, 0); + IB_MR_TYPE_MEM_REG, max_num_sg, 0, + IB_ACCESS_RELAXED_ORDERING); if (ret) { pr_err("%s: failed to allocated %d MRs\n", __func__, nr_mrs); @@ -722,7 +723,8 @@ int rdma_rw_init_mrs(struct ib_qp *qp, struct ib_qp_init_attr *attr) if (nr_sig_mrs) { ret = ib_mr_pool_init(qp, &qp->sig_mrs, nr_sig_mrs, - IB_MR_TYPE_INTEGRITY, max_num_sg, max_num_sg); + IB_MR_TYPE_INTEGRITY, max_num_sg, + max_num_sg, IB_ACCESS_RELAXED_ORDERING); if (ret) { pr_err("%s: failed to allocated %d SIG MRs\n", __func__, nr_sig_mrs); diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index c576e2bc39c6..a1782f8a6ca0 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -2136,6 +2136,7 @@ EXPORT_SYMBOL(ib_dereg_mr_user); * @pd: protection domain associated with the region * @mr_type: memory region type * @max_num_sg: maximum sg entries available for registration. + * @access: access flags for the memory region. * * Notes: * Memory registeration page/sg lists must not exceed max_num_sg. @@ -2144,7 +2145,7 @@ EXPORT_SYMBOL(ib_dereg_mr_user); * */ struct ib_mr *ib_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg) + u32 max_num_sg, u32 access) { struct ib_mr *mr; @@ -2159,7 +2160,12 @@ struct ib_mr *ib_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, goto out; } - mr = pd->device->ops.alloc_mr(pd, mr_type, max_num_sg); + if (access & ~IB_ACCESS_RELAXED_ORDERING) { + mr = ERR_PTR(-EINVAL); + goto out; + } + + mr = pd->device->ops.alloc_mr(pd, mr_type, max_num_sg, access); if (IS_ERR(mr)) goto out; @@ -2187,15 +2193,15 @@ EXPORT_SYMBOL(ib_alloc_mr); * @max_num_data_sg: maximum data sg entries available for registration * @max_num_meta_sg: maximum metadata sg entries available for * registration + * @access: access flags for the memory region. * * Notes: * Memory registration page/sg lists must not exceed max_num_sg, * also the integrity page/sg lists must not exceed max_num_meta_sg. 
* */ -struct ib_mr *ib_alloc_mr_integrity(struct ib_pd *pd, - u32 max_num_data_sg, - u32 max_num_meta_sg) +struct ib_mr *ib_alloc_mr_integrity(struct ib_pd *pd, u32 max_num_data_sg, + u32 max_num_meta_sg, u32 access) { struct ib_mr *mr; struct ib_sig_attrs *sig_attrs; @@ -2211,6 +2217,11 @@ struct ib_mr *ib_alloc_mr_integrity(struct ib_pd *pd, goto out; } + if (access & ~IB_ACCESS_RELAXED_ORDERING) { + mr = ERR_PTR(-EINVAL); + goto out; + } + sig_attrs = kzalloc(sizeof(struct ib_sig_attrs), GFP_KERNEL); if (!sig_attrs) { mr = ERR_PTR(-ENOMEM); @@ -2218,7 +2229,7 @@ struct ib_mr *ib_alloc_mr_integrity(struct ib_pd *pd, } mr = pd->device->ops.alloc_mr_integrity(pd, max_num_data_sg, - max_num_meta_sg); + max_num_meta_sg, access); if (IS_ERR(mr)) { kfree(sig_attrs); goto out; diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c index 2efaa80bfbd2..116febdf999b 100644 --- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c +++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c @@ -3672,7 +3672,7 @@ int bnxt_re_map_mr_sg(struct ib_mr *ib_mr, struct scatterlist *sg, int sg_nents, } struct ib_mr *bnxt_re_alloc_mr(struct ib_pd *ib_pd, enum ib_mr_type type, - u32 max_num_sg) + u32 max_num_sg, u32 access) { struct bnxt_re_pd *pd = container_of(ib_pd, struct bnxt_re_pd, ib_pd); struct bnxt_re_dev *rdev = pd->rdev; diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.h b/drivers/infiniband/hw/bnxt_re/ib_verbs.h index d68671cc6173..3e8342a6f367 100644 --- a/drivers/infiniband/hw/bnxt_re/ib_verbs.h +++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.h @@ -201,7 +201,7 @@ struct ib_mr *bnxt_re_get_dma_mr(struct ib_pd *pd, int mr_access_flags); int bnxt_re_map_mr_sg(struct ib_mr *ib_mr, struct scatterlist *sg, int sg_nents, unsigned int *sg_offset); struct ib_mr *bnxt_re_alloc_mr(struct ib_pd *ib_pd, enum ib_mr_type mr_type, - u32 max_num_sg); + u32 max_num_sg, u32 access); int bnxt_re_dereg_mr(struct ib_mr *mr, struct ib_udata *udata); struct ib_mw *bnxt_re_alloc_mw(struct ib_pd *ib_pd, enum ib_mw_type type, struct ib_udata *udata); diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h index cdec5deb37a1..4520c53aa1f6 100644 --- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h +++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h @@ -969,7 +969,7 @@ int c4iw_reject_cr(struct iw_cm_id *cm_id, const void *pdata, u8 pdata_len); void c4iw_qp_add_ref(struct ib_qp *qp); void c4iw_qp_rem_ref(struct ib_qp *qp); struct ib_mr *c4iw_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg); + u32 max_num_sg, u32 access); int c4iw_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, unsigned int *sg_offset); void c4iw_dealloc(struct uld_ctx *ctx); diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c index a2c71a1d93d5..c8ed4c56925d 100644 --- a/drivers/infiniband/hw/cxgb4/mem.c +++ b/drivers/infiniband/hw/cxgb4/mem.c @@ -596,7 +596,7 @@ struct ib_mr *c4iw_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, } struct ib_mr *c4iw_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg) + u32 max_num_sg, u32 access) { struct c4iw_dev *rhp; struct c4iw_pd *php; diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h index 55cbbd524057..3e2aed7e8329 100644 --- a/drivers/infiniband/hw/hns/hns_roce_device.h +++ b/drivers/infiniband/hw/hns/hns_roce_device.h @@ -1205,7 +1205,7 @@ struct ib_mr *hns_roce_rereg_user_mr(struct ib_mr *mr, int flags, u64 start, int 
mr_access_flags, struct ib_pd *pd, struct ib_udata *udata); struct ib_mr *hns_roce_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg); + u32 max_num_sg, u32 access); int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, unsigned int *sg_offset); int hns_roce_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata); diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c index 79b3c3023fe7..c16638ad66f4 100644 --- a/drivers/infiniband/hw/hns/hns_roce_mr.c +++ b/drivers/infiniband/hw/hns/hns_roce_mr.c @@ -381,7 +381,7 @@ int hns_roce_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) } struct ib_mr *hns_roce_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg) + u32 max_num_sg, u32 access) { struct hns_roce_dev *hr_dev = to_hr_dev(pd->device); struct device *dev = hr_dev->dev; diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c index b876d722fcc8..827dbca3ddf3 100644 --- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c +++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c @@ -1451,9 +1451,10 @@ static int i40iw_hw_alloc_stag(struct i40iw_device *iwdev, struct i40iw_mr *iwmr * @pd: ibpd pointer * @mr_type: memory for stag registrion * @max_num_sg: man number of pages + * @access: access flags of memory region */ static struct ib_mr *i40iw_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg) + u32 max_num_sg, u32 access) { struct i40iw_pd *iwpd = to_iwpd(pd); struct i40iw_device *iwdev = to_iwdev(pd->device); diff --git a/drivers/infiniband/hw/mlx4/mlx4_ib.h b/drivers/infiniband/hw/mlx4/mlx4_ib.h index e856cf23a0a1..0c99dd57de3f 100644 --- a/drivers/infiniband/hw/mlx4/mlx4_ib.h +++ b/drivers/infiniband/hw/mlx4/mlx4_ib.h @@ -759,7 +759,7 @@ int mlx4_ib_dereg_mr(struct ib_mr *mr, struct ib_udata *udata); int mlx4_ib_alloc_mw(struct ib_mw *mw, struct ib_udata *udata); int mlx4_ib_dealloc_mw(struct ib_mw *mw); struct ib_mr *mlx4_ib_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg); + u32 max_num_sg, u32 access); int mlx4_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, unsigned int *sg_offset); int mlx4_ib_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period); diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c index 50becc0e4b62..5a6fc7d7a89f 100644 --- a/drivers/infiniband/hw/mlx4/mr.c +++ b/drivers/infiniband/hw/mlx4/mr.c @@ -643,7 +643,7 @@ int mlx4_ib_dealloc_mw(struct ib_mw *ibmw) } struct ib_mr *mlx4_ib_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg) + u32 max_num_sg, u32 access) { struct mlx4_ib_dev *dev = to_mdev(pd->device); struct mlx4_ib_mr *mr; diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index 69ecd0229322..0a8b33244fdd 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -670,6 +670,9 @@ struct mlx5_ib_mr { struct mlx5_cache_ent *cache_ent; struct ib_umem *umem; + /* Current access_flags */ + int access_flags; + /* This is zero'd when the MR is allocated */ union { /* Used only while the MR is in the cache */ @@ -705,8 +708,6 @@ struct mlx5_ib_mr { /* Used only by User MRs (umem != NULL) */ struct { unsigned int page_shift; - /* Current access_flags */ - int access_flags; /* For User ODP */ struct mlx5_ib_mr *parent; @@ -1306,10 +1307,9 @@ struct ib_mr *mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start, struct ib_pd *pd, struct ib_udata 
*udata); int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata); struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg); -struct ib_mr *mlx5_ib_alloc_mr_integrity(struct ib_pd *pd, - u32 max_num_sg, - u32 max_num_meta_sg); + u32 max_num_sg, u32 access); +struct ib_mr *mlx5_ib_alloc_mr_integrity(struct ib_pd *pd, u32 max_num_sg, + u32 max_num_meta_sg, u32 access); int mlx5_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, unsigned int *sg_offset); int mlx5_ib_map_mr_sg_pi(struct ib_mr *ibmr, struct scatterlist *data_sg, diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index 552fecd210c2..9ba7d5d6c668 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -2015,14 +2015,14 @@ int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) } static void mlx5_set_umr_free_mkey(struct ib_pd *pd, u32 *in, int ndescs, - int access_mode, int page_shift) + int access_mode, u32 access, int page_shift) { void *mkc; mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry); /* This is only used from the kernel, so setting the PD is OK. */ - set_mkc_access_pd_addr_fields(mkc, 0, 0, pd); + set_mkc_access_pd_addr_fields(mkc, access, 0, pd); MLX5_SET(mkc, mkc, free, 1); MLX5_SET(mkc, mkc, translations_octword_size, ndescs); MLX5_SET(mkc, mkc, access_mode_1_0, access_mode & 0x3); @@ -2033,7 +2033,8 @@ static void mlx5_set_umr_free_mkey(struct ib_pd *pd, u32 *in, int ndescs, static int _mlx5_alloc_mkey_descs(struct ib_pd *pd, struct mlx5_ib_mr *mr, int ndescs, int desc_size, int page_shift, - int access_mode, u32 *in, int inlen) + int access_mode, u32 access, u32 *in, + int inlen) { struct mlx5_ib_dev *dev = to_mdev(pd->device); int err; @@ -2046,7 +2047,7 @@ static int _mlx5_alloc_mkey_descs(struct ib_pd *pd, struct mlx5_ib_mr *mr, if (err) return err; - mlx5_set_umr_free_mkey(pd, in, ndescs, access_mode, page_shift); + mlx5_set_umr_free_mkey(pd, in, ndescs, access_mode, access, page_shift); err = mlx5_ib_create_mkey(dev, &mr->mmkey, in, inlen); if (err) @@ -2055,6 +2056,7 @@ static int _mlx5_alloc_mkey_descs(struct ib_pd *pd, struct mlx5_ib_mr *mr, mr->mmkey.type = MLX5_MKEY_MR; mr->ibmr.lkey = mr->mmkey.key; mr->ibmr.rkey = mr->mmkey.key; + mr->access_flags = access; return 0; @@ -2063,9 +2065,10 @@ static int _mlx5_alloc_mkey_descs(struct ib_pd *pd, struct mlx5_ib_mr *mr, return err; } -static struct mlx5_ib_mr *mlx5_ib_alloc_pi_mr(struct ib_pd *pd, - u32 max_num_sg, u32 max_num_meta_sg, - int desc_size, int access_mode) +static struct mlx5_ib_mr *mlx5_ib_alloc_pi_mr(struct ib_pd *pd, u32 max_num_sg, + u32 max_num_meta_sg, + int desc_size, int access_mode, + u32 access) { int inlen = MLX5_ST_SZ_BYTES(create_mkey_in); int ndescs = ALIGN(max_num_sg + max_num_meta_sg, 4); @@ -2091,7 +2094,7 @@ static struct mlx5_ib_mr *mlx5_ib_alloc_pi_mr(struct ib_pd *pd, page_shift = PAGE_SHIFT; err = _mlx5_alloc_mkey_descs(pd, mr, ndescs, desc_size, page_shift, - access_mode, in, inlen); + access_mode, access, in, inlen); if (err) goto err_free_in; @@ -2108,23 +2111,24 @@ static struct mlx5_ib_mr *mlx5_ib_alloc_pi_mr(struct ib_pd *pd, } static int mlx5_alloc_mem_reg_descs(struct ib_pd *pd, struct mlx5_ib_mr *mr, - int ndescs, u32 *in, int inlen) + int ndescs, u32 access, u32 *in, int inlen) { return _mlx5_alloc_mkey_descs(pd, mr, ndescs, sizeof(struct mlx5_mtt), - PAGE_SHIFT, MLX5_MKC_ACCESS_MODE_MTT, in, - inlen); + PAGE_SHIFT, MLX5_MKC_ACCESS_MODE_MTT, + access, in, inlen); } static int 
mlx5_alloc_sg_gaps_descs(struct ib_pd *pd, struct mlx5_ib_mr *mr, - int ndescs, u32 *in, int inlen) + int ndescs, u32 access, u32 *in, int inlen) { return _mlx5_alloc_mkey_descs(pd, mr, ndescs, sizeof(struct mlx5_klm), - 0, MLX5_MKC_ACCESS_MODE_KLMS, in, inlen); + 0, MLX5_MKC_ACCESS_MODE_KLMS, access, in, + inlen); } static int mlx5_alloc_integrity_descs(struct ib_pd *pd, struct mlx5_ib_mr *mr, int max_num_sg, int max_num_meta_sg, - u32 *in, int inlen) + u32 access, u32 *in, int inlen) { struct mlx5_ib_dev *dev = to_mdev(pd->device); u32 psv_index[2]; @@ -2149,14 +2153,14 @@ static int mlx5_alloc_integrity_descs(struct ib_pd *pd, struct mlx5_ib_mr *mr, ++mr->sig->sigerr_count; mr->klm_mr = mlx5_ib_alloc_pi_mr(pd, max_num_sg, max_num_meta_sg, sizeof(struct mlx5_klm), - MLX5_MKC_ACCESS_MODE_KLMS); + MLX5_MKC_ACCESS_MODE_KLMS, access); if (IS_ERR(mr->klm_mr)) { err = PTR_ERR(mr->klm_mr); goto err_destroy_psv; } mr->mtt_mr = mlx5_ib_alloc_pi_mr(pd, max_num_sg, max_num_meta_sg, sizeof(struct mlx5_mtt), - MLX5_MKC_ACCESS_MODE_MTT); + MLX5_MKC_ACCESS_MODE_MTT, access); if (IS_ERR(mr->mtt_mr)) { err = PTR_ERR(mr->mtt_mr); goto err_free_klm_mr; @@ -2168,7 +2172,8 @@ static int mlx5_alloc_integrity_descs(struct ib_pd *pd, struct mlx5_ib_mr *mr, MLX5_SET(mkc, mkc, bsf_octword_size, MLX5_MKEY_BSF_OCTO_SIZE); err = _mlx5_alloc_mkey_descs(pd, mr, 4, sizeof(struct mlx5_klm), 0, - MLX5_MKC_ACCESS_MODE_KLMS, in, inlen); + MLX5_MKC_ACCESS_MODE_KLMS, access, in, + inlen); if (err) goto err_free_mtt_mr; @@ -2202,7 +2207,7 @@ static int mlx5_alloc_integrity_descs(struct ib_pd *pd, struct mlx5_ib_mr *mr, static struct ib_mr *__mlx5_ib_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, u32 max_num_sg, - u32 max_num_meta_sg) + u32 max_num_meta_sg, u32 access) { struct mlx5_ib_dev *dev = to_mdev(pd->device); int inlen = MLX5_ST_SZ_BYTES(create_mkey_in); @@ -2226,14 +2231,16 @@ static struct ib_mr *__mlx5_ib_alloc_mr(struct ib_pd *pd, switch (mr_type) { case IB_MR_TYPE_MEM_REG: - err = mlx5_alloc_mem_reg_descs(pd, mr, ndescs, in, inlen); + err = mlx5_alloc_mem_reg_descs(pd, mr, ndescs, access, in, + inlen); break; case IB_MR_TYPE_SG_GAPS: - err = mlx5_alloc_sg_gaps_descs(pd, mr, ndescs, in, inlen); + err = mlx5_alloc_sg_gaps_descs(pd, mr, ndescs, access, in, + inlen); break; case IB_MR_TYPE_INTEGRITY: - err = mlx5_alloc_integrity_descs(pd, mr, max_num_sg, - max_num_meta_sg, in, inlen); + err = mlx5_alloc_integrity_descs( + pd, mr, max_num_sg, max_num_meta_sg, access, in, inlen); break; default: mlx5_ib_warn(dev, "Invalid mr type %d\n", mr_type); @@ -2255,16 +2262,16 @@ static struct ib_mr *__mlx5_ib_alloc_mr(struct ib_pd *pd, } struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg) + u32 max_num_sg, u32 access) { - return __mlx5_ib_alloc_mr(pd, mr_type, max_num_sg, 0); + return __mlx5_ib_alloc_mr(pd, mr_type, max_num_sg, 0, access); } -struct ib_mr *mlx5_ib_alloc_mr_integrity(struct ib_pd *pd, - u32 max_num_sg, u32 max_num_meta_sg) +struct ib_mr *mlx5_ib_alloc_mr_integrity(struct ib_pd *pd, u32 max_num_sg, + u32 max_num_meta_sg, u32 access) { return __mlx5_ib_alloc_mr(pd, IB_MR_TYPE_INTEGRITY, max_num_sg, - max_num_meta_sg); + max_num_meta_sg, access); } int mlx5_ib_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata) diff --git a/drivers/infiniband/hw/mlx5/wr.c b/drivers/infiniband/hw/mlx5/wr.c index cf2852cba45c..a1b6d0ff8461 100644 --- a/drivers/infiniband/hw/mlx5/wr.c +++ b/drivers/infiniband/hw/mlx5/wr.c @@ -171,7 +171,8 @@ static u64 get_xlt_octo(u64 bytes) 
MLX5_IB_UMR_OCTOWORD; } -static __be64 frwr_mkey_mask(bool atomic) +static __be64 frwr_mkey_mask(bool atomic, int relaxed_ordering_write, + int relaxed_ordering_read) { u64 result; @@ -190,10 +191,17 @@ static __be64 frwr_mkey_mask(bool atomic) if (atomic) result |= MLX5_MKEY_MASK_A; + if (relaxed_ordering_write) + result |= MLX5_MKEY_MASK_RELAXED_ORDERING_WRITE; + + if (relaxed_ordering_read) + result |= MLX5_MKEY_MASK_RELAXED_ORDERING_READ; + return cpu_to_be64(result); } -static __be64 sig_mkey_mask(void) +static __be64 sig_mkey_mask(int relaxed_ordering_write, + int relaxed_ordering_read) { u64 result; @@ -211,10 +219,17 @@ static __be64 sig_mkey_mask(void) MLX5_MKEY_MASK_FREE | MLX5_MKEY_MASK_BSF_EN; + if (relaxed_ordering_write) + result |= MLX5_MKEY_MASK_RELAXED_ORDERING_WRITE; + + if (relaxed_ordering_read) + result |= MLX5_MKEY_MASK_RELAXED_ORDERING_READ; + return cpu_to_be64(result); } -static void set_reg_umr_seg(struct mlx5_wqe_umr_ctrl_seg *umr, +static void set_reg_umr_seg(struct mlx5_ib_dev *dev, + struct mlx5_wqe_umr_ctrl_seg *umr, struct mlx5_ib_mr *mr, u8 flags, bool atomic) { int size = (mr->ndescs + mr->meta_ndescs) * mr->desc_size; @@ -223,7 +238,9 @@ static void set_reg_umr_seg(struct mlx5_wqe_umr_ctrl_seg *umr, umr->flags = flags; umr->xlt_octowords = cpu_to_be16(get_xlt_octo(size)); - umr->mkey_mask = frwr_mkey_mask(atomic); + umr->mkey_mask = frwr_mkey_mask( + atomic, MLX5_CAP_GEN(dev->mdev, relaxed_ordering_write_umr), + MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read_umr)); } static void set_linv_umr_seg(struct mlx5_wqe_umr_ctrl_seg *umr) @@ -370,9 +387,8 @@ static u8 get_umr_flags(int acc) MLX5_PERM_LOCAL_READ | MLX5_PERM_UMR_EN; } -static void set_reg_mkey_seg(struct mlx5_mkey_seg *seg, - struct mlx5_ib_mr *mr, - u32 key, int access) +static void set_reg_mkey_seg(struct mlx5_ib_dev *dev, struct mlx5_mkey_seg *seg, + struct mlx5_ib_mr *mr, u32 key, int access) { int ndescs = ALIGN(mr->ndescs + mr->meta_ndescs, 8) >> 1; @@ -390,6 +406,13 @@ static void set_reg_mkey_seg(struct mlx5_mkey_seg *seg, seg->start_addr = cpu_to_be64(mr->ibmr.iova); seg->len = cpu_to_be64(mr->ibmr.length); seg->xlt_oct_size = cpu_to_be32(ndescs); + + if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_write_umr) && + (access & IB_ACCESS_RELAXED_ORDERING)) + MLX5_SET(mkc, seg, relaxed_ordering_write, 1); + if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read_umr) && + (access & IB_ACCESS_RELAXED_ORDERING)) + MLX5_SET(mkc, seg, relaxed_ordering_read, 1); } static void set_linv_mkey_seg(struct mlx5_mkey_seg *seg) @@ -746,7 +769,8 @@ static int set_sig_data_segment(const struct ib_send_wr *send_wr, return 0; } -static void set_sig_mkey_segment(struct mlx5_mkey_seg *seg, +static void set_sig_mkey_segment(struct mlx5_ib_dev *dev, + struct mlx5_mkey_seg *seg, struct ib_mr *sig_mr, int access_flags, u32 size, u32 length, u32 pdn) { @@ -762,23 +786,34 @@ static void set_sig_mkey_segment(struct mlx5_mkey_seg *seg, seg->len = cpu_to_be64(length); seg->xlt_oct_size = cpu_to_be32(get_xlt_octo(size)); seg->bsfs_octo_size = cpu_to_be32(MLX5_MKEY_BSF_OCTO_SIZE); + + if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_write_umr) && + (access_flags & IB_ACCESS_RELAXED_ORDERING)) + MLX5_SET(mkc, seg, relaxed_ordering_write, 1); + if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read_umr) && + (access_flags & IB_ACCESS_RELAXED_ORDERING)) + MLX5_SET(mkc, seg, relaxed_ordering_read, 1); } -static void set_sig_umr_segment(struct mlx5_wqe_umr_ctrl_seg *umr, - u32 size) +static void set_sig_umr_segment(struct mlx5_ib_dev *dev, + struct 
mlx5_wqe_umr_ctrl_seg *umr, u32 size) { memset(umr, 0, sizeof(*umr)); umr->flags = MLX5_FLAGS_INLINE | MLX5_FLAGS_CHECK_FREE; umr->xlt_octowords = cpu_to_be16(get_xlt_octo(size)); umr->bsf_octowords = cpu_to_be16(MLX5_MKEY_BSF_OCTO_SIZE); - umr->mkey_mask = sig_mkey_mask(); + umr->mkey_mask = sig_mkey_mask( + MLX5_CAP_GEN(dev->mdev, relaxed_ordering_write_umr), + MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read_umr)); } static int set_pi_umr_wr(const struct ib_send_wr *send_wr, struct mlx5_ib_qp *qp, void **seg, int *size, void **cur_edge) { + struct mlx5_ib_pd *pd = to_mpd(qp->ibqp.pd); + struct mlx5_ib_dev *dev = to_mdev(pd->ibpd.device); const struct ib_reg_wr *wr = reg_wr(send_wr); struct mlx5_ib_mr *sig_mr = to_mmr(wr->mr); struct mlx5_ib_mr *pi_mr = sig_mr->pi_mr; @@ -806,13 +841,13 @@ static int set_pi_umr_wr(const struct ib_send_wr *send_wr, else xlt_size = sizeof(struct mlx5_klm); - set_sig_umr_segment(*seg, xlt_size); + set_sig_umr_segment(dev, *seg, xlt_size); *seg += sizeof(struct mlx5_wqe_umr_ctrl_seg); *size += sizeof(struct mlx5_wqe_umr_ctrl_seg) / 16; handle_post_send_edge(&qp->sq, seg, *size, cur_edge); - set_sig_mkey_segment(*seg, wr->mr, wr->access, xlt_size, region_len, - pdn); + set_sig_mkey_segment(dev, *seg, wr->mr, wr->access, xlt_size, + region_len, pdn); *seg += sizeof(struct mlx5_mkey_seg); *size += sizeof(struct mlx5_mkey_seg) / 16; handle_post_send_edge(&qp->sq, seg, *size, cur_edge); @@ -867,7 +902,7 @@ static int set_reg_wr(struct mlx5_ib_qp *qp, u8 flags = 0; /* Matches access in mlx5_set_umr_free_mkey() */ - if (!mlx5_ib_can_reconfig_with_umr(dev, 0, wr->access)) { + if (!mlx5_ib_can_reconfig_with_umr(dev, mr->access_flags, wr->access)) { mlx5_ib_warn( to_mdev(qp->ibqp.device), "Fast update for MR access flags is not possible\n"); @@ -885,12 +920,12 @@ static int set_reg_wr(struct mlx5_ib_qp *qp, if (umr_inline) flags |= MLX5_UMR_INLINE; - set_reg_umr_seg(*seg, mr, flags, atomic); + set_reg_umr_seg(dev, *seg, mr, flags, atomic); *seg += sizeof(struct mlx5_wqe_umr_ctrl_seg); *size += sizeof(struct mlx5_wqe_umr_ctrl_seg) / 16; handle_post_send_edge(&qp->sq, seg, *size, cur_edge); - set_reg_mkey_seg(*seg, mr, wr->key, wr->access); + set_reg_mkey_seg(dev, *seg, mr, wr->key, wr->access); *seg += sizeof(struct mlx5_mkey_seg); *size += sizeof(struct mlx5_mkey_seg) / 16; handle_post_send_edge(&qp->sq, seg, *size, cur_edge); diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c index 58619ce64d0d..419711552825 100644 --- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c +++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c @@ -2904,7 +2904,7 @@ int ocrdma_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags cq_flags) } struct ib_mr *ocrdma_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type, - u32 max_num_sg) + u32 max_num_sg, u32 access) { int status; struct ocrdma_mr *mr; diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h index b1c5fad81603..7644e343fcd6 100644 --- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h +++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h @@ -102,7 +102,7 @@ struct ib_mr *ocrdma_get_dma_mr(struct ib_pd *, int acc); struct ib_mr *ocrdma_reg_user_mr(struct ib_pd *, u64 start, u64 length, u64 virt, int acc, struct ib_udata *); struct ib_mr *ocrdma_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg); + u32 max_num_sg, u32 access); int ocrdma_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, unsigned int *sg_offset); diff 
--git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c index 41e12f011f22..51fa57b97928 100644 --- a/drivers/infiniband/hw/qedr/verbs.c +++ b/drivers/infiniband/hw/qedr/verbs.c @@ -3132,7 +3132,7 @@ static struct qedr_mr *__qedr_alloc_mr(struct ib_pd *ibpd, } struct ib_mr *qedr_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type, - u32 max_num_sg) + u32 max_num_sg, u32 access) { struct qedr_mr *mr; diff --git a/drivers/infiniband/hw/qedr/verbs.h b/drivers/infiniband/hw/qedr/verbs.h index 34ad47515861..8ea872b56ee0 100644 --- a/drivers/infiniband/hw/qedr/verbs.h +++ b/drivers/infiniband/hw/qedr/verbs.h @@ -86,7 +86,7 @@ int qedr_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, unsigned int *sg_offset); struct ib_mr *qedr_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg); + u32 max_num_sg, u32 access); int qedr_poll_cq(struct ib_cq *, int num_entries, struct ib_wc *wc); int qedr_post_send(struct ib_qp *, const struct ib_send_wr *, const struct ib_send_wr **bad_wr); diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c index e80848bfb3bd..b3fa783698a0 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c @@ -198,11 +198,12 @@ struct ib_mr *pvrdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, * @pd: protection domain * @mr_type: type of memory region * @max_num_sg: maximum number of pages + * @access: access flags of memory region * * @return: ib_mr pointer on success, otherwise returns an errno. */ struct ib_mr *pvrdma_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg) + u32 max_num_sg, u32 access) { struct pvrdma_dev *dev = to_vdev(pd->device); struct pvrdma_user_mr *mr; diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h index 544b94d97c3a..079fb4c09979 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h @@ -371,7 +371,7 @@ struct ib_mr *pvrdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, struct ib_udata *udata); int pvrdma_dereg_mr(struct ib_mr *mr, struct ib_udata *udata); struct ib_mr *pvrdma_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg); + u32 max_num_sg, u32 access); int pvrdma_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, unsigned int *sg_offset); int pvrdma_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr, diff --git a/drivers/infiniband/sw/rdmavt/mr.c b/drivers/infiniband/sw/rdmavt/mr.c index 601d18dda1f5..b484a7968681 100644 --- a/drivers/infiniband/sw/rdmavt/mr.c +++ b/drivers/infiniband/sw/rdmavt/mr.c @@ -571,11 +571,12 @@ int rvt_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) * @pd: protection domain for this memory region * @mr_type: mem region type * @max_num_sg: Max number of segments allowed + * @access: access flags of memory region * * Return: the memory region on success, otherwise return an errno. 
*/ struct ib_mr *rvt_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg) + u32 max_num_sg, u32 access) { struct rvt_mr *mr; diff --git a/drivers/infiniband/sw/rdmavt/mr.h b/drivers/infiniband/sw/rdmavt/mr.h index b3aba359401b..0542b2c6dbfc 100644 --- a/drivers/infiniband/sw/rdmavt/mr.h +++ b/drivers/infiniband/sw/rdmavt/mr.h @@ -71,7 +71,7 @@ struct ib_mr *rvt_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, struct ib_udata *udata); int rvt_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata); struct ib_mr *rvt_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg); + u32 max_num_sg, u32 access); int rvt_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, unsigned int *sg_offset); diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index aeb5e232c195..6a23f54c88a6 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -925,7 +925,7 @@ static int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) } static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type, - u32 max_num_sg) + u32 max_num_sg, u32 access) { struct rxe_dev *rxe = to_rdev(ibpd->device); struct rxe_pd *pd = to_rpd(ibpd); diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c index d2313efb26db..a9ea13cd67bd 100644 --- a/drivers/infiniband/sw/siw/siw_verbs.c +++ b/drivers/infiniband/sw/siw/siw_verbs.c @@ -1383,7 +1383,7 @@ struct ib_mr *siw_reg_user_mr(struct ib_pd *pd, u64 start, u64 len, } struct ib_mr *siw_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_sge) + u32 max_sge, u32 access) { struct siw_device *sdev = to_siw_dev(pd->device); struct siw_mr *mr = NULL; diff --git a/drivers/infiniband/sw/siw/siw_verbs.h b/drivers/infiniband/sw/siw/siw_verbs.h index 67ac08886a70..817f72cd242a 100644 --- a/drivers/infiniband/sw/siw/siw_verbs.h +++ b/drivers/infiniband/sw/siw/siw_verbs.h @@ -68,7 +68,7 @@ int siw_req_notify_cq(struct ib_cq *base_cq, enum ib_cq_notify_flags flags); struct ib_mr *siw_reg_user_mr(struct ib_pd *base_pd, u64 start, u64 len, u64 rnic_va, int rights, struct ib_udata *udata); struct ib_mr *siw_alloc_mr(struct ib_pd *base_pd, enum ib_mr_type mr_type, - u32 max_sge); + u32 max_sge, u32 access); struct ib_mr *siw_get_dma_mr(struct ib_pd *base_pd, int rights); int siw_map_mr_sg(struct ib_mr *base_mr, struct scatterlist *sl, int num_sle, unsigned int *sg_off); diff --git a/drivers/infiniband/ulp/iser/iser_verbs.c b/drivers/infiniband/ulp/iser/iser_verbs.c index 136f6c4492e0..3c370ee25f2f 100644 --- a/drivers/infiniband/ulp/iser/iser_verbs.c +++ b/drivers/infiniband/ulp/iser/iser_verbs.c @@ -121,7 +121,7 @@ iser_create_fastreg_desc(struct iser_device *device, else mr_type = IB_MR_TYPE_MEM_REG; - desc->rsc.mr = ib_alloc_mr(pd, mr_type, size); + desc->rsc.mr = ib_alloc_mr(pd, mr_type, size, 0); if (IS_ERR(desc->rsc.mr)) { ret = PTR_ERR(desc->rsc.mr); iser_err("Failed to allocate ib_fast_reg_mr err=%d\n", ret); @@ -129,7 +129,7 @@ iser_create_fastreg_desc(struct iser_device *device, } if (pi_enable) { - desc->rsc.sig_mr = ib_alloc_mr_integrity(pd, size, size); + desc->rsc.sig_mr = ib_alloc_mr_integrity(pd, size, size, 0); if (IS_ERR(desc->rsc.sig_mr)) { ret = PTR_ERR(desc->rsc.sig_mr); iser_err("Failed to allocate sig_mr err=%d\n", ret); diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c index 64990df81937..0d3960ed5b2b 100644 --- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c 
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c @@ -1260,7 +1260,7 @@ static int alloc_sess_reqs(struct rtrs_clt_sess *sess) goto out; req->mr = ib_alloc_mr(sess->s.dev->ib_pd, IB_MR_TYPE_MEM_REG, - sess->max_pages_per_mr); + sess->max_pages_per_mr, 0); if (IS_ERR(req->mr)) { err = PTR_ERR(req->mr); req->mr = NULL; diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c index 5e9bb7bf5ef3..575f31ff20fd 100644 --- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c +++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c @@ -638,7 +638,7 @@ static int map_cont_bufs(struct rtrs_srv_sess *sess) goto free_sg; } mr = ib_alloc_mr(sess->s.dev->ib_pd, IB_MR_TYPE_MEM_REG, - sgt->nents); + sgt->nents, 0); if (IS_ERR(mr)) { err = PTR_ERR(mr); goto unmap_sg; diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c index 31f8aa2c40ed..8481ad769ba4 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.c +++ b/drivers/infiniband/ulp/srp/ib_srp.c @@ -436,7 +436,7 @@ static struct srp_fr_pool *srp_create_fr_pool(struct ib_device *device, mr_type = IB_MR_TYPE_MEM_REG; for (i = 0, d = &pool->desc[0]; i < pool->size; i++, d++) { - mr = ib_alloc_mr(pd, mr_type, max_page_list_len); + mr = ib_alloc_mr(pd, mr_type, max_page_list_len, 0); if (IS_ERR(mr)) { ret = PTR_ERR(mr); if (ret == -ENOMEM) diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c index 53ac4d7442ba..4dbc17311e0b 100644 --- a/drivers/nvme/host/rdma.c +++ b/drivers/nvme/host/rdma.c @@ -534,7 +534,7 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue) ret = ib_mr_pool_init(queue->qp, &queue->qp->rdma_mrs, queue->queue_size, IB_MR_TYPE_MEM_REG, - pages_per_mr, 0); + pages_per_mr, 0, 0); if (ret) { dev_err(queue->ctrl->ctrl.device, "failed to initialize MR pool sized %d for QID %d\n", @@ -545,7 +545,7 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue) if (queue->pi_support) { ret = ib_mr_pool_init(queue->qp, &queue->qp->sig_mrs, queue->queue_size, IB_MR_TYPE_INTEGRITY, - pages_per_mr, pages_per_mr); + pages_per_mr, pages_per_mr, 0); if (ret) { dev_err(queue->ctrl->ctrl.device, "failed to initialize PI MR pool sized %d for QID %d\n", diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c index 10dfe5006792..647098a5cf3b 100644 --- a/fs/cifs/smbdirect.c +++ b/fs/cifs/smbdirect.c @@ -2178,9 +2178,8 @@ static void smbd_mr_recovery_work(struct work_struct *work) continue; } - smbdirect_mr->mr = ib_alloc_mr( - info->pd, info->mr_type, - info->max_frmr_depth); + smbdirect_mr->mr = ib_alloc_mr(info->pd, info->mr_type, + info->max_frmr_depth, 0); if (IS_ERR(smbdirect_mr->mr)) { log_rdma_mr(ERR, "ib_alloc_mr failed mr_type=%x max_frmr_depth=%x\n", info->mr_type, @@ -2245,7 +2244,7 @@ static int allocate_mr_list(struct smbd_connection *info) if (!smbdirect_mr) goto out; smbdirect_mr->mr = ib_alloc_mr(info->pd, info->mr_type, - info->max_frmr_depth); + info->max_frmr_depth, 0); if (IS_ERR(smbdirect_mr->mr)) { log_rdma_mr(ERR, "ib_alloc_mr failed mr_type=%x max_frmr_depth=%x\n", info->mr_type, info->max_frmr_depth); diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index bed4cfe50554..59138174affa 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -2444,10 +2444,10 @@ struct ib_device_ops { struct ib_udata *udata); int (*dereg_mr)(struct ib_mr *mr, struct ib_udata *udata); struct ib_mr *(*alloc_mr)(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg); + u32 max_num_sg, u32 access); struct ib_mr *(*alloc_mr_integrity)(struct ib_pd *pd, u32 
max_num_data_sg, - u32 max_num_meta_sg); + u32 max_num_meta_sg, u32 access); int (*advise_mr)(struct ib_pd *pd, enum ib_uverbs_advise_mr_advice advice, u32 flags, struct ib_sge *sg_list, u32 num_sge, @@ -4142,11 +4142,10 @@ static inline int ib_dereg_mr(struct ib_mr *mr) } struct ib_mr *ib_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, - u32 max_num_sg); + u32 max_num_sg, u32 access); -struct ib_mr *ib_alloc_mr_integrity(struct ib_pd *pd, - u32 max_num_data_sg, - u32 max_num_meta_sg); +struct ib_mr *ib_alloc_mr_integrity(struct ib_pd *pd, u32 max_num_data_sg, + u32 max_num_meta_sg, u32 access); /** * ib_update_fast_reg_key - updates the key portion of the fast_reg MR diff --git a/include/rdma/mr_pool.h b/include/rdma/mr_pool.h index e77123bcb43b..2a0ee791037d 100644 --- a/include/rdma/mr_pool.h +++ b/include/rdma/mr_pool.h @@ -11,7 +11,8 @@ struct ib_mr *ib_mr_pool_get(struct ib_qp *qp, struct list_head *list); void ib_mr_pool_put(struct ib_qp *qp, struct list_head *list, struct ib_mr *mr); int ib_mr_pool_init(struct ib_qp *qp, struct list_head *list, int nr, - enum ib_mr_type type, u32 max_num_sg, u32 max_num_meta_sg); + enum ib_mr_type type, u32 max_num_sg, u32 max_num_meta_sg, + u32 access); void ib_mr_pool_destroy(struct ib_qp *qp, struct list_head *list); #endif /* _RDMA_MR_POOL_H */ diff --git a/net/rds/ib_frmr.c b/net/rds/ib_frmr.c index 9b6ffff72f2d..694eb916319e 100644 --- a/net/rds/ib_frmr.c +++ b/net/rds/ib_frmr.c @@ -76,7 +76,7 @@ static struct rds_ib_mr *rds_ib_alloc_frmr(struct rds_ib_device *rds_ibdev, frmr = &ibmr->u.frmr; frmr->mr = ib_alloc_mr(rds_ibdev->pd, IB_MR_TYPE_MEM_REG, - pool->max_pages); + pool->max_pages, 0); if (IS_ERR(frmr->mr)) { pr_warn("RDS/IB: %s failed to allocate MR", __func__); err = PTR_ERR(frmr->mr); diff --git a/net/smc/smc_ib.c b/net/smc/smc_ib.c index 7d7ba0320d5a..4e91ed3dc265 100644 --- a/net/smc/smc_ib.c +++ b/net/smc/smc_ib.c @@ -579,7 +579,7 @@ int smc_ib_get_memory_region(struct ib_pd *pd, int access_flags, return 0; /* already done */ buf_slot->mr_rx[link_idx] = - ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG, 1 << buf_slot->order); + ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG, 1 << buf_slot->order, 0); if (IS_ERR(buf_slot->mr_rx[link_idx])) { int rc; diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c index 766a1048a48a..cfbdd197cdfe 100644 --- a/net/sunrpc/xprtrdma/frwr_ops.c +++ b/net/sunrpc/xprtrdma/frwr_ops.c @@ -135,7 +135,7 @@ int frwr_mr_init(struct rpcrdma_xprt *r_xprt, struct rpcrdma_mr *mr) struct ib_mr *frmr; int rc; - frmr = ib_alloc_mr(ep->re_pd, ep->re_mrtype, depth); + frmr = ib_alloc_mr(ep->re_pd, ep->re_mrtype, depth, 0); if (IS_ERR(frmr)) goto out_mr_err; From patchwork Mon Apr 5 05:23:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 12182617 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4F0A6C433ED for ; Mon, 5 Apr 2021 05:24:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Subject: [PATCH rdma-next 02/10] RDMA/core: Enable Relaxed Ordering in __ib_alloc_pd()
Date: Mon, 5 Apr 2021 08:23:56 +0300
Message-Id: <20210405052404.213889-3-leon@kernel.org>
In-Reply-To: <20210405052404.213889-1-leon@kernel.org>

From: Avihai Horon

Enable Relaxed Ordering in __ib_alloc_pd() allocation of the local_dma_lkey. This will take effect only for devices that don't pre-allocate the lkey but allocate it per PD allocation.
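Schematically, the affected path is the one sketched below (simplified from the __ib_alloc_pd() hunk in this patch; only the relevant lines are shown, the rest of the function is omitted):

    if (device->attrs.device_cap_flags & IB_DEVICE_LOCAL_DMA_LKEY) {
            /* Device pre-allocates a global lkey; nothing changes here. */
            pd->local_dma_lkey = device->local_dma_lkey;
    } else {
            /* The core registers its own DMA MR for this PD and can now
             * request Relaxed Ordering on it as well; providers that do
             * not support the flag simply ignore or clear it. */
            mr_access_flags |= IB_ACCESS_LOCAL_WRITE |
                               IB_ACCESS_RELAXED_ORDERING;
    }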
Signed-off-by: Avihai Horon Reviewed-by: Michael Guralnik Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/verbs.c | 3 ++- drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c | 1 + 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index a1782f8a6ca0..9b719f7d6fd5 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -287,7 +287,8 @@ struct ib_pd *__ib_alloc_pd(struct ib_device *device, unsigned int flags, if (device->attrs.device_cap_flags & IB_DEVICE_LOCAL_DMA_LKEY) pd->local_dma_lkey = device->local_dma_lkey; else - mr_access_flags |= IB_ACCESS_LOCAL_WRITE; + mr_access_flags |= + IB_ACCESS_LOCAL_WRITE | IB_ACCESS_RELAXED_ORDERING; if (flags & IB_PD_UNSAFE_GLOBAL_RKEY) { pr_warn("%s: enabling unsafe global rkey\n", caller); diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c index b3fa783698a0..d74827694f92 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c @@ -66,6 +66,7 @@ struct ib_mr *pvrdma_get_dma_mr(struct ib_pd *pd, int acc) int ret; /* Support only LOCAL_WRITE flag for DMA MRs */ + acc &= ~IB_ACCESS_RELAXED_ORDERING; if (acc & ~IB_ACCESS_LOCAL_WRITE) { dev_warn(&dev->pdev->dev, "unsupported dma mr access flags %#x\n", acc); From patchwork Mon Apr 5 05:23:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 12182613 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4C248C43603 for ; Mon, 5 Apr 2021 05:24:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0D0DD6139D for ; Mon, 5 Apr 2021 05:24:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232124AbhDEFY0 (ORCPT ); Mon, 5 Apr 2021 01:24:26 -0400 Received: from mail.kernel.org ([198.145.29.99]:56772 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232098AbhDEFYZ (ORCPT ); Mon, 5 Apr 2021 01:24:25 -0400 Received: by mail.kernel.org (Postfix) with ESMTPSA id 892346139E; Mon, 5 Apr 2021 05:24:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1617600259; bh=nRfDFwKnXvGxEINJ1gcUC5CV0hDDdn2QGMGYyGbA7kU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=RHoI+pUcHzU20igTKYZrklmw7W/BSsCop538orqhMBlWsUMJ25vZUuom6KI8GJ6xJ HaROOmDmBRB/yGlqL4AtjyV/K9d0f4v9hRClzfsYeretoARDsMPdSctmtreSR/Ht6H WsuQfYs5lXbXpYinCPOm6HuwsFhisYx9BDQrlcETN+mrI4GhkEA6+wOgjRdIQp5X8s MkOBzOg7YwifLq7AcIG4RMeq9ySjzFiBaMHNvR5zm5Xwhn6DtVXYST+6bXKykJbjXZ kH30bar0LGJwgxKwOoqtsOY+jzXkUPejDXliyyPKH5uOYsOsInJ5kpgXj2s9xBpBbs +swSb/FSAvQ+A== From: Leon Romanovsky To: Doug Ledford , Jason Gunthorpe Cc: Avihai Horon , Adit Ranadive , Anna Schumaker , Ariel Elior , Bart Van Assche , Bernard Metzler , Christoph Hellwig , Chuck Lever , "David S. 
Miller" , Dennis Dalessandro , Devesh Sharma , Faisal Latif , Jack Wang , Jakub Kicinski , "J. Bruce Fields" , Jens Axboe , Karsten Graul , Keith Busch , Lijun Ou , linux-cifs@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-s390@vger.kernel.org, Max Gurtovoy , Max Gurtovoy , "Md. Haris Iqbal" , Michael Guralnik , Michal Kalderon , Mike Marciniszyn , Naresh Kumar PBS , netdev@vger.kernel.org, Potnuri Bharat Teja , rds-devel@oss.oracle.com, Sagi Grimberg , samba-technical@lists.samba.org, Santosh Shilimkar , Selvin Xavier , Shiraz Saleem , Somnath Kotur , Sriharsha Basavapatna , Steve French , Trond Myklebust , VMware PV-Drivers , Weihang Li , Yishai Hadas , Zhu Yanjun Subject: [PATCH rdma-next 03/10] RDMA/iser: Enable Relaxed Ordering Date: Mon, 5 Apr 2021 08:23:57 +0300 Message-Id: <20210405052404.213889-4-leon@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210405052404.213889-1-leon@kernel.org> References: <20210405052404.213889-1-leon@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org From: Avihai Horon Enable Relaxed Ordering for iser. Relaxed Ordering is an optional access flag and as such, it is ignored by vendors that don't support it. Signed-off-by: Avihai Horon Reviewed-by: Max Gurtovoy Reviewed-by: Michael Guralnik Signed-off-by: Leon Romanovsky --- drivers/infiniband/ulp/iser/iser_memory.c | 10 ++++------ drivers/infiniband/ulp/iser/iser_verbs.c | 6 ++++-- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c index afec40da9b58..29849c9e82e8 100644 --- a/drivers/infiniband/ulp/iser/iser_memory.c +++ b/drivers/infiniband/ulp/iser/iser_memory.c @@ -271,9 +271,8 @@ iser_reg_sig_mr(struct iscsi_iser_task *iser_task, wr->wr.send_flags = 0; wr->mr = mr; wr->key = mr->rkey; - wr->access = IB_ACCESS_LOCAL_WRITE | - IB_ACCESS_REMOTE_READ | - IB_ACCESS_REMOTE_WRITE; + wr->access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_READ | + IB_ACCESS_REMOTE_WRITE | IB_ACCESS_RELAXED_ORDERING; rsc->mr_valid = 1; sig_reg->sge.lkey = mr->lkey; @@ -318,9 +317,8 @@ static int iser_fast_reg_mr(struct iscsi_iser_task *iser_task, wr->wr.num_sge = 0; wr->mr = mr; wr->key = mr->rkey; - wr->access = IB_ACCESS_LOCAL_WRITE | - IB_ACCESS_REMOTE_WRITE | - IB_ACCESS_REMOTE_READ; + wr->access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE | + IB_ACCESS_REMOTE_READ | IB_ACCESS_RELAXED_ORDERING; rsc->mr_valid = 1; diff --git a/drivers/infiniband/ulp/iser/iser_verbs.c b/drivers/infiniband/ulp/iser/iser_verbs.c index 3c370ee25f2f..1e236b1cf29b 100644 --- a/drivers/infiniband/ulp/iser/iser_verbs.c +++ b/drivers/infiniband/ulp/iser/iser_verbs.c @@ -121,7 +121,8 @@ iser_create_fastreg_desc(struct iser_device *device, else mr_type = IB_MR_TYPE_MEM_REG; - desc->rsc.mr = ib_alloc_mr(pd, mr_type, size, 0); + desc->rsc.mr = ib_alloc_mr(pd, mr_type, size, + IB_ACCESS_RELAXED_ORDERING); if (IS_ERR(desc->rsc.mr)) { ret = PTR_ERR(desc->rsc.mr); iser_err("Failed to allocate ib_fast_reg_mr err=%d\n", ret); @@ -129,7 +130,8 @@ iser_create_fastreg_desc(struct iser_device *device, } if (pi_enable) { - desc->rsc.sig_mr = ib_alloc_mr_integrity(pd, size, size, 0); + desc->rsc.sig_mr = ib_alloc_mr_integrity(pd, size, size, + IB_ACCESS_RELAXED_ORDERING); if (IS_ERR(desc->rsc.sig_mr)) { ret = PTR_ERR(desc->rsc.sig_mr); iser_err("Failed to allocate sig_mr err=%d\n", ret); From patchwork Mon Apr 5 
05:23:58 2021
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Subject: [PATCH rdma-next 04/10] RDMA/rtrs: Enable Relaxed Ordering
Date: Mon, 5 Apr 2021 08:23:58 +0300
Message-Id: <20210405052404.213889-5-leon@kernel.org>
In-Reply-To: <20210405052404.213889-1-leon@kernel.org>

From: Avihai Horon

Enable Relaxed Ordering for rtrs client and server. Relaxed Ordering is an optional access flag and as such, it is ignored by vendors that don't support it.
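The change follows one pattern on both sides, mirroring the hunks below: every MR allocation and memory-registration work request simply ORs in the new flag. A condensed sketch of that pattern (not a verbatim hunk):

    struct ib_reg_wr rwr = {
            .mr     = req->mr,
            .key    = req->mr->rkey,
            .access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE |
                      IB_ACCESS_RELAXED_ORDERING,
    };

    req->mr = ib_alloc_mr(sess->s.dev->ib_pd, IB_MR_TYPE_MEM_REG,
                          sess->max_pages_per_mr,
                          IB_ACCESS_RELAXED_ORDERING);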
Signed-off-by: Avihai Horon Reviewed-by: Michael Guralnik Signed-off-by: Leon Romanovsky --- drivers/infiniband/ulp/rtrs/rtrs-clt.c | 6 ++++-- drivers/infiniband/ulp/rtrs/rtrs-srv.c | 15 ++++++++------- 2 files changed, 12 insertions(+), 9 deletions(-) diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c index 0d3960ed5b2b..a3fbb47a3574 100644 --- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c +++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c @@ -1099,7 +1099,8 @@ static int rtrs_clt_read_req(struct rtrs_clt_io_req *req) .mr = req->mr, .key = req->mr->rkey, .access = (IB_ACCESS_LOCAL_WRITE | - IB_ACCESS_REMOTE_WRITE), + IB_ACCESS_REMOTE_WRITE | + IB_ACCESS_RELAXED_ORDERING), }; wr = &rwr.wr; @@ -1260,7 +1261,8 @@ static int alloc_sess_reqs(struct rtrs_clt_sess *sess) goto out; req->mr = ib_alloc_mr(sess->s.dev->ib_pd, IB_MR_TYPE_MEM_REG, - sess->max_pages_per_mr, 0); + sess->max_pages_per_mr, + IB_ACCESS_RELAXED_ORDERING); if (IS_ERR(req->mr)) { err = PTR_ERR(req->mr); req->mr = NULL; diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c index 575f31ff20fd..c28ed5e2245d 100644 --- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c +++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c @@ -312,8 +312,8 @@ static int rdma_write_sg(struct rtrs_srv_op *id) rwr.mr = srv_mr->mr; rwr.wr.send_flags = 0; rwr.key = srv_mr->mr->rkey; - rwr.access = (IB_ACCESS_LOCAL_WRITE | - IB_ACCESS_REMOTE_WRITE); + rwr.access = (IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE | + IB_ACCESS_RELAXED_ORDERING); msg = srv_mr->iu->buf; msg->buf_id = cpu_to_le16(id->msg_id); msg->type = cpu_to_le16(RTRS_MSG_RKEY_RSP); @@ -432,8 +432,8 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id, rwr.wr.send_flags = 0; rwr.mr = srv_mr->mr; rwr.key = srv_mr->mr->rkey; - rwr.access = (IB_ACCESS_LOCAL_WRITE | - IB_ACCESS_REMOTE_WRITE); + rwr.access = (IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE | + IB_ACCESS_RELAXED_ORDERING); msg = srv_mr->iu->buf; msg->buf_id = cpu_to_le16(id->msg_id); msg->type = cpu_to_le16(RTRS_MSG_RKEY_RSP); @@ -638,7 +638,7 @@ static int map_cont_bufs(struct rtrs_srv_sess *sess) goto free_sg; } mr = ib_alloc_mr(sess->s.dev->ib_pd, IB_MR_TYPE_MEM_REG, - sgt->nents, 0); + sgt->nents, IB_ACCESS_RELAXED_ORDERING); if (IS_ERR(mr)) { err = PTR_ERR(mr); goto unmap_sg; @@ -823,8 +823,9 @@ static int process_info_req(struct rtrs_srv_con *con, rwr[mri].wr.send_flags = 0; rwr[mri].mr = mr; rwr[mri].key = mr->rkey; - rwr[mri].access = (IB_ACCESS_LOCAL_WRITE | - IB_ACCESS_REMOTE_WRITE); + rwr[mri].access = + (IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE | + IB_ACCESS_RELAXED_ORDERING); reg_wr = &rwr[mri].wr; } From patchwork Mon Apr 5 05:23:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 12182619 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2F6CBC43603 for ; Mon, 5 Apr 2021 05:24:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by 
mail.kernel.org (Postfix) with ESMTP id F21E6613B1 for ; Mon, 5 Apr 2021 05:24:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232186AbhDEFYf (ORCPT ); Mon, 5 Apr 2021 01:24:35 -0400 Received: from mail.kernel.org ([198.145.29.99]:57056 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232148AbhDEFYc (ORCPT ); Mon, 5 Apr 2021 01:24:32 -0400 Received: by mail.kernel.org (Postfix) with ESMTPSA id 30A2E613A4; Mon, 5 Apr 2021 05:24:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1617600266; bh=imvbyn97thI9yWumE7pEl40IQe358kCagE2ZGNMZU9U=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=HZEzb+U8Ry5BOtN1qNr3CelJoI40xG7feEOzNwXvuXUEOQO3t+IeULIAKlg0xq4Ps J5AcD6+++TL27oFLWK80YU2XPT8zCB6J+XRulPpz9MTDdcSIGNm6PGnK3hChgNsqVr 8CDSLtsMRaDgk291G4zGT7zeU9k3R5bQ4k/K1kx+Z1h6TXYqxyqz4+DzkAdIiK+l+r SMXkJnWjg3vbf+UY4oiB57JeZa/Xg2MeojDuvo1VBrKI3DFPfAKqJBIeAyiLOZFCDP dCJYSeTmHUXmoR1Ihh5FvuyelpHmTPzXJhngHA6OGcoDWelbwKJWvTYXfx6QoH0SMl AxUWR/QpJ9f5Q== From: Leon Romanovsky To: Doug Ledford , Jason Gunthorpe Cc: Avihai Horon , Adit Ranadive , Anna Schumaker , Ariel Elior , Bart Van Assche , Bernard Metzler , Christoph Hellwig , Chuck Lever , "David S. Miller" , Dennis Dalessandro , Devesh Sharma , Faisal Latif , Jack Wang , Jakub Kicinski , "J. Bruce Fields" , Jens Axboe , Karsten Graul , Keith Busch , Lijun Ou , linux-cifs@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-s390@vger.kernel.org, Max Gurtovoy , Max Gurtovoy , "Md. Haris Iqbal" , Michael Guralnik , Michal Kalderon , Mike Marciniszyn , Naresh Kumar PBS , netdev@vger.kernel.org, Potnuri Bharat Teja , rds-devel@oss.oracle.com, Sagi Grimberg , samba-technical@lists.samba.org, Santosh Shilimkar , Selvin Xavier , Shiraz Saleem , Somnath Kotur , Sriharsha Basavapatna , Steve French , Trond Myklebust , VMware PV-Drivers , Weihang Li , Yishai Hadas , Zhu Yanjun Subject: [PATCH rdma-next 05/10] RDMA/srp: Enable Relaxed Ordering Date: Mon, 5 Apr 2021 08:23:59 +0300 Message-Id: <20210405052404.213889-6-leon@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210405052404.213889-1-leon@kernel.org> References: <20210405052404.213889-1-leon@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org From: Avihai Horon Enable Relaxed Ordering for srp. Relaxed Ordering is an optional access flag and as such, it is ignored by vendors that don't support it. 
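srp allocates its fast-registration MRs up front in srp_create_fr_pool(), so the flag is simply passed for every descriptor in the pool. A rough sketch of such a loop, with hypothetical names and simplified error unwinding (again assuming the extended ib_alloc_mr() from patch 01):

    #include <linux/err.h>
    #include <rdma/ib_verbs.h>

    /* Sketch only: allocate "nr" fast-registration MRs with Relaxed
     * Ordering requested; free what was already allocated on failure. */
    static int ulp_alloc_fr_pool(struct ib_pd *pd, struct ib_mr **mrs,
                                 int nr, u32 max_page_list_len)
    {
            int i;

            for (i = 0; i < nr; i++) {
                    mrs[i] = ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG,
                                         max_page_list_len,
                                         IB_ACCESS_RELAXED_ORDERING);
                    if (IS_ERR(mrs[i])) {
                            int ret = PTR_ERR(mrs[i]);

                            while (--i >= 0)
                                    ib_dereg_mr(mrs[i]);
                            return ret;
                    }
            }
            return 0;
    }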
Signed-off-by: Avihai Horon Reviewed-by: Max Gurtovoy Reviewed-by: Michael Guralnik Signed-off-by: Leon Romanovsky --- drivers/infiniband/ulp/srp/ib_srp.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c index 8481ad769ba4..0026660c3ead 100644 --- a/drivers/infiniband/ulp/srp/ib_srp.c +++ b/drivers/infiniband/ulp/srp/ib_srp.c @@ -436,7 +436,8 @@ static struct srp_fr_pool *srp_create_fr_pool(struct ib_device *device, mr_type = IB_MR_TYPE_MEM_REG; for (i = 0, d = &pool->desc[0]; i < pool->size; i++, d++) { - mr = ib_alloc_mr(pd, mr_type, max_page_list_len, 0); + mr = ib_alloc_mr(pd, mr_type, max_page_list_len, + IB_ACCESS_RELAXED_ORDERING); if (IS_ERR(mr)) { ret = PTR_ERR(mr); if (ret == -ENOMEM) @@ -1487,9 +1488,8 @@ static int srp_map_finish_fr(struct srp_map_state *state, wr.wr.send_flags = 0; wr.mr = desc->mr; wr.key = desc->mr->rkey; - wr.access = (IB_ACCESS_LOCAL_WRITE | - IB_ACCESS_REMOTE_READ | - IB_ACCESS_REMOTE_WRITE); + wr.access = (IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_READ | + IB_ACCESS_REMOTE_WRITE | IB_ACCESS_RELAXED_ORDERING); *state->fr.next++ = desc; state->nmdesc++; From patchwork Mon Apr 5 05:24:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 12182625 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5C6E3C43460 for ; Mon, 5 Apr 2021 05:24:48 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3120C613A8 for ; Mon, 5 Apr 2021 05:24:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232274AbhDEFYv (ORCPT ); Mon, 5 Apr 2021 01:24:51 -0400 Received: from mail.kernel.org ([198.145.29.99]:57662 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232269AbhDEFYp (ORCPT ); Mon, 5 Apr 2021 01:24:45 -0400 Received: by mail.kernel.org (Postfix) with ESMTPSA id E7DCD613AE; Mon, 5 Apr 2021 05:24:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1617600279; bh=kgFeNEGqbN9WM+uS4TOUKfA2KjjvWH4Sn9QKyrs2Esc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=H/LrR2eygKEWxFLckWrWCNTnD6kKtvMcLmftsZoTnWCqk/rgGjPILP1O9Sb/m2viK unVBfkwguZGE9J6jSmdfrppM0HIlMce2I1CMFt3b8rq3SQaGOSdUrmAdgRZOndp/VF zIYWDg6Ge7lvfxKr803oVG0AT8FhZZ2x5c5l0N3hBRcjqBiaRPlr+4xs3g3l3XdBJF XvOyzE85/pRqLznPmdWpEyLjjT8s0HbLGGqxpaxyrGJxkVqP+3zTrUzHF9bbDXflGB pGiGbDaDmxBdvgIZL3qNem0va+qf59TNStPrQAG9mmWUREFLPlkFkrT/UVlRflsgKH ir0e3t698GSvQ== From: Leon Romanovsky To: Doug Ledford , Jason Gunthorpe Cc: Avihai Horon , Adit Ranadive , Anna Schumaker , Ariel Elior , Bart Van Assche , Bernard Metzler , Christoph Hellwig , Chuck Lever , "David S. Miller" , Dennis Dalessandro , Devesh Sharma , Faisal Latif , Jack Wang , Jakub Kicinski , "J. 
Bruce Fields" , Jens Axboe , Karsten Graul , Keith Busch , Lijun Ou , linux-cifs@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-s390@vger.kernel.org, Max Gurtovoy , Max Gurtovoy , "Md. Haris Iqbal" , Michael Guralnik , Michal Kalderon , Mike Marciniszyn , Naresh Kumar PBS , netdev@vger.kernel.org, Potnuri Bharat Teja , rds-devel@oss.oracle.com, Sagi Grimberg , samba-technical@lists.samba.org, Santosh Shilimkar , Selvin Xavier , Shiraz Saleem , Somnath Kotur , Sriharsha Basavapatna , Steve French , Trond Myklebust , VMware PV-Drivers , Weihang Li , Yishai Hadas , Zhu Yanjun Subject: [PATCH rdma-next 06/10] nvme-rdma: Enable Relaxed Ordering Date: Mon, 5 Apr 2021 08:24:00 +0300 Message-Id: <20210405052404.213889-7-leon@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210405052404.213889-1-leon@kernel.org> References: <20210405052404.213889-1-leon@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org From: Avihai Horon Enable Relaxed Ordering for nvme. Relaxed Ordering is an optional access flag and as such, it is ignored by vendors that don't support it. Signed-off-by: Avihai Horon Reviewed-by: Max Gurtovoy Reviewed-by: Michael Guralnik Signed-off-by: Leon Romanovsky --- drivers/nvme/host/rdma.c | 19 +++++++++---------- 1 file changed, 9 insertions(+), 10 deletions(-) diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c index 4dbc17311e0b..8f106b20b39c 100644 --- a/drivers/nvme/host/rdma.c +++ b/drivers/nvme/host/rdma.c @@ -532,9 +532,8 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue) */ pages_per_mr = nvme_rdma_get_max_fr_pages(ibdev, queue->pi_support) + 1; ret = ib_mr_pool_init(queue->qp, &queue->qp->rdma_mrs, - queue->queue_size, - IB_MR_TYPE_MEM_REG, - pages_per_mr, 0, 0); + queue->queue_size, IB_MR_TYPE_MEM_REG, + pages_per_mr, 0, IB_ACCESS_RELAXED_ORDERING); if (ret) { dev_err(queue->ctrl->ctrl.device, "failed to initialize MR pool sized %d for QID %d\n", @@ -545,7 +544,8 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue) if (queue->pi_support) { ret = ib_mr_pool_init(queue->qp, &queue->qp->sig_mrs, queue->queue_size, IB_MR_TYPE_INTEGRITY, - pages_per_mr, pages_per_mr, 0); + pages_per_mr, pages_per_mr, + IB_ACCESS_RELAXED_ORDERING); if (ret) { dev_err(queue->ctrl->ctrl.device, "failed to initialize PI MR pool sized %d for QID %d\n", @@ -1382,9 +1382,9 @@ static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue, req->reg_wr.wr.num_sge = 0; req->reg_wr.mr = req->mr; req->reg_wr.key = req->mr->rkey; - req->reg_wr.access = IB_ACCESS_LOCAL_WRITE | - IB_ACCESS_REMOTE_READ | - IB_ACCESS_REMOTE_WRITE; + req->reg_wr.access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_READ | + IB_ACCESS_REMOTE_WRITE | + IB_ACCESS_RELAXED_ORDERING; sg->addr = cpu_to_le64(req->mr->iova); put_unaligned_le24(req->mr->length, sg->length); @@ -1488,9 +1488,8 @@ static int nvme_rdma_map_sg_pi(struct nvme_rdma_queue *queue, wr->wr.send_flags = 0; wr->mr = req->mr; wr->key = req->mr->rkey; - wr->access = IB_ACCESS_LOCAL_WRITE | - IB_ACCESS_REMOTE_READ | - IB_ACCESS_REMOTE_WRITE; + wr->access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_READ | + IB_ACCESS_REMOTE_WRITE | IB_ACCESS_RELAXED_ORDERING; sg->addr = cpu_to_le64(req->mr->iova); put_unaligned_le24(req->mr->length, sg->length); From patchwork Mon Apr 5 05:24:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 12182621 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A3BD0C43616 for ; Mon, 5 Apr 2021 05:24:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 85D6F613AE for ; Mon, 5 Apr 2021 05:24:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232191AbhDEFYm (ORCPT ); Mon, 5 Apr 2021 01:24:42 -0400 Received: from mail.kernel.org ([198.145.29.99]:57314 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232218AbhDEFYj (ORCPT ); Mon, 5 Apr 2021 01:24:39 -0400 Received: by mail.kernel.org (Postfix) with ESMTPSA id D3E3C6139F; Mon, 5 Apr 2021 05:24:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1617600272; bh=H8VyokLLj861miN6eL/BS/iydH8X5tcb3ZsH8b+Rr3E=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Pc2AL8IA2ENJh5d/e1ATkUHiXYZJp3c0RoiDrukP7UR18XgRnLUPOt455aETZDgD+ 9tN6dCnr71sF7xwuEtUhcHtpYZv9kBVm2hXs5G4KU8C43sJeLHi9n4p2Yca+o1XWUm 4x87/gYWytjWu5929v6HulHxqQmQQA+kzbo5WGRzXAT/mfUU28J4IoUVN36S0olB1C SSiN6RrpurpK+3g49qd79KBJ73uiaGztbXchONgqfNrwuB6SP1tsCLgjB4KFzTjLHK 0DkCmaVePHL7WmR8f37thwoi/35+N1BSlEBQDZAZbwzNsjvUBXpKze2FWem04y6aEO Jk8f5lhdh17MQ== From: Leon Romanovsky To: Doug Ledford , Jason Gunthorpe Cc: Avihai Horon , Adit Ranadive , Anna Schumaker , Ariel Elior , Bart Van Assche , Bernard Metzler , Christoph Hellwig , Chuck Lever , "David S. Miller" , Dennis Dalessandro , Devesh Sharma , Faisal Latif , Jack Wang , Jakub Kicinski , "J. Bruce Fields" , Jens Axboe , Karsten Graul , Keith Busch , Lijun Ou , linux-cifs@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-s390@vger.kernel.org, Max Gurtovoy , Max Gurtovoy , "Md. Haris Iqbal" , Michael Guralnik , Michal Kalderon , Mike Marciniszyn , Naresh Kumar PBS , netdev@vger.kernel.org, Potnuri Bharat Teja , rds-devel@oss.oracle.com, Sagi Grimberg , samba-technical@lists.samba.org, Santosh Shilimkar , Selvin Xavier , Shiraz Saleem , Somnath Kotur , Sriharsha Basavapatna , Steve French , Trond Myklebust , VMware PV-Drivers , Weihang Li , Yishai Hadas , Zhu Yanjun Subject: [PATCH rdma-next 07/10] cifs: smbd: Enable Relaxed Ordering Date: Mon, 5 Apr 2021 08:24:01 +0300 Message-Id: <20210405052404.213889-8-leon@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210405052404.213889-1-leon@kernel.org> References: <20210405052404.213889-1-leon@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org From: Avihai Horon Enable Relaxed Ordering for smbd. Relaxed Ordering is an optional access flag and as such, it is ignored by vendors that don't support it. 
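One detail in the smbd hunk below is worth calling out: in C, bitwise OR binds more tightly than the conditional operator, so the ternary has to be parenthesized before IB_ACCESS_RELAXED_ORDERING is OR-ed in; otherwise the new flag would land only on the read-only branch. A small sketch of the intended grouping (hypothetical helper, not code from the patch):

    #include <linux/types.h>
    #include <rdma/ib_verbs.h>

    /* Sketch: compute the registration access mask. The parentheses
     * around the ternary make IB_ACCESS_RELAXED_ORDERING apply to both
     * the writing and the non-writing case; without them, "|" would
     * bind to IB_ACCESS_REMOTE_READ alone. */
    static int smbd_reg_access(bool writing)
    {
            return (writing ?
                    IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE :
                    IB_ACCESS_REMOTE_READ) |
                   IB_ACCESS_RELAXED_ORDERING;
    }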
Signed-off-by: Avihai Horon Reviewed-by: Michael Guralnik Signed-off-by: Leon Romanovsky --- fs/cifs/smbdirect.c | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c index 647098a5cf3b..1e86dc8bbe85 100644 --- a/fs/cifs/smbdirect.c +++ b/fs/cifs/smbdirect.c @@ -2178,8 +2178,10 @@ static void smbd_mr_recovery_work(struct work_struct *work) continue; } - smbdirect_mr->mr = ib_alloc_mr(info->pd, info->mr_type, - info->max_frmr_depth, 0); + smbdirect_mr->mr = + ib_alloc_mr(info->pd, info->mr_type, + info->max_frmr_depth, + IB_ACCESS_RELAXED_ORDERING); if (IS_ERR(smbdirect_mr->mr)) { log_rdma_mr(ERR, "ib_alloc_mr failed mr_type=%x max_frmr_depth=%x\n", info->mr_type, @@ -2244,7 +2246,8 @@ static int allocate_mr_list(struct smbd_connection *info) if (!smbdirect_mr) goto out; smbdirect_mr->mr = ib_alloc_mr(info->pd, info->mr_type, - info->max_frmr_depth, 0); + info->max_frmr_depth, + IB_ACCESS_RELAXED_ORDERING); if (IS_ERR(smbdirect_mr->mr)) { log_rdma_mr(ERR, "ib_alloc_mr failed mr_type=%x max_frmr_depth=%x\n", info->mr_type, info->max_frmr_depth); @@ -2406,9 +2409,10 @@ struct smbd_mr *smbd_register_mr( reg_wr->wr.send_flags = IB_SEND_SIGNALED; reg_wr->mr = smbdirect_mr->mr; reg_wr->key = smbdirect_mr->mr->rkey; - reg_wr->access = writing ? - IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE : - IB_ACCESS_REMOTE_READ; + reg_wr->access = + (writing ? IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE : + IB_ACCESS_REMOTE_READ) | + IB_ACCESS_RELAXED_ORDERING; /* * There is no need for waiting for complemtion on ib_post_send From patchwork Mon Apr 5 05:24:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 12182623 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C2797C43462 for ; Mon, 5 Apr 2021 05:24:44 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 90FA9613B4 for ; Mon, 5 Apr 2021 05:24:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232268AbhDEFYp (ORCPT ); Mon, 5 Apr 2021 01:24:45 -0400 Received: from mail.kernel.org ([198.145.29.99]:57488 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232184AbhDEFYm (ORCPT ); Mon, 5 Apr 2021 01:24:42 -0400 Received: by mail.kernel.org (Postfix) with ESMTPSA id 2F92E613A7; Mon, 5 Apr 2021 05:24:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1617600276; bh=PAgj56Yc//zBEZolPkWhsSnGjEMS6kZSDqBlkURbZas=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=AYxv5/O9LXkexYx1MQDU6ox6EdhzHB4uGJEl6xIqmgN3v+/+IXqbR/96mtUTQxa7L jSdhIGDVjcJBsXfrWfnoQ9hoWgB51uBDgqGKwBVjvHr7sKGRXuHD4KE5U5+mvekNf2 jgaH/6UhjlCgxX93pKq/a/2N39eIzYUIbsccbxUAttXvyG30WG82NYcoyLgMJp/QLy GIhiZMCeqTaqK2op2Fnk/dZyJf+fOiHxDpGjeTQwZtW7yad/MlAgSOwdOLXw6TYcP6 A0EKyLcC6rRPmGH9WfOryJ9ALoePf3A/Oe68jTK5RXqYw+Xr+hzYsSTH7CPeDp+ZOr uoDtLhQl6cu1Q== From: Leon Romanovsky To: Doug Ledford 
, Jason Gunthorpe Cc: Avihai Horon , Adit Ranadive , Anna Schumaker , Ariel Elior , Bart Van Assche , Bernard Metzler , Christoph Hellwig , Chuck Lever , "David S. Miller" , Dennis Dalessandro , Devesh Sharma , Faisal Latif , Jack Wang , Jakub Kicinski , "J. Bruce Fields" , Jens Axboe , Karsten Graul , Keith Busch , Lijun Ou , linux-cifs@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-s390@vger.kernel.org, Max Gurtovoy , Max Gurtovoy , "Md. Haris Iqbal" , Michael Guralnik , Michal Kalderon , Mike Marciniszyn , Naresh Kumar PBS , netdev@vger.kernel.org, Potnuri Bharat Teja , rds-devel@oss.oracle.com, Sagi Grimberg , samba-technical@lists.samba.org, Santosh Shilimkar , Selvin Xavier , Shiraz Saleem , Somnath Kotur , Sriharsha Basavapatna , Steve French , Trond Myklebust , VMware PV-Drivers , Weihang Li , Yishai Hadas , Zhu Yanjun Subject: [PATCH rdma-next 08/10] net/rds: Enable Relaxed Ordering Date: Mon, 5 Apr 2021 08:24:02 +0300 Message-Id: <20210405052404.213889-9-leon@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210405052404.213889-1-leon@kernel.org> References: <20210405052404.213889-1-leon@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org From: Avihai Horon Enable Relaxed Ordering for rds. Relaxed Ordering is an optional access flag and as such, it is ignored by vendors that don't support it. Signed-off-by: Avihai Horon Reviewed-by: Michael Guralnik Signed-off-by: Leon Romanovsky --- net/rds/ib_frmr.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/net/rds/ib_frmr.c b/net/rds/ib_frmr.c index 694eb916319e..1a60c2c90c78 100644 --- a/net/rds/ib_frmr.c +++ b/net/rds/ib_frmr.c @@ -76,7 +76,7 @@ static struct rds_ib_mr *rds_ib_alloc_frmr(struct rds_ib_device *rds_ibdev, frmr = &ibmr->u.frmr; frmr->mr = ib_alloc_mr(rds_ibdev->pd, IB_MR_TYPE_MEM_REG, - pool->max_pages, 0); + pool->max_pages, IB_ACCESS_RELAXED_ORDERING); if (IS_ERR(frmr->mr)) { pr_warn("RDS/IB: %s failed to allocate MR", __func__); err = PTR_ERR(frmr->mr); @@ -156,9 +156,8 @@ static int rds_ib_post_reg_frmr(struct rds_ib_mr *ibmr) reg_wr.wr.num_sge = 0; reg_wr.mr = frmr->mr; reg_wr.key = frmr->mr->rkey; - reg_wr.access = IB_ACCESS_LOCAL_WRITE | - IB_ACCESS_REMOTE_READ | - IB_ACCESS_REMOTE_WRITE; + reg_wr.access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_READ | + IB_ACCESS_REMOTE_WRITE | IB_ACCESS_RELAXED_ORDERING; reg_wr.wr.send_flags = IB_SEND_SIGNALED; ret = ib_post_send(ibmr->ic->i_cm_id->qp, ®_wr.wr, NULL); From patchwork Mon Apr 5 05:24:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 12182629 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 73190C433B4 for ; Mon, 5 Apr 2021 05:24:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5277D613A8 for ; Mon, 5 Apr 2021 05:24:53 +0000 (UTC) Received: 
(majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232354AbhDEFY5 (ORCPT ); Mon, 5 Apr 2021 01:24:57 -0400 Received: from mail.kernel.org ([198.145.29.99]:57994 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232302AbhDEFYx (ORCPT ); Mon, 5 Apr 2021 01:24:53 -0400 Received: by mail.kernel.org (Postfix) with ESMTPSA id 27751613A0; Mon, 5 Apr 2021 05:24:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1617600286; bh=qC4efJ6nsgEYXrpmkxKWy75LAL147M2IGYN5TPsAdII=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fWplzcsL9Z5M80eaItlaxJPIR8oDNUKVel7eVHXrIxgYhy15rmgiaRan2ANUF/xfS apgjMlinRcwdZm0uRUeyF/w4IyRoC8LiJdvurqBkHVTLLZHkorC1ppo+CRAOPQhuco hko2LOHnT6P1Cu75LPSLN6AjVOzu0jA/c2M6h8tw4toBn+o4y/+WJoCy91HO2D7mvd NAzKe3/TxiRAXu7DgOhWFA9vuqbpH3rhesPmiSiqOcI7iH/mrdhmzUMyzAgWVBGM3/ VuCoVYPSlpRIc2bqYFPLZamRFc55xmRnknWVB8LUXs8Muk5YNrVOlYzmxGXd+Mjwyu zd12jPeU6mFnw== From: Leon Romanovsky To: Doug Ledford , Jason Gunthorpe Cc: Avihai Horon , Adit Ranadive , Anna Schumaker , Ariel Elior , Bart Van Assche , Bernard Metzler , Christoph Hellwig , Chuck Lever , "David S. Miller" , Dennis Dalessandro , Devesh Sharma , Faisal Latif , Jack Wang , Jakub Kicinski , "J. Bruce Fields" , Jens Axboe , Karsten Graul , Keith Busch , Lijun Ou , linux-cifs@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-s390@vger.kernel.org, Max Gurtovoy , Max Gurtovoy , "Md. Haris Iqbal" , Michael Guralnik , Michal Kalderon , Mike Marciniszyn , Naresh Kumar PBS , netdev@vger.kernel.org, Potnuri Bharat Teja , rds-devel@oss.oracle.com, Sagi Grimberg , samba-technical@lists.samba.org, Santosh Shilimkar , Selvin Xavier , Shiraz Saleem , Somnath Kotur , Sriharsha Basavapatna , Steve French , Trond Myklebust , VMware PV-Drivers , Weihang Li , Yishai Hadas , Zhu Yanjun Subject: [PATCH rdma-next 09/10] net/smc: Enable Relaxed Ordering Date: Mon, 5 Apr 2021 08:24:03 +0300 Message-Id: <20210405052404.213889-10-leon@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210405052404.213889-1-leon@kernel.org> References: <20210405052404.213889-1-leon@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org From: Avihai Horon Enable Relaxed Ordering for smc. Relaxed Ordering is an optional access flag and as such, it is ignored by vendors that don't support it. 
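ULPs that draw their MRs from the common MR pool get the same behaviour by passing the flag once at pool-initialization time, as the nvme-rdma patch earlier in this series does. A minimal sketch of that call, assuming the ib_mr_pool_init() signature extended in patch 01 (sizes are placeholders):

    #include <rdma/ib_verbs.h>
    #include <rdma/mr_pool.h>

    /* Sketch only: fill a QP's MR pool, requesting Relaxed Ordering for
     * every MR in it. */
    static int ulp_init_mr_pool(struct ib_qp *qp, int queue_size,
                                u32 pages_per_mr)
    {
            return ib_mr_pool_init(qp, &qp->rdma_mrs, queue_size,
                                   IB_MR_TYPE_MEM_REG, pages_per_mr, 0,
                                   IB_ACCESS_RELAXED_ORDERING);
    }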
Signed-off-by: Avihai Horon Reviewed-by: Michael Guralnik Signed-off-by: Leon Romanovsky --- net/smc/smc_ib.c | 3 ++- net/smc/smc_wr.c | 3 ++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/net/smc/smc_ib.c b/net/smc/smc_ib.c index 4e91ed3dc265..6b65c5d1f957 100644 --- a/net/smc/smc_ib.c +++ b/net/smc/smc_ib.c @@ -579,7 +579,8 @@ int smc_ib_get_memory_region(struct ib_pd *pd, int access_flags, return 0; /* already done */ buf_slot->mr_rx[link_idx] = - ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG, 1 << buf_slot->order, 0); + ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG, 1 << buf_slot->order, + IB_ACCESS_RELAXED_ORDERING); if (IS_ERR(buf_slot->mr_rx[link_idx])) { int rc; diff --git a/net/smc/smc_wr.c b/net/smc/smc_wr.c index cbc73a7e4d59..78e9650621f1 100644 --- a/net/smc/smc_wr.c +++ b/net/smc/smc_wr.c @@ -555,7 +555,8 @@ static void smc_wr_init_sge(struct smc_link *lnk) lnk->wr_reg.wr.num_sge = 0; lnk->wr_reg.wr.send_flags = IB_SEND_SIGNALED; lnk->wr_reg.wr.opcode = IB_WR_REG_MR; - lnk->wr_reg.access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE; + lnk->wr_reg.access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE | + IB_ACCESS_RELAXED_ORDERING; } void smc_wr_free_link(struct smc_link *lnk) From patchwork Mon Apr 5 05:24:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 12182627 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B481DC433B4 for ; Mon, 5 Apr 2021 05:24:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7DB49613AC for ; Mon, 5 Apr 2021 05:24:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232330AbhDEFYy (ORCPT ); Mon, 5 Apr 2021 01:24:54 -0400 Received: from mail.kernel.org ([198.145.29.99]:57808 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232296AbhDEFYt (ORCPT ); Mon, 5 Apr 2021 01:24:49 -0400 Received: by mail.kernel.org (Postfix) with ESMTPSA id 98D37613A5; Mon, 5 Apr 2021 05:24:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1617600283; bh=dMWjVf2aUxdb7ZrnGVveey6nby3LeCk61fbfNTM9fl4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=gTxgWIVArORnsJVcmrlN8d/R5yi5c6bOrNBzElohC5vaA3tthZxAmBXekzQxLPZt1 znzexk81BrDTA5rsNCCVYOvwwlFDnzS86wc5yRwGWJWRt2aJNEqlhMh8X3Yin5MnUg egYUxDlxiP5r1F4WaDRJYqPSPkYbHdcFvkPU39s/4JSgSFdlIFeQGgL/MD/zmAqfpV quoKIMb/Ra/cj5B4Sy7YeCi7JhHJGHVrBLPV05lfGEptIVl6j3TYWFL1RzMBwdrQlr KEGiPPxsxY/9/6wsL1nkOvjaFxpOYMEgu7CPBdfRPgPfo6Fuhy9FxKi4vnWqOGX83B 98mrg+Kp01FKw== From: Leon Romanovsky To: Doug Ledford , Jason Gunthorpe Cc: Avihai Horon , Adit Ranadive , Anna Schumaker , Ariel Elior , Bart Van Assche , Bernard Metzler , Christoph Hellwig , Chuck Lever , "David S. Miller" , Dennis Dalessandro , Devesh Sharma , Faisal Latif , Jack Wang , Jakub Kicinski , "J. 
Bruce Fields" , Jens Axboe , Karsten Graul , Keith Busch , Lijun Ou , linux-cifs@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-s390@vger.kernel.org, Max Gurtovoy , Max Gurtovoy , "Md. Haris Iqbal" , Michael Guralnik , Michal Kalderon , Mike Marciniszyn , Naresh Kumar PBS , netdev@vger.kernel.org, Potnuri Bharat Teja , rds-devel@oss.oracle.com, Sagi Grimberg , samba-technical@lists.samba.org, Santosh Shilimkar , Selvin Xavier , Shiraz Saleem , Somnath Kotur , Sriharsha Basavapatna , Steve French , Trond Myklebust , VMware PV-Drivers , Weihang Li , Yishai Hadas , Zhu Yanjun Subject: [PATCH rdma-next 10/10] xprtrdma: Enable Relaxed Ordering Date: Mon, 5 Apr 2021 08:24:04 +0300 Message-Id: <20210405052404.213889-11-leon@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210405052404.213889-1-leon@kernel.org> References: <20210405052404.213889-1-leon@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org From: Avihai Horon Enable Relaxed Ordering for xprtrdma. Relaxed Ordering is an optional access flag and as such, it is ignored by vendors that don't support it. Signed-off-by: Avihai Horon Reviewed-by: Michael Guralnik Signed-off-by: Leon Romanovsky --- net/sunrpc/xprtrdma/frwr_ops.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c index cfbdd197cdfe..f9334c0a1a13 100644 --- a/net/sunrpc/xprtrdma/frwr_ops.c +++ b/net/sunrpc/xprtrdma/frwr_ops.c @@ -135,7 +135,8 @@ int frwr_mr_init(struct rpcrdma_xprt *r_xprt, struct rpcrdma_mr *mr) struct ib_mr *frmr; int rc; - frmr = ib_alloc_mr(ep->re_pd, ep->re_mrtype, depth, 0); + frmr = ib_alloc_mr(ep->re_pd, ep->re_mrtype, depth, + IB_ACCESS_RELAXED_ORDERING); if (IS_ERR(frmr)) goto out_mr_err; @@ -339,9 +340,10 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt *r_xprt, reg_wr = &mr->frwr.fr_regwr; reg_wr->mr = ibmr; reg_wr->key = ibmr->rkey; - reg_wr->access = writing ? - IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE : - IB_ACCESS_REMOTE_READ; + reg_wr->access = + (writing ? IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE : + IB_ACCESS_REMOTE_READ) | + IB_ACCESS_RELAXED_ORDERING; mr->mr_handle = ibmr->rkey; mr->mr_length = ibmr->length;