From patchwork Mon Apr 5 05:23:58 2021
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 12182637
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Avihai Horon, Adit Ranadive, Anna Schumaker, Ariel Elior,
    Bart Van Assche, Bernard Metzler, Christoph Hellwig, Chuck Lever,
    "David S. Miller", Dennis Dalessandro, Devesh Sharma, Faisal Latif,
    Jack Wang, Jakub Kicinski, "J. Bruce Fields", Jens Axboe,
    Karsten Graul, Keith Busch, Lijun Ou, linux-cifs@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org,
    linux-s390@vger.kernel.org, Max Gurtovoy, Max Gurtovoy,
    "Md. Haris Iqbal", Michael Guralnik, Michal Kalderon,
    Mike Marciniszyn, Naresh Kumar PBS, netdev@vger.kernel.org,
    Potnuri Bharat Teja, rds-devel@oss.oracle.com, Sagi Grimberg,
    samba-technical@lists.samba.org, Santosh Shilimkar, Selvin Xavier,
    Shiraz Saleem, Somnath Kotur, Sriharsha Basavapatna, Steve French,
    Trond Myklebust, VMware PV-Drivers, Weihang Li, Yishai Hadas,
    Zhu Yanjun
Subject: [PATCH rdma-next 04/10] RDMA/rtrs: Enable Relaxed Ordering
Date: Mon, 5 Apr 2021 08:23:58 +0300
Message-Id: <20210405052404.213889-5-leon@kernel.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210405052404.213889-1-leon@kernel.org>
References: <20210405052404.213889-1-leon@kernel.org>

From: Avihai Horon

Enable Relaxed Ordering for the rtrs client and server. Relaxed Ordering
is an optional access flag and as such, it is ignored by vendors that
don't support it.
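For reference, the pattern applied throughout the diff below is sketched
here. This is an illustrative fragment only, not rtrs code: the function
name example_reg_mr and its parameters are hypothetical, and the
four-argument ib_alloc_mr() taking an access-flags argument is the form
assumed by this series. The change amounts to OR-ing
IB_ACCESS_RELAXED_ORDERING into the existing access masks; HCAs that do
not support the flag silently ignore it.

#include <rdma/ib_verbs.h>

/*
 * Illustrative sketch (not part of rtrs): allocate a fast-registration MR
 * and build its IB_WR_REG_MR work request with Relaxed Ordering enabled.
 * Unsupporting HCAs ignore the flag, so no capability check is needed.
 */
static int example_reg_mr(struct ib_pd *pd, u32 max_pages,
			  struct ib_reg_wr *rwr, struct ib_mr **out_mr)
{
	struct ib_mr *mr;

	/* Pass the optional flag at MR allocation time ... */
	mr = ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG, max_pages,
			 IB_ACCESS_RELAXED_ORDERING);
	if (IS_ERR(mr))
		return PTR_ERR(mr);

	/* ... and OR it into the access mask of the registration WR. */
	rwr->wr.opcode = IB_WR_REG_MR;
	rwr->wr.send_flags = 0;
	rwr->mr = mr;
	rwr->key = mr->rkey;
	rwr->access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE |
		      IB_ACCESS_RELAXED_ORDERING;

	*out_mr = mr;
	return 0;
}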
Signed-off-by: Avihai Horon
Reviewed-by: Michael Guralnik
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/ulp/rtrs/rtrs-clt.c |  6 ++++--
 drivers/infiniband/ulp/rtrs/rtrs-srv.c | 15 ++++++++-------
 2 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index 0d3960ed5b2b..a3fbb47a3574 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -1099,7 +1099,8 @@ static int rtrs_clt_read_req(struct rtrs_clt_io_req *req)
 			.mr = req->mr,
 			.key = req->mr->rkey,
 			.access = (IB_ACCESS_LOCAL_WRITE |
-				   IB_ACCESS_REMOTE_WRITE),
+				   IB_ACCESS_REMOTE_WRITE |
+				   IB_ACCESS_RELAXED_ORDERING),
 		};
 		wr = &rwr.wr;
 
@@ -1260,7 +1261,8 @@ static int alloc_sess_reqs(struct rtrs_clt_sess *sess)
 			goto out;
 
 		req->mr = ib_alloc_mr(sess->s.dev->ib_pd, IB_MR_TYPE_MEM_REG,
-				      sess->max_pages_per_mr, 0);
+				      sess->max_pages_per_mr,
+				      IB_ACCESS_RELAXED_ORDERING);
 		if (IS_ERR(req->mr)) {
 			err = PTR_ERR(req->mr);
 			req->mr = NULL;
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
index 575f31ff20fd..c28ed5e2245d 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
@@ -312,8 +312,8 @@ static int rdma_write_sg(struct rtrs_srv_op *id)
 		rwr.mr = srv_mr->mr;
 		rwr.wr.send_flags = 0;
 		rwr.key = srv_mr->mr->rkey;
-		rwr.access = (IB_ACCESS_LOCAL_WRITE |
-			      IB_ACCESS_REMOTE_WRITE);
+		rwr.access = (IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE |
+			      IB_ACCESS_RELAXED_ORDERING);
 		msg = srv_mr->iu->buf;
 		msg->buf_id = cpu_to_le16(id->msg_id);
 		msg->type = cpu_to_le16(RTRS_MSG_RKEY_RSP);
@@ -432,8 +432,8 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id,
 		rwr.wr.send_flags = 0;
 		rwr.mr = srv_mr->mr;
 		rwr.key = srv_mr->mr->rkey;
-		rwr.access = (IB_ACCESS_LOCAL_WRITE |
-			      IB_ACCESS_REMOTE_WRITE);
+		rwr.access = (IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE |
+			      IB_ACCESS_RELAXED_ORDERING);
 		msg = srv_mr->iu->buf;
 		msg->buf_id = cpu_to_le16(id->msg_id);
 		msg->type = cpu_to_le16(RTRS_MSG_RKEY_RSP);
@@ -638,7 +638,7 @@ static int map_cont_bufs(struct rtrs_srv_sess *sess)
 			goto free_sg;
 		}
 		mr = ib_alloc_mr(sess->s.dev->ib_pd, IB_MR_TYPE_MEM_REG,
-				 sgt->nents, 0);
+				 sgt->nents, IB_ACCESS_RELAXED_ORDERING);
 		if (IS_ERR(mr)) {
 			err = PTR_ERR(mr);
 			goto unmap_sg;
@@ -823,8 +823,9 @@ static int process_info_req(struct rtrs_srv_con *con,
 		rwr[mri].wr.send_flags = 0;
 		rwr[mri].mr = mr;
 		rwr[mri].key = mr->rkey;
-		rwr[mri].access = (IB_ACCESS_LOCAL_WRITE |
-				   IB_ACCESS_REMOTE_WRITE);
+		rwr[mri].access =
+			(IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE |
+			 IB_ACCESS_RELAXED_ORDERING);
 		reg_wr = &rwr[mri].wr;
 	}