From patchwork Tue Jun 8 11:35:32 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jinpu Wang
X-Patchwork-Id: 12306481
From: Jack Wang
To: linux-rdma@vger.kernel.org
Cc: bvanassche@acm.org, leon@kernel.org, dledford@redhat.com, jgg@ziepe.ca,
 haris.iqbal@ionos.com, jinpu.wang@ionos.com, axboe@kernel.dk,
 Jack Wang, Md Haris Iqbal
Subject: [PATCH for-next 1/5] RDMA/rtrs: Introduce head/tail wr
Date: Tue, 8 Jun 2021 13:35:32 +0200
Message-Id: <20210608113536.42965-2-jinpu.wang@ionos.com>
In-Reply-To: <20210608113536.42965-1-jinpu.wang@ionos.com>
References: <20210608113536.42965-1-jinpu.wang@ionos.com>
Precedence: bulk
List-ID: 
X-Mailing-List: linux-rdma@vger.kernel.org

From: Jack Wang

Introduce a tail wr, which can be posted as the last wr of a chain; a
later patch uses it to send the local invalidate wr after the rdma wr.
While at it, also fix a coding style issue.

Signed-off-by: Jack Wang
Reviewed-by: Md Haris Iqbal
---
 drivers/infiniband/ulp/rtrs/rtrs-clt.c | 16 ++++++++-------
 drivers/infiniband/ulp/rtrs/rtrs-pri.h |  3 ++-
 drivers/infiniband/ulp/rtrs/rtrs.c     | 28 +++++++++++++++-----------
 3 files changed, 27 insertions(+), 20 deletions(-)

diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index 67ff5bf9bfa8..5ec02f78be3f 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -480,7 +480,7 @@ static int rtrs_post_send_rdma(struct rtrs_clt_con *con,
 
 	return rtrs_iu_post_rdma_write_imm(&con->c, req->iu, &sge, 1,
 					   rbuf->rkey, rbuf->addr + off,
-					   imm, flags, wr);
+					   imm, flags, wr, NULL);
 }
 
 static void process_io_rsp(struct rtrs_clt_sess *sess, u32 msg_id,
@@ -999,9 +999,10 @@ rtrs_clt_get_copy_req(struct rtrs_clt_sess *alive_sess,
 }
 
 static int rtrs_post_rdma_write_sg(struct rtrs_clt_con *con,
-				   struct rtrs_clt_io_req *req,
-				   struct rtrs_rbuf *rbuf,
-				   u32 size, u32 imm)
+				   struct rtrs_clt_io_req *req,
+				   struct rtrs_rbuf *rbuf,
+				   u32 size, u32 imm, struct ib_send_wr *wr,
+				   struct ib_send_wr *tail)
 {
 	struct rtrs_clt_sess *sess = to_clt_sess(con->c.sess);
 	struct ib_sge *sge = req->sge;
@@ -1009,6 +1010,7 @@ static int rtrs_post_rdma_write_sg(struct rtrs_clt_con *con,
 	struct scatterlist *sg;
 	size_t num_sge;
 	int i;
+	struct ib_send_wr *ptail = NULL;
 
 	for_each_sg(req->sglist, sg, req->sg_cnt, i) {
 		sge[i].addr = sg_dma_address(sg);
@@ -1033,7 +1035,7 @@ static int rtrs_post_rdma_write_sg(struct rtrs_clt_con *con,
 
 	return rtrs_iu_post_rdma_write_imm(&con->c, req->iu, sge, num_sge,
 					   rbuf->rkey, rbuf->addr, imm,
-					   flags, NULL);
+					   flags, wr, ptail);
 }
 
 static int rtrs_clt_write_req(struct rtrs_clt_io_req *req)
@@ -1081,8 +1083,8 @@ static int rtrs_clt_write_req(struct rtrs_clt_io_req *req)
 	rtrs_clt_update_all_stats(req, WRITE);
 
 	ret = rtrs_post_rdma_write_sg(req->con, req, rbuf,
-				      req->usr_len + sizeof(*msg),
-				      imm);
+				      req->usr_len + sizeof(*msg),
+				      imm, NULL, NULL);
 	if (unlikely(ret)) {
 		rtrs_err_rl(s,
 			    "Write request failed: error=%d path=%s [%s:%u]\n",
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-pri.h b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
index 76cca2058f6f..36f184a3b676 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-pri.h
+++ b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
@@ -305,7 +305,8 @@ int rtrs_iu_post_rdma_write_imm(struct rtrs_con *con, struct rtrs_iu *iu,
 				struct ib_sge *sge, unsigned int num_sge,
 				u32 rkey, u64 rdma_addr, u32 imm_data,
 				enum ib_send_flags flags,
-				struct ib_send_wr *head);
+				struct ib_send_wr *head,
+				struct ib_send_wr *tail);
 
 int rtrs_post_recv_empty(struct rtrs_con *con, struct ib_cqe *cqe);
 int rtrs_post_rdma_write_imm_empty(struct rtrs_con *con, struct ib_cqe *cqe,
diff --git a/drivers/infiniband/ulp/rtrs/rtrs.c b/drivers/infiniband/ulp/rtrs/rtrs.c
index 08e1f7d82c95..61919ebd92b2 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs.c
@@ -105,18 +105,21 @@ int rtrs_post_recv_empty(struct rtrs_con *con, struct ib_cqe *cqe)
 EXPORT_SYMBOL_GPL(rtrs_post_recv_empty);
 
 static int rtrs_post_send(struct ib_qp *qp, struct ib_send_wr *head,
-			  struct ib_send_wr *wr)
+			  struct ib_send_wr *wr, struct ib_send_wr *tail)
 {
 	if (head) {
-		struct ib_send_wr *tail = head;
+		struct ib_send_wr *next = head;
 
-		while (tail->next)
-			tail = tail->next;
-		tail->next = wr;
+		while (next->next)
+			next = next->next;
+		next->next = wr;
 	} else {
 		head = wr;
 	}
 
+	if (tail)
+		wr->next = tail;
+
 	return ib_post_send(qp, head, NULL);
 }
 
@@ -142,15 +145,16 @@ int rtrs_iu_post_send(struct rtrs_con *con, struct rtrs_iu *iu, size_t size,
 		.send_flags = IB_SEND_SIGNALED,
 	};
 
-	return rtrs_post_send(con->qp, head, &wr);
+	return rtrs_post_send(con->qp, head, &wr, NULL);
 }
 EXPORT_SYMBOL_GPL(rtrs_iu_post_send);
 
 int rtrs_iu_post_rdma_write_imm(struct rtrs_con *con, struct rtrs_iu *iu,
-				struct ib_sge *sge, unsigned int num_sge,
-				u32 rkey, u64 rdma_addr, u32 imm_data,
-				enum ib_send_flags flags,
-				struct ib_send_wr *head)
+				struct ib_sge *sge, unsigned int num_sge,
+				u32 rkey, u64 rdma_addr, u32 imm_data,
+				enum ib_send_flags flags,
+				struct ib_send_wr *head,
+				struct ib_send_wr *tail)
 {
 	struct ib_rdma_wr wr;
 	int i;
@@ -174,7 +178,7 @@ int rtrs_iu_post_rdma_write_imm(struct rtrs_con *con, struct rtrs_iu *iu,
 		if (WARN_ON(sge[i].length == 0))
 			return -EINVAL;
 
-	return rtrs_post_send(con->qp, head, &wr.wr);
+	return rtrs_post_send(con->qp, head, &wr.wr, tail);
 }
 EXPORT_SYMBOL_GPL(rtrs_iu_post_rdma_write_imm);
 
@@ -191,7 +195,7 @@ int rtrs_post_rdma_write_imm_empty(struct rtrs_con *con, struct ib_cqe *cqe,
 		.wr.ex.imm_data = cpu_to_be32(imm_data),
 	};
 
-	return rtrs_post_send(con->qp, head, &wr.wr);
+	return rtrs_post_send(con->qp, head, &wr.wr, NULL);
 }
 EXPORT_SYMBOL_GPL(rtrs_post_rdma_write_imm_empty);

From patchwork Tue Jun 8 11:35:33 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jinpu Wang
X-Patchwork-Id: 12306483
From: Jack Wang
To: linux-rdma@vger.kernel.org
Cc: bvanassche@acm.org, leon@kernel.org, dledford@redhat.com, jgg@ziepe.ca,
 haris.iqbal@ionos.com, jinpu.wang@ionos.com, axboe@kernel.dk,
 Jack Wang, Dima Stepanov
Subject: [PATCH for-next 2/5] RDMA/rtrs-clt: Write path fast memory registration
Date: Tue, 8 Jun 2021 13:35:33 +0200
Message-Id: <20210608113536.42965-3-jinpu.wang@ionos.com>
In-Reply-To: <20210608113536.42965-1-jinpu.wang@ionos.com>
References: <20210608113536.42965-1-jinpu.wang@ionos.com>
Precedence: bulk
List-ID: 
X-Mailing-List: linux-rdma@vger.kernel.org

From: Jack Wang

With fast memory registration in the write path, we can reduce memory
consumption by using a smaller max_send_sge, support IO bigger than
116 KB (29 segments * 4 KB) without splitting, and make the IO path
more symmetric.

To avoid occasional MR registration failures, wait for the invalidation
to finish before registering the MR again. Introduce a refcount and only
finish the request once both the local invalidation and the io reply
have arrived.
Signed-off-by: Jack Wang
Signed-off-by: Md Haris Iqbal
Signed-off-by: Dima Stepanov
---
 drivers/infiniband/ulp/rtrs/rtrs-clt.c | 100 ++++++++++++++++++-------
 drivers/infiniband/ulp/rtrs/rtrs-clt.h |   1 +
 2 files changed, 74 insertions(+), 27 deletions(-)

diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index 5ec02f78be3f..b7c9684d7f62 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -412,6 +412,7 @@ static void complete_rdma_req(struct rtrs_clt_io_req *req, int errno,
 			req->inv_errno = errno;
 		}
 
+		refcount_inc(&req->ref);
 		err = rtrs_inv_rkey(req);
 		if (unlikely(err)) {
 			rtrs_err(con->c.sess, "Send INV WR key=%#x: %d\n",
@@ -427,10 +428,14 @@ static void complete_rdma_req(struct rtrs_clt_io_req *req, int errno,
 
 			return;
 		}
+		if (!refcount_dec_and_test(&req->ref))
+			return;
 		}
 		ib_dma_unmap_sg(sess->s.dev->ib_dev, req->sglist,
 				req->sg_cnt, req->dir);
 	}
+	if (!refcount_dec_and_test(&req->ref))
+		return;
 	if (sess->clt->mp_policy == MP_POLICY_MIN_INFLIGHT)
 		atomic_dec(&sess->stats->inflight);
@@ -438,10 +443,9 @@ static void complete_rdma_req(struct rtrs_clt_io_req *req, int errno,
 	req->con = NULL;
 
 	if (errno) {
-		rtrs_err_rl(con->c.sess,
-			    "IO request failed: error=%d path=%s [%s:%u]\n",
+		rtrs_err_rl(con->c.sess, "IO request failed: error=%d path=%s [%s:%u] notify=%d\n",
 			    errno, kobject_name(&sess->kobj), sess->hca_name,
-			    sess->hca_port);
+			    sess->hca_port, notify);
 	}
 
 	if (notify)
@@ -956,6 +960,7 @@ static void rtrs_clt_init_req(struct rtrs_clt_io_req *req,
 	req->need_inv = false;
 	req->need_inv_comp = false;
 	req->inv_errno = 0;
+	refcount_set(&req->ref, 1);
 
 	iov_iter_kvec(&iter, READ, vec, 1, usr_len);
 	len = _copy_from_iter(req->iu->buf, usr_len, &iter);
@@ -1000,7 +1005,7 @@ rtrs_clt_get_copy_req(struct rtrs_clt_sess *alive_sess,
 
 static int rtrs_post_rdma_write_sg(struct rtrs_clt_con *con,
 				   struct rtrs_clt_io_req *req,
-				   struct rtrs_rbuf *rbuf,
+				   struct rtrs_rbuf *rbuf, bool fr_en,
 				   u32 size, u32 imm, struct ib_send_wr *wr,
 				   struct ib_send_wr *tail)
 {
@@ -1012,17 +1017,26 @@ static int rtrs_post_rdma_write_sg(struct rtrs_clt_con *con,
 	int i;
 	struct ib_send_wr *ptail = NULL;
 
-	for_each_sg(req->sglist, sg, req->sg_cnt, i) {
-		sge[i].addr = sg_dma_address(sg);
-		sge[i].length = sg_dma_len(sg);
-		sge[i].lkey = sess->s.dev->ib_pd->local_dma_lkey;
+	if (fr_en) {
+		i = 0;
+		sge[i].addr = req->mr->iova;
+		sge[i].length = req->mr->length;
+		sge[i].lkey = req->mr->lkey;
+		i++;
+		num_sge = 2;
+		ptail = tail;
+	} else {
+		for_each_sg(req->sglist, sg, req->sg_cnt, i) {
+			sge[i].addr = sg_dma_address(sg);
+			sge[i].length = sg_dma_len(sg);
+			sge[i].lkey = sess->s.dev->ib_pd->local_dma_lkey;
+		}
+		num_sge = 1 + req->sg_cnt;
 	}
 	sge[i].addr = req->iu->dma_addr;
 	sge[i].length = size;
 	sge[i].lkey = sess->s.dev->ib_pd->local_dma_lkey;
 
-	num_sge = 1 + req->sg_cnt;
-
 	/*
 	 * From time to time we have to post signalled sends,
 	 * or send queue will fill up and only QP reset can help.
@@ -1038,6 +1052,21 @@ static int rtrs_post_rdma_write_sg(struct rtrs_clt_con *con,
 			   flags, wr, ptail);
 }
 
+static int rtrs_map_sg_fr(struct rtrs_clt_io_req *req, size_t count)
+{
+	int nr;
+
+	/* Align the MR to a 4K page size to match the block virt boundary */
+	nr = ib_map_mr_sg(req->mr, req->sglist, count, NULL, SZ_4K);
+	if (nr < 0)
+		return nr;
+	if (unlikely(nr < req->sg_cnt))
+		return -EINVAL;
+	ib_update_fast_reg_key(req->mr, ib_inc_rkey(req->mr->rkey));
+
+	return nr;
+}
+
 static int rtrs_clt_write_req(struct rtrs_clt_io_req *req)
 {
 	struct rtrs_clt_con *con = req->con;
@@ -1048,6 +1077,10 @@ static int rtrs_clt_write_req(struct rtrs_clt_io_req *req)
 	struct rtrs_rbuf *rbuf;
 	int ret, count = 0;
 	u32 imm, buf_id;
+	struct ib_reg_wr rwr;
+	struct ib_send_wr inv_wr;
+	struct ib_send_wr *wr = NULL;
+	bool fr_en = false;
 
 	const size_t tsize = sizeof(*msg) + req->data_len + req->usr_len;
 
@@ -1076,15 +1109,43 @@ static int rtrs_clt_write_req(struct rtrs_clt_io_req *req)
 	req->sg_size = tsize;
 	rbuf = &sess->rbufs[buf_id];
 
+	if (count) {
+		ret = rtrs_map_sg_fr(req, count);
+		if (ret < 0) {
+			rtrs_err_rl(s,
+				    "Write request failed, failed to map fast reg. data, err: %d\n",
+				    ret);
+			ib_dma_unmap_sg(sess->s.dev->ib_dev, req->sglist,
+					req->sg_cnt, req->dir);
+			return ret;
+		}
+		inv_wr = (struct ib_send_wr) {
+			.opcode		    = IB_WR_LOCAL_INV,
+			.wr_cqe		    = &req->inv_cqe,
+			.send_flags	    = IB_SEND_SIGNALED,
+			.ex.invalidate_rkey = req->mr->rkey,
+		};
+		req->inv_cqe.done = rtrs_clt_inv_rkey_done;
+		rwr = (struct ib_reg_wr) {
+			.wr.opcode = IB_WR_REG_MR,
+			.wr.wr_cqe = &fast_reg_cqe,
+			.mr = req->mr,
+			.key = req->mr->rkey,
+			.access = (IB_ACCESS_LOCAL_WRITE),
+		};
+		wr = &rwr.wr;
+		fr_en = true;
+		refcount_inc(&req->ref);
+	}
 	/*
 	 * Update stats now, after request is successfully sent it is not
 	 * safe anymore to touch it.
 	 */
 	rtrs_clt_update_all_stats(req, WRITE);
 
-	ret = rtrs_post_rdma_write_sg(req->con, req, rbuf,
+	ret = rtrs_post_rdma_write_sg(req->con, req, rbuf, fr_en,
 				      req->usr_len + sizeof(*msg),
-				      imm, NULL, NULL);
+				      imm, wr, &inv_wr);
 	if (unlikely(ret)) {
 		rtrs_err_rl(s,
 			    "Write request failed: error=%d path=%s [%s:%u]\n",
@@ -1100,21 +1161,6 @@ static int rtrs_clt_write_req(struct rtrs_clt_io_req *req)
 	return ret;
 }
 
-static int rtrs_map_sg_fr(struct rtrs_clt_io_req *req, size_t count)
-{
-	int nr;
-
-	/* Align the MR to a 4K page size to match the block virt boundary */
-	nr = ib_map_mr_sg(req->mr, req->sglist, count, NULL, SZ_4K);
-	if (nr < 0)
-		return nr;
-	if (unlikely(nr < req->sg_cnt))
-		return -EINVAL;
-	ib_update_fast_reg_key(req->mr, ib_inc_rkey(req->mr->rkey));
-
-	return nr;
-}
-
 static int rtrs_clt_read_req(struct rtrs_clt_io_req *req)
 {
 	struct rtrs_clt_con *con = req->con;
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.h b/drivers/infiniband/ulp/rtrs/rtrs-clt.h
index eed2a20ee9be..e276a2dfcf7c 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.h
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.h
@@ -116,6 +116,7 @@ struct rtrs_clt_io_req {
 	int inv_errno;
 	bool need_inv_comp;
 	bool need_inv;
+	refcount_t ref;
 };
 
 struct rtrs_rbuf {

From patchwork Tue Jun 8 11:35:34 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jinpu Wang
X-Patchwork-Id: 12306485
From: Jack Wang
To: linux-rdma@vger.kernel.org
Cc: bvanassche@acm.org, leon@kernel.org, dledford@redhat.com, jgg@ziepe.ca,
 haris.iqbal@ionos.com, jinpu.wang@ionos.com, axboe@kernel.dk,
 Jack Wang, Md Haris Iqbal
Subject: [PATCH for-next 3/5] RDMA/rtrs_clt: Alloc less memory with write path fast memory registration
Date: Tue, 8 Jun 2021 13:35:34 +0200
Message-Id: <20210608113536.42965-4-jinpu.wang@ionos.com>
In-Reply-To: <20210608113536.42965-1-jinpu.wang@ionos.com>
References: <20210608113536.42965-1-jinpu.wang@ionos.com>
Precedence: bulk
List-ID: 
X-Mailing-List: linux-rdma@vger.kernel.org

From: Jack Wang

With write path fast memory registration,
we need less memory for each request: max_send_sge can be reduced to
save memory. Also convert the kmalloc_array call to kcalloc.

Signed-off-by: Jack Wang
Reviewed-by: Md Haris Iqbal
---
 drivers/infiniband/ulp/rtrs/rtrs-clt.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index b7c9684d7f62..af738e7e1396 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -1372,8 +1372,7 @@ static int alloc_sess_reqs(struct rtrs_clt_sess *sess)
 		if (!req->iu)
 			goto out;
 
-		req->sge = kmalloc_array(clt->max_segments + 1,
-					 sizeof(*req->sge), GFP_KERNEL);
+		req->sge = kcalloc(2, sizeof(*req->sge), GFP_KERNEL);
 		if (!req->sge)
 			goto out;
 
@@ -1674,7 +1673,7 @@ static int create_con_cq_qp(struct rtrs_clt_con *con)
 		max_recv_wr = min_t(int, sess->s.dev->ib_dev->attrs.max_qp_wr,
 				    sess->queue_depth * 3 + 1);
-		max_send_sge = sess->clt->max_segments + 1;
+		max_send_sge = 2;
 	}
 	cq_num = max_send_wr + max_recv_wr;
 	/* alloc iu to recv new rkey reply when server reports flags set */

From patchwork Tue Jun 8 11:35:35 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jinpu Wang
X-Patchwork-Id: 12306487
From: Jack Wang
To: linux-rdma@vger.kernel.org
Cc: bvanassche@acm.org, leon@kernel.org, dledford@redhat.com, jgg@ziepe.ca,
 haris.iqbal@ionos.com, jinpu.wang@ionos.com, axboe@kernel.dk,
 Jack Wang, Md Haris Iqbal
Subject: [PATCH for-next 4/5] RDMA/rtrs-clt: Raise MAX_SEGMENTS
Date: Tue, 8 Jun 2021 13:35:35 +0200
Message-Id: <20210608113536.42965-5-jinpu.wang@ionos.com>
In-Reply-To: <20210608113536.42965-1-jinpu.wang@ionos.com>
References: <20210608113536.42965-1-jinpu.wang@ionos.com>
Precedence: bulk
List-ID: 
X-Mailing-List: linux-rdma@vger.kernel.org

From: Jack Wang

As we can now do fast memory registration on the write path, we can
increase max_segments; the new default of 128 segments allows IO of up
to 512 KB.
Signed-off-by: Jack Wang
Reviewed-by: Md Haris Iqbal
---
 drivers/infiniband/ulp/rtrs/rtrs-clt.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index af738e7e1396..721ed0b5ae70 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -32,6 +32,8 @@
 #define RTRS_RECONNECT_SEED 8
 
 #define FIRST_CONN 0x01
+/* limit to 128 * 4k = 512k max IO */
+#define RTRS_MAX_SEGMENTS 128
 
 MODULE_DESCRIPTION("RDMA Transport Client");
 MODULE_LICENSE("GPL");
@@ -1545,7 +1547,7 @@ static struct rtrs_clt_sess *alloc_sess(struct rtrs_clt *clt,
 		       rdma_addr_size((struct sockaddr *)path->src));
 	strscpy(sess->s.sessname, clt->sessname, sizeof(sess->s.sessname));
 	sess->clt = clt;
-	sess->max_pages_per_mr = max_segments;
+	sess->max_pages_per_mr = RTRS_MAX_SEGMENTS;
 	init_waitqueue_head(&sess->state_wq);
 	sess->state = RTRS_CLT_CONNECTING;
 	atomic_set(&sess->connected_cnt, 0);
@@ -2694,7 +2696,7 @@ static struct rtrs_clt *alloc_clt(const char *sessname, size_t paths_num,
 	clt->paths_up = MAX_PATHS_NUM;
 	clt->port = port;
 	clt->pdu_sz = pdu_sz;
-	clt->max_segments = max_segments;
+	clt->max_segments = RTRS_MAX_SEGMENTS;
 	clt->reconnect_delay_sec = reconnect_delay_sec;
 	clt->max_reconnect_attempts = max_reconnect_attempts;
 	clt->priv = priv;

From patchwork Tue Jun 8 11:35:36 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jinpu Wang
X-Patchwork-Id: 12306479
From: Jack Wang
To: linux-rdma@vger.kernel.org
Cc: bvanassche@acm.org, leon@kernel.org, dledford@redhat.com, jgg@ziepe.ca,
 haris.iqbal@ionos.com, jinpu.wang@ionos.com, axboe@kernel.dk,
 Jack Wang, Md Haris Iqbal
Subject: [PATCH for-next 5/5] rnbd/rtrs-clt: Query and use max_segments from rtrs-clt.
Date: Tue, 8 Jun 2021 13:35:36 +0200
Message-Id: <20210608113536.42965-6-jinpu.wang@ionos.com>
In-Reply-To: <20210608113536.42965-1-jinpu.wang@ionos.com>
References: <20210608113536.42965-1-jinpu.wang@ionos.com>
Precedence: bulk
List-ID: 
X-Mailing-List: linux-rdma@vger.kernel.org

From: Jack Wang

With fast memory registration on write requests, rnbd-clt can do bigger
IO without splitting. rnbd-clt now queries rtrs-clt for max_segments
instead of using BMAX_SEGMENTS. BMAX_SEGMENTS is no longer needed, so
remove it.
Cc: Jens Axboe
Signed-off-by: Jack Wang
Reviewed-by: Md Haris Iqbal
---
 drivers/block/rnbd/rnbd-clt.c          |  5 +++--
 drivers/block/rnbd/rnbd-clt.h          |  5 +----
 drivers/infiniband/ulp/rtrs/rtrs-clt.c | 18 ++++++++----------
 drivers/infiniband/ulp/rtrs/rtrs.h     |  2 +-
 4 files changed, 13 insertions(+), 17 deletions(-)

diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
index c604a402cd5c..d6f12e6c91f7 100644
--- a/drivers/block/rnbd/rnbd-clt.c
+++ b/drivers/block/rnbd/rnbd-clt.c
@@ -92,7 +92,7 @@ static int rnbd_clt_set_dev_attr(struct rnbd_clt_dev *dev,
 	dev->fua = !!(rsp->cache_policy & RNBD_FUA);

 	dev->max_hw_sectors = sess->max_io_size / SECTOR_SIZE;
-	dev->max_segments = BMAX_SEGMENTS;
+	dev->max_segments = sess->max_segments;

 	return 0;
 }
@@ -1292,7 +1292,7 @@ find_and_get_or_create_sess(const char *sessname,
 	sess->rtrs = rtrs_clt_open(&rtrs_ops, sessname,
 				   paths, path_cnt, port_nr,
 				   0, /* Do not use pdu of rtrs */
-				   RECONNECT_DELAY, BMAX_SEGMENTS,
+				   RECONNECT_DELAY,
 				   MAX_RECONNECTS, nr_poll_queues);
 	if (IS_ERR(sess->rtrs)) {
 		err = PTR_ERR(sess->rtrs);
@@ -1306,6 +1306,7 @@ find_and_get_or_create_sess(const char *sessname,
 	sess->max_io_size = attrs.max_io_size;
 	sess->queue_depth = attrs.queue_depth;
 	sess->nr_poll_queues = nr_poll_queues;
+	sess->max_segments = attrs.max_segments;

 	err = setup_mq_tags(sess);
 	if (err)
diff --git a/drivers/block/rnbd/rnbd-clt.h b/drivers/block/rnbd/rnbd-clt.h
index b5322c5aaac0..9ef8c4f306f2 100644
--- a/drivers/block/rnbd/rnbd-clt.h
+++ b/drivers/block/rnbd/rnbd-clt.h
@@ -20,10 +20,6 @@
 #include "rnbd-proto.h"
 #include "rnbd-log.h"

-/* Max. number of segments per IO request, Mellanox Connect X ~ Connect X5,
- * choose minimial 30 for all, minus 1 for internal protocol, so 29.
- */
-#define BMAX_SEGMENTS 29
 /* time in seconds between reconnect tries, default to 30 s */
 #define RECONNECT_DELAY 30
 /*
@@ -89,6 +85,7 @@ struct rnbd_clt_session {
 	atomic_t busy;
 	size_t queue_depth;
 	u32 max_io_size;
+	u32 max_segments;
 	struct blk_mq_tag_set tag_set;
 	u32 nr_poll_queues;
 	struct mutex lock; /* protects state and devs_list */
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index 721ed0b5ae70..40dd524b5101 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -1357,7 +1357,6 @@ static void free_sess_reqs(struct rtrs_clt_sess *sess)
 static int alloc_sess_reqs(struct rtrs_clt_sess *sess)
 {
 	struct rtrs_clt_io_req *req;
-	struct rtrs_clt *clt = sess->clt;
 	int i, err = -ENOMEM;

 	sess->reqs = kcalloc(sess->queue_depth, sizeof(*sess->reqs),
@@ -1466,6 +1465,8 @@ static void query_fast_reg_mode(struct rtrs_clt_sess *sess)
 	sess->max_pages_per_mr = min3(sess->max_pages_per_mr,
 				      (u32)max_pages_per_mr,
 				      ib_dev->attrs.max_fast_reg_page_list_len);
+	sess->clt->max_segments =
+		min(sess->max_pages_per_mr, sess->clt->max_segments);
 }

 static bool rtrs_clt_change_state_get_old(struct rtrs_clt_sess *sess,
@@ -1503,9 +1504,8 @@ static void rtrs_clt_reconnect_work(struct work_struct *work);
 static void rtrs_clt_close_work(struct work_struct *work);

 static struct rtrs_clt_sess *alloc_sess(struct rtrs_clt *clt,
-					const struct rtrs_addr *path,
-					size_t con_num, u16 max_segments,
-					u32 nr_poll_queues)
+					const struct rtrs_addr *path,
+					size_t con_num, u32 nr_poll_queues)
 {
 	struct rtrs_clt_sess *sess;
 	int err = -ENOMEM;
@@ -2667,7 +2667,6 @@ static struct rtrs_clt *alloc_clt(const char *sessname, size_t paths_num,
 				  u16 port, size_t pdu_sz, void *priv,
 				  void	(*link_ev)(void *priv,
 						   enum rtrs_clt_link_ev ev),
-				  unsigned int max_segments,
 				  unsigned int reconnect_delay_sec,
 				  unsigned int max_reconnect_attempts)
 {
@@ -2765,7 +2764,6 @@ static void free_clt(struct rtrs_clt *clt)
  * @port: port to be used by the RTRS session
  * @pdu_sz: Size of extra payload which can be accessed after permit allocation.
  * @reconnect_delay_sec: time between reconnect tries
- * @max_segments: Max. number of segments per IO request
  * @max_reconnect_attempts: Number of times to reconnect on error before giving
  *			    up, 0 for disabled, -1 for forever
  * @nr_poll_queues: number of polling mode connection using IB_POLL_DIRECT flag
@@ -2780,7 +2778,6 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
 				 const struct rtrs_addr *paths,
 				 size_t paths_num, u16 port,
 				 size_t pdu_sz, u8 reconnect_delay_sec,
-				 u16 max_segments,
 				 s16 max_reconnect_attempts, u32 nr_poll_queues)
 {
 	struct rtrs_clt_sess *sess, *tmp;
@@ -2789,7 +2786,7 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,

 	clt = alloc_clt(sessname, paths_num, port, pdu_sz, ops->priv,
 			ops->link_ev,
-			max_segments, reconnect_delay_sec,
+			reconnect_delay_sec,
 			max_reconnect_attempts);
 	if (IS_ERR(clt)) {
 		err = PTR_ERR(clt);
@@ -2799,7 +2796,7 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
 		struct rtrs_clt_sess *sess;

 		sess = alloc_sess(clt, &paths[i], nr_cpu_ids,
-				  max_segments, nr_poll_queues);
+				  nr_poll_queues);
 		if (IS_ERR(sess)) {
 			err = PTR_ERR(sess);
 			goto close_all_sess;
@@ -3061,6 +3058,7 @@ int rtrs_clt_query(struct rtrs_clt *clt, struct rtrs_attrs *attr)
 		return -ECOMM;

 	attr->queue_depth = clt->queue_depth;
+	attr->max_segments = clt->max_segments;
 	/* Cap max_io_size to min of remote buffer size and the fr pages */
 	attr->max_io_size = min_t(int, clt->max_io_size,
 				  clt->max_segments * SZ_4K);
@@ -3075,7 +3073,7 @@ int rtrs_clt_create_path_from_sysfs(struct rtrs_clt *clt,
 	struct rtrs_clt_sess *sess;
 	int err;

-	sess = alloc_sess(clt, addr, nr_cpu_ids, clt->max_segments, 0);
+	sess = alloc_sess(clt, addr, nr_cpu_ids, 0);
 	if (IS_ERR(sess))
 		return PTR_ERR(sess);
diff --git a/drivers/infiniband/ulp/rtrs/rtrs.h b/drivers/infiniband/ulp/rtrs/rtrs.h
index dc3e1af1a85b..859c79685daf 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs.h
+++ b/drivers/infiniband/ulp/rtrs/rtrs.h
@@ -57,7 +57,6 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
 				 const struct rtrs_addr *paths,
 				 size_t path_cnt, u16 port,
 				 size_t pdu_sz, u8 reconnect_delay_sec,
-				 u16 max_segments,
 				 s16 max_reconnect_attempts, u32 nr_poll_queues);

 void rtrs_clt_close(struct rtrs_clt *sess);
@@ -110,6 +109,7 @@ int rtrs_clt_rdma_cq_direct(struct rtrs_clt *clt, unsigned int index);
 struct rtrs_attrs {
 	u32 queue_depth;
 	u32 max_io_size;
+	u32 max_segments;
 };
 int rtrs_clt_query(struct rtrs_clt *sess, struct rtrs_attrs *attr);