From patchwork Thu Oct 27 18:54:55 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13022530
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, jhack@hpe.com,
	linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 01/17] RDMA/rxe: Isolate code to fill request roce headers
Date: Thu, 27 Oct 2022 13:54:55 -0500
Message-Id: <20221027185510.33808-2-rpearsonhpe@gmail.com>
In-Reply-To: <20221027185510.33808-1-rpearsonhpe@gmail.com>
References: <20221027185510.33808-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Isolate the code that fills in the roce headers of a request packet into
a subroutine named rxe_init_roce_hdrs().
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_req.c | 106 +++++++++++++++-------------
 1 file changed, 57 insertions(+), 49 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index f63771207970..bcfbc78c0b53 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -377,79 +377,87 @@ static inline int get_mtu(struct rxe_qp *qp)
 	return rxe->port.mtu_cap;
 }
 
-static struct sk_buff *init_req_packet(struct rxe_qp *qp,
-				       struct rxe_av *av,
-				       struct rxe_send_wqe *wqe,
-				       int opcode, u32 payload,
-				       struct rxe_pkt_info *pkt)
+static void rxe_init_roce_hdrs(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+			       struct rxe_pkt_info *pkt, int pad)
 {
-	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
-	struct sk_buff *skb;
-	struct rxe_send_wr *ibwr = &wqe->wr;
-	int pad = (-payload) & 0x3;
-	int paylen;
-	int solicited;
-	u32 qp_num;
-	int ack_req;
-
-	/* length from start of bth to end of icrc */
-	paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
-	pkt->paylen = paylen;
-
-	/* init skb */
-	skb = rxe_init_packet(rxe, av, paylen, pkt);
-	if (unlikely(!skb))
-		return NULL;
+	struct rxe_send_wr *wr = &wqe->wr;
+	int is_send;
+	int is_write_imm;
+	int is_end;
+	int solicited;
+	u32 dst_qpn;
+	u32 qkey;
+	int ack_req;
 
 	/* init bth */
-	solicited = (ibwr->send_flags & IB_SEND_SOLICITED) &&
-			(pkt->mask & RXE_END_MASK) &&
-			((pkt->mask & (RXE_SEND_MASK)) ||
-			(pkt->mask & (RXE_WRITE_MASK | RXE_IMMDT_MASK)) ==
-			(RXE_WRITE_MASK | RXE_IMMDT_MASK));
-
-	qp_num = (pkt->mask & RXE_DETH_MASK) ? ibwr->wr.ud.remote_qpn :
-					 qp->attr.dest_qp_num;
-
-	ack_req = ((pkt->mask & RXE_END_MASK) ||
-		(qp->req.noack_pkts++ > RXE_MAX_PKT_PER_ACK));
+	is_send = pkt->mask & RXE_SEND_MASK;
+	is_write_imm = (pkt->mask & RXE_WRITE_MASK) &&
+		       (pkt->mask & RXE_IMMDT_MASK);
+	is_end = pkt->mask & RXE_END_MASK;
+	solicited = (wr->send_flags & IB_SEND_SOLICITED) && is_end &&
+		    (is_send || is_write_imm);
+	dst_qpn = (pkt->mask & RXE_DETH_MASK) ? wr->wr.ud.remote_qpn :
+		  qp->attr.dest_qp_num;
+	ack_req = is_end || (qp->req.noack_pkts++ > RXE_MAX_PKT_PER_ACK);
 	if (ack_req)
 		qp->req.noack_pkts = 0;
 
-	bth_init(pkt, pkt->opcode, solicited, 0, pad, IB_DEFAULT_PKEY_FULL, qp_num,
-		 ack_req, pkt->psn);
+	bth_init(pkt, pkt->opcode, solicited, 0, pad, IB_DEFAULT_PKEY_FULL,
+		 dst_qpn, ack_req, pkt->psn);
 
-	/* init optional headers */
+	/* init extended headers */
 	if (pkt->mask & RXE_RETH_MASK) {
-		reth_set_rkey(pkt, ibwr->wr.rdma.rkey);
+		reth_set_rkey(pkt, wr->wr.rdma.rkey);
 		reth_set_va(pkt, wqe->iova);
 		reth_set_len(pkt, wqe->dma.resid);
 	}
 
 	if (pkt->mask & RXE_IMMDT_MASK)
-		immdt_set_imm(pkt, ibwr->ex.imm_data);
+		immdt_set_imm(pkt, wr->ex.imm_data);
 
 	if (pkt->mask & RXE_IETH_MASK)
-		ieth_set_rkey(pkt, ibwr->ex.invalidate_rkey);
+		ieth_set_rkey(pkt, wr->ex.invalidate_rkey);
 
 	if (pkt->mask & RXE_ATMETH_MASK) {
 		atmeth_set_va(pkt, wqe->iova);
-		if (opcode == IB_OPCODE_RC_COMPARE_SWAP) {
-			atmeth_set_swap_add(pkt, ibwr->wr.atomic.swap);
-			atmeth_set_comp(pkt, ibwr->wr.atomic.compare_add);
+		if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP) {
+			atmeth_set_swap_add(pkt, wr->wr.atomic.swap);
+			atmeth_set_comp(pkt, wr->wr.atomic.compare_add);
 		} else {
-			atmeth_set_swap_add(pkt, ibwr->wr.atomic.compare_add);
+			atmeth_set_swap_add(pkt, wr->wr.atomic.compare_add);
 		}
-		atmeth_set_rkey(pkt, ibwr->wr.atomic.rkey);
+		atmeth_set_rkey(pkt, wr->wr.atomic.rkey);
 	}
 
 	if (pkt->mask & RXE_DETH_MASK) {
-		if (qp->ibqp.qp_num == 1)
-			deth_set_qkey(pkt, GSI_QKEY);
-		else
-			deth_set_qkey(pkt, ibwr->wr.ud.remote_qkey);
-		deth_set_sqp(pkt, qp->ibqp.qp_num);
+		qkey = (qp->ibqp.qp_num == 1) ? GSI_QKEY :
+			wr->wr.ud.remote_qkey;
+		deth_set_qkey(pkt, qkey);
+		deth_set_sqp(pkt, qp_num(qp));
 	}
+}
+
+static struct sk_buff *init_req_packet(struct rxe_qp *qp,
+				       struct rxe_av *av,
+				       struct rxe_send_wqe *wqe,
+				       int opcode, u32 payload,
+				       struct rxe_pkt_info *pkt)
+{
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	struct sk_buff *skb;
+	int pad = (-payload) & 0x3;
+	int paylen;
+
+	/* length from start of bth to end of icrc */
+	paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
+	pkt->paylen = paylen;
+
+	/* init skb */
+	skb = rxe_init_packet(rxe, av, paylen, pkt);
+	if (unlikely(!skb))
+		return NULL;
+
+	rxe_init_roce_hdrs(qp, wqe, pkt, pad);
 
 	return skb;
 }

From patchwork Thu Oct 27 18:54:56 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13022532
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, jhack@hpe.com,
	linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 02/17] RDMA/rxe: Isolate request payload code in a subroutine
Date: Thu, 27 Oct 2022 13:54:56 -0500
Message-Id: <20221027185510.33808-3-rpearsonhpe@gmail.com>
In-Reply-To: <20221027185510.33808-1-rpearsonhpe@gmail.com>
References: <20221027185510.33808-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Isolate the code that fills the payload of a request packet into a
subroutine named rxe_init_payload().

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_req.c | 34 +++++++++++++++++------------
 1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index bcfbc78c0b53..10a75f4e3608 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -437,6 +437,25 @@ static void rxe_init_roce_hdrs(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 	}
 }
 
+static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+			    struct rxe_pkt_info *pkt, u32 payload)
+{
+	void *data;
+	int err = 0;
+
+	if (wqe->wr.send_flags & IB_SEND_INLINE) {
+		data = &wqe->dma.inline_data[wqe->dma.sge_offset];
+		memcpy(payload_addr(pkt), data, payload);
+		wqe->dma.resid -= payload;
+		wqe->dma.sge_offset += payload;
+	} else {
+		err = copy_data(qp->pd, 0, &wqe->dma, payload_addr(pkt),
+				payload, RXE_FROM_MR_OBJ);
+	}
+
+	return err;
+}
+
 static struct sk_buff *init_req_packet(struct rxe_qp *qp,
 				       struct rxe_av *av,
 				       struct rxe_send_wqe *wqe,
@@ -473,20 +492,7 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_av *av,
 		return err;
 
 	if (pkt->mask & RXE_WRITE_OR_SEND_MASK) {
-		if (wqe->wr.send_flags & IB_SEND_INLINE) {
-			u8 *tmp = &wqe->dma.inline_data[wqe->dma.sge_offset];
-
-			memcpy(payload_addr(pkt), tmp, payload);
-
-			wqe->dma.resid -= payload;
-			wqe->dma.sge_offset += payload;
-		} else {
-			err = copy_data(qp->pd, 0, &wqe->dma,
-					payload_addr(pkt), payload,
-					RXE_FROM_MR_OBJ);
-			if (err)
-				return err;
-		}
+		err = rxe_init_payload(qp, wqe, pkt, payload);
 
 		if (bth_pad(pkt)) {
 			u8 *pad = payload_addr(pkt) + payload;

From patchwork Thu Oct 27 18:54:57 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13022533
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, jhack@hpe.com,
	linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 03/17] RDMA/rxe: Isolate code to build request packet
Date: Thu, 27 Oct 2022 13:54:57 -0500
Message-Id: <20221027185510.33808-4-rpearsonhpe@gmail.com>
In-Reply-To: <20221027185510.33808-1-rpearsonhpe@gmail.com>
References: <20221027185510.33808-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Isolate all the code to build a request packet into a single subroutine
called rxe_init_req_packet().

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h  |   2 +-
 drivers/infiniband/sw/rxe/rxe_net.c  |   6 +-
 drivers/infiniband/sw/rxe/rxe_req.c  | 121 ++++++++++++---------------
 drivers/infiniband/sw/rxe/rxe_resp.c |  11 +--
 4 files changed, 62 insertions(+), 78 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index c2a5c8814a48..574a6afc1199 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -92,7 +92,7 @@ void rxe_mw_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_net.c */
 struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
-				int paylen, struct rxe_pkt_info *pkt);
+				struct rxe_pkt_info *pkt);
 int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt,
 		struct sk_buff *skb);
 int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index 35f327b9d4b8..1e4456f5cda2 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -443,7 +443,7 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 }
 
 struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
-				int paylen, struct rxe_pkt_info *pkt)
+				struct rxe_pkt_info *pkt)
 {
 	unsigned int hdr_len;
 	struct sk_buff *skb = NULL;
@@ -468,7 +468,7 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
 		rcu_read_unlock();
 		goto out;
 	}
-	skb = alloc_skb(paylen + hdr_len + LL_RESERVED_SPACE(ndev),
+	skb = alloc_skb(pkt->paylen + hdr_len + LL_RESERVED_SPACE(ndev),
 			GFP_ATOMIC);
 
 	if (unlikely(!skb)) {
@@ -489,7 +489,7 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
 	pkt->rxe = rxe;
 	pkt->port_num = port_num;
-	pkt->hdr = skb_put(skb, paylen);
+	pkt->hdr = skb_put(skb, pkt->paylen);
 	pkt->mask |= RXE_GRH_MASK;
 
 out:
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 10a75f4e3608..8cc683ebf536 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -456,51 +456,76 @@ static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 	return err;
 }
 
-static struct sk_buff *init_req_packet(struct rxe_qp *qp,
-				       struct rxe_av *av,
-				       struct rxe_send_wqe *wqe,
-				       int opcode, u32 payload,
-				       struct rxe_pkt_info *pkt)
+static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp,
+					   struct rxe_send_wqe *wqe,
+					   int opcode, u32 payload,
+					   struct rxe_pkt_info *pkt)
 {
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 	struct sk_buff *skb;
-	int pad = (-payload) & 0x3;
-	int paylen;
+	struct rxe_av *av;
+	struct rxe_ah *ah;
+	void *padp;
+	int pad;
+	int err = -EINVAL;
+
+	pkt->rxe = rxe;
+	pkt->opcode = opcode;
+	pkt->qp = qp;
+	pkt->psn = qp->req.psn;
+	pkt->mask = rxe_opcode[opcode].mask;
+	pkt->wqe = wqe;
+	pkt->port_num = 1;
+
+	/* get address vector and address handle for UD qps only */
+	av = rxe_get_av(pkt, &ah);
+	if (unlikely(!av))
+		goto err_out;
 
 	/* length from start of bth to end of icrc */
-	paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
-	pkt->paylen = paylen;
+	pad = (-payload) & 0x3;
+	pkt->paylen = rxe_opcode[opcode].length + payload +
+		      pad + RXE_ICRC_SIZE;
 
 	/* init skb */
-	skb = rxe_init_packet(rxe, av, paylen, pkt);
+	skb = rxe_init_packet(rxe, av, pkt);
 	if (unlikely(!skb))
-		return NULL;
+		goto err_out;
 
 	rxe_init_roce_hdrs(qp, wqe, pkt, pad);
 
-	return skb;
-}
+	if (pkt->mask & RXE_WRITE_OR_SEND_MASK) {
+		err = rxe_init_payload(qp, wqe, pkt, payload);
+		if (err)
+			goto err_out;
+	}
 
-static int finish_packet(struct rxe_qp *qp, struct rxe_av *av,
-			 struct rxe_send_wqe *wqe, struct rxe_pkt_info *pkt,
-			 struct sk_buff *skb, u32 payload)
-{
-	int err;
+	if (pad) {
+		padp = payload_addr(pkt) + payload;
+		memset(padp, 0, pad);
+	}
 
+	/* IP and UDP network headers */
 	err = rxe_prepare(av, pkt, skb);
 	if (err)
-		return err;
+		goto err_out;
 
-	if (pkt->mask & RXE_WRITE_OR_SEND_MASK) {
-		err = rxe_init_payload(qp, wqe, pkt, payload);
-		if (bth_pad(pkt)) {
-			u8 *pad = payload_addr(pkt) + payload;
+	if (ah)
+		rxe_put(ah);
 
-			memset(pad, 0, bth_pad(pkt));
-		}
-	}
+	return skb;
 
-	return 0;
+err_out:
+	if (err == -EFAULT)
+		wqe->status = IB_WC_LOC_PROT_ERR;
+	else
+		wqe->status = IB_WC_LOC_QP_OP_ERR;
+	if (skb)
+		kfree_skb(skb);
+	if (ah)
+		rxe_put(ah);
+
+	return NULL;
 }
 
 static void update_wqe_state(struct rxe_qp *qp,
@@ -630,7 +655,6 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 int rxe_requester(void *arg)
 {
 	struct rxe_qp *qp = (struct rxe_qp *)arg;
-	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 	struct rxe_pkt_info pkt;
 	struct sk_buff *skb;
 	struct rxe_send_wqe *wqe;
@@ -643,8 +667,6 @@ int rxe_requester(void *arg)
 	struct rxe_send_wqe rollback_wqe;
 	u32 rollback_psn;
 	struct rxe_queue *q = qp->sq.queue;
-	struct rxe_ah *ah;
-	struct rxe_av *av;
 
 	if (!rxe_get(qp))
 		return -EAGAIN;
@@ -753,44 +775,9 @@ int rxe_requester(void *arg)
 		payload = mtu;
 	}
 
-	pkt.rxe = rxe;
-	pkt.opcode = opcode;
-	pkt.qp = qp;
-	pkt.psn = qp->req.psn;
-	pkt.mask = rxe_opcode[opcode].mask;
-	pkt.wqe = wqe;
-
-	av = rxe_get_av(&pkt, &ah);
-	if (unlikely(!av)) {
-		pr_err("qp#%d Failed no address vector\n", qp_num(qp));
-		wqe->status = IB_WC_LOC_QP_OP_ERR;
-		goto err;
-	}
-
-	skb = init_req_packet(qp, av, wqe, opcode, payload, &pkt);
-	if (unlikely(!skb)) {
-		pr_err("qp#%d Failed allocating skb\n", qp_num(qp));
-		wqe->status = IB_WC_LOC_QP_OP_ERR;
-		if (ah)
-			rxe_put(ah);
-		goto err;
-	}
-
-	err = finish_packet(qp, av, wqe, &pkt, skb, payload);
-	if (unlikely(err)) {
-		pr_debug("qp#%d Error during finish packet\n", qp_num(qp));
-		if (err == -EFAULT)
-			wqe->status = IB_WC_LOC_PROT_ERR;
-		else
-			wqe->status = IB_WC_LOC_QP_OP_ERR;
-		kfree_skb(skb);
-		if (ah)
-			rxe_put(ah);
-		goto err;
-	}
-
-	if (ah)
-		rxe_put(ah);
+	skb = rxe_init_req_packet(qp, wqe, opcode, payload, &pkt);
+	if (unlikely(!skb))
+		goto err;
 
 	/*
 	 * To prevent a race on wqe access between requester and completer,
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 95d372db934d..a00885799619 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -665,22 +665,19 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
 	int pad;
 	int err;
 
-	/*
-	 * allocate packet
-	 */
 	pad = (-payload) & 0x3;
 	paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
 
-	skb = rxe_init_packet(rxe, &qp->pri_av, paylen, ack);
-	if (!skb)
-		return NULL;
-
 	ack->qp = qp;
 	ack->opcode = opcode;
 	ack->mask = rxe_opcode[opcode].mask;
 	ack->paylen = paylen;
 	ack->psn = psn;
 
+	skb = rxe_init_packet(rxe, &qp->pri_av, ack);
+	if (!skb)
+		return NULL;
+
 	bth_init(ack, opcode, 0, 0, pad, IB_DEFAULT_PKEY_FULL,
 		 qp->attr.dest_qp_num, 0, psn);

From patchwork Thu Oct 27 18:54:58 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13022535
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, jhack@hpe.com,
	linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 04/17] RDMA/rxe: Add sg fragment ops
Date: Thu, 27 Oct 2022 13:54:58 -0500
Message-Id: <20221027185510.33808-5-rpearsonhpe@gmail.com>
In-Reply-To: <20221027185510.33808-1-rpearsonhpe@gmail.com>
References: <20221027185510.33808-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Rename rxe_mr_copy_dir to rxe_mr_copy_op and add new operations for
copying between an skb fragment list and an mr. This is in preparation
for supporting fragmented skbs.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_comp.c  |  4 ++--
 drivers/infiniband/sw/rxe/rxe_loc.h   |  4 ++--
 drivers/infiniband/sw/rxe/rxe_mr.c    | 14 +++++++-------
 drivers/infiniband/sw/rxe/rxe_req.c   |  2 +-
 drivers/infiniband/sw/rxe/rxe_resp.c  |  9 +++++----
 drivers/infiniband/sw/rxe/rxe_verbs.h | 15 ++++++++++++---
 6 files changed, 29 insertions(+), 19 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index c9170dd99f3a..77640e35ae88 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -356,7 +356,7 @@ static inline enum comp_state do_read(struct rxe_qp *qp,
 
 	ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE,
 			&wqe->dma, payload_addr(pkt),
-			payload_size(pkt), RXE_TO_MR_OBJ);
+			payload_size(pkt), RXE_COPY_TO_MR);
 	if (ret) {
 		wqe->status = IB_WC_LOC_PROT_ERR;
 		return COMPST_ERROR;
@@ -378,7 +378,7 @@ static inline enum comp_state do_atomic(struct rxe_qp *qp,
 
 	ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE,
 			&wqe->dma, &atomic_orig,
-			sizeof(u64), RXE_TO_MR_OBJ);
+			sizeof(u64), RXE_COPY_TO_MR);
 	if (ret) {
 		wqe->status = IB_WC_LOC_PROT_ERR;
 		return COMPST_ERROR;
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 574a6afc1199..ff803a957ac1 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -69,9 +69,9 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
 		     int access, struct rxe_mr *mr);
 int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr);
 int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
-		enum rxe_mr_copy_dir dir);
+		enum rxe_mr_copy_op op);
 int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
-	      void *addr, int length, enum rxe_mr_copy_dir dir);
+	      void *addr, int length, enum rxe_mr_copy_op op);
 void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length);
 struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
 			 enum rxe_mr_lookup_type type);
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 502e9ada99b3..1cb997caa292 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -290,7 +290,7 @@ void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
  * a mr object starting at iova.
  */
 int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
-		enum rxe_mr_copy_dir dir)
+		enum rxe_mr_copy_op op)
 {
 	int err;
 	int bytes;
@@ -307,9 +307,9 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 	if (mr->type == IB_MR_TYPE_DMA) {
 		u8 *src, *dest;
 
-		src = (dir == RXE_TO_MR_OBJ) ? addr : ((void *)(uintptr_t)iova);
+		src = (op == RXE_COPY_TO_MR) ? addr : ((void *)(uintptr_t)iova);
 
-		dest = (dir == RXE_TO_MR_OBJ) ? ((void *)(uintptr_t)iova) : addr;
+		dest = (op == RXE_COPY_TO_MR) ? ((void *)(uintptr_t)iova) : addr;
 
 		memcpy(dest, src, length);
 
@@ -333,8 +333,8 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 		u8 *src, *dest;
 
 		va = (u8 *)(uintptr_t)buf->addr + offset;
-		src = (dir == RXE_TO_MR_OBJ) ? addr : va;
-		dest = (dir == RXE_TO_MR_OBJ) ? va : addr;
+		src = (op == RXE_COPY_TO_MR) ? addr : va;
+		dest = (op == RXE_COPY_TO_MR) ? va : addr;
 
 		bytes = buf->size - offset;
 
@@ -372,7 +372,7 @@ int copy_data(
 	struct rxe_dma_info	*dma,
 	void			*addr,
 	int			length,
-	enum rxe_mr_copy_dir	dir)
+	enum rxe_mr_copy_op	op)
 {
 	int			bytes;
 	struct rxe_sge		*sge	= &dma->sge[dma->cur_sge];
@@ -433,7 +433,7 @@ int copy_data(
 		if (bytes > 0) {
 			iova = sge->addr + offset;
 
-			err = rxe_mr_copy(mr, iova, addr, bytes, dir);
+			err = rxe_mr_copy(mr, iova, addr, bytes, op);
 			if (err)
 				goto err2;
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 8cc683ebf536..9d92acfb2fcf 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -450,7 +450,7 @@ static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 		wqe->dma.sge_offset += payload;
 	} else {
 		err = copy_data(qp->pd, 0, &wqe->dma, payload_addr(pkt),
-				payload, RXE_FROM_MR_OBJ);
+				payload, RXE_COPY_FROM_MR);
 	}
 
 	return err;
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index a00885799619..4b185ddac887 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -524,7 +524,7 @@ static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr,
 	int err;
 
 	err = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma,
-			data_addr, data_len, RXE_TO_MR_OBJ);
+			data_addr, data_len, RXE_COPY_TO_MR);
 	if (unlikely(err))
 		return (err == -ENOSPC) ? RESPST_ERR_LENGTH :
 					  RESPST_ERR_MALFORMED_WQE;
@@ -540,7 +540,7 @@ static enum resp_states write_data_in(struct rxe_qp *qp,
 	int data_len = payload_size(pkt);
 
 	err = rxe_mr_copy(qp->resp.mr, qp->resp.va + qp->resp.offset,
-			  payload_addr(pkt), data_len, RXE_TO_MR_OBJ);
+			  payload_addr(pkt), data_len, RXE_COPY_TO_MR);
 	if (err) {
 		rc = RESPST_ERR_RKEY_VIOLATION;
 		goto out;
@@ -807,8 +807,9 @@ static enum resp_states read_reply(struct rxe_qp *qp,
 		return RESPST_ERR_RNR;
 
 	err = rxe_mr_copy(mr, res->read.va, payload_addr(&ack_pkt),
-			  payload, RXE_FROM_MR_OBJ);
-	rxe_put(mr);
+			  payload, RXE_COPY_FROM_MR);
+	if (mr)
+		rxe_put(mr);
 	if (err) {
 		kfree_skb(skb);
 		return RESPST_ERR_RKEY_VIOLATION;
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 5f5cbfcb3569..b3218715ef5d 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -267,9 +267,18 @@ enum rxe_mr_state {
 	RXE_MR_STATE_VALID,
 };
 
-enum rxe_mr_copy_dir {
-	RXE_TO_MR_OBJ,
-	RXE_FROM_MR_OBJ,
+/**
+ * enum rxe_mr_copy_op - Operations performed by rxe_copy_mr/dma_data()
+ * @RXE_COPY_TO_MR: Copy data from packet to MR(s)
+ * @RXE_COPY_FROM_MR: Copy data from MR(s) to packet
+ * @RXE_FRAG_TO_MR: Copy data from frag list to MR(s)
+ * @RXE_FRAG_FROM_MR: Copy data from MR(s) to frag list
+ */
+enum rxe_mr_copy_op {
+	RXE_COPY_TO_MR,
+	RXE_COPY_FROM_MR,
+	RXE_FRAG_TO_MR,
+	RXE_FRAG_FROM_MR,
 };
 
 enum rxe_mr_lookup_type {

From patchwork Thu Oct 27 18:54:59 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13022536
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, jhack@hpe.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 05/17] RDMA/rxe: Add rxe_add_frag() to rxe_mr.c
Date: Thu, 27 Oct 2022 13:54:59 -0500
Message-Id: <20221027185510.33808-6-rpearsonhpe@gmail.com>
In-Reply-To: <20221027185510.33808-1-rpearsonhpe@gmail.com>
References: <20221027185510.33808-1-rpearsonhpe@gmail.com>

Add the subroutine rxe_add_frag() to add a fragment to an skb. This is in preparation for supporting fragmented skbs.
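The bookkeeping rxe_add_frag() performs on the skb (bounds check against MAX_SKB_FRAGS, then growing nr_frags, data_len and len together) can be modeled in plain userspace C. The sketch below is illustrative only: model_skb, model_add_frag and MAX_MODEL_FRAGS are hypothetical names standing in for struct sk_buff, rxe_add_frag() and MAX_SKB_FRAGS, and the page reference counting (get_page() paired with the put_page() done by kfree_skb()) is omitted.

```c
#include <assert.h>

/* Hypothetical userspace model of rxe_add_frag()'s accounting;
 * not the kernel API. */
#define MAX_MODEL_FRAGS 17	/* stands in for MAX_SKB_FRAGS */

struct model_frag {
	int len;
	int offset;
};

struct model_skb {
	int nr_frags;
	int len;	/* total bytes: linear area plus frags */
	int data_len;	/* bytes held in frags only */
	struct model_frag frags[MAX_MODEL_FRAGS];
};

static int model_add_frag(struct model_skb *skb, int length, int offset)
{
	/* mirrors the -EINVAL path when the frag table is full */
	if (skb->nr_frags >= MAX_MODEL_FRAGS)
		return -1;

	skb->frags[skb->nr_frags].len = length;
	skb->frags[skb->nr_frags].offset = offset;
	skb->nr_frags++;

	/* len and data_len grow together, as in the patch */
	skb->data_len += length;
	skb->len += length;
	return 0;
}
```

The point of the model is the invariant the patch maintains: every added fragment grows both skb->len and skb->data_len by the fragment length, so the linear portion of the packet stays at skb->len - skb->data_len.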
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h |  2 ++
 drivers/infiniband/sw/rxe/rxe_mr.c  | 34 +++++++++++++++++++++++++++++
 2 files changed, 36 insertions(+)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index ff803a957ac1..81a611778d44 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -68,6 +68,8 @@ void rxe_mr_init_dma(int access, struct rxe_mr *mr);
 int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
 		     int access, struct rxe_mr *mr);
 int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr);
+int rxe_add_frag(struct sk_buff *skb, struct rxe_phys_buf *buf,
+		 int length, int offset);
 int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 		enum rxe_mr_copy_op op);
 int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 1cb997caa292..cf39412cac54 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -286,6 +286,40 @@ void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
 	return addr;
 }
 
+/**
+ * rxe_add_frag() - Add a frag to a nonlinear packet
+ * @skb: The packet buffer
+ * @buf: Kernel buffer info
+ * @length: Length of fragment
+ * @offset: Offset of fragment in buf
+ *
+ * Returns: 0 on success else a negative errno
+ */
+int rxe_add_frag(struct sk_buff *skb, struct rxe_phys_buf *buf,
+		 int length, int offset)
+{
+	int nr_frags = skb_shinfo(skb)->nr_frags;
+	skb_frag_t *frag = &skb_shinfo(skb)->frags[nr_frags];
+
+	if (nr_frags >= MAX_SKB_FRAGS) {
+		pr_debug("%s: nr_frags (%d) >= MAX_SKB_FRAGS\n",
+			 __func__, nr_frags);
+		return -EINVAL;
+	}
+
+	frag->bv_len = length;
+	frag->bv_offset = offset;
+	frag->bv_page = virt_to_page(buf->addr);
+	/* because kfree_skb will call put_page() */
+	get_page(frag->bv_page);
+	skb_shinfo(skb)->nr_frags++;
+
+	skb->data_len += length;
+	skb->len += length;
+
+	return 0;
+}
+
 /* copy data from a range (vaddr, vaddr+length-1) to or from
  * a mr object starting at iova.
  */

From patchwork Thu Oct 27 18:55:00 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13022534
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, jhack@hpe.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 06/17] RDMA/rxe: Add routine to compute the number of frags
Date: Thu, 27 Oct 2022 13:55:00 -0500
Message-Id: <20221027185510.33808-7-rpearsonhpe@gmail.com>
In-Reply-To: <20221027185510.33808-1-rpearsonhpe@gmail.com>
References: <20221027185510.33808-1-rpearsonhpe@gmail.com>

Add a subroutine named rxe_num_mr_frags() to compute the number of skb frags needed to hold length bytes in an skb when sending data from an mr starting at iova.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h |  1 +
 drivers/infiniband/sw/rxe/rxe_mr.c  | 68 +++++++++++++++++++++++++++++
 2 files changed, 69 insertions(+)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 81a611778d44..87fb052c1d0a 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -70,6 +70,7 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
 int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr);
 int rxe_add_frag(struct sk_buff *skb, struct rxe_phys_buf *buf,
 		 int length, int offset);
+int rxe_num_mr_frags(struct rxe_mr *mr, u64 iova, int length);
 int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 		enum rxe_mr_copy_op op);
 int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index cf39412cac54..dd4dbe117c91 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -320,6 +320,74 @@ int rxe_add_frag(struct sk_buff *skb, struct rxe_phys_buf *buf,
 	return 0;
 }
 
+/**
+ * rxe_num_mr_frags() - Compute the number of skb frags needed to copy
+ *			length bytes from an mr to an skb frag list.
+ * @mr: mr to copy data from
+ * @iova: iova in memory region as starting point
+ * @length: number of bytes to transfer
+ *
+ * Returns: the number of frags needed or a negative error
+ */
+int rxe_num_mr_frags(struct rxe_mr *mr, u64 iova, int length)
+{
+	struct rxe_phys_buf *buf;
+	struct rxe_map **map;
+	size_t buf_offset;
+	int bytes;
+	int m;
+	int i;
+	int num_frags = 0;
+	int err;
+
+	if (length == 0)
+		return 0;
+
+	if (mr->type == IB_MR_TYPE_DMA) {
+		while (length > 0) {
+			buf_offset = iova & ~PAGE_MASK;
+			bytes = PAGE_SIZE - buf_offset;
+			if (bytes > length)
+				bytes = length;
+			iova += bytes;
+			length -= bytes;
+			num_frags++;
+		}
+
+		return num_frags;
+	}
+
+	WARN_ON_ONCE(!mr->map);
+
+	err = mr_check_range(mr, iova, length);
+	if (err)
+		return err;
+
+	lookup_iova(mr, iova, &m, &i, &buf_offset);
+
+	map = mr->map + m;
+	buf = map[0]->buf + i;
+
+	while (length > 0) {
+		bytes = buf->size - buf_offset;
+		if (bytes > length)
+			bytes = length;
+		length -= bytes;
+		buf_offset = 0;
+		buf++;
+		i++;
+		num_frags++;
+
+		/* we won't overrun since we checked range above */
+		if (i == RXE_BUF_PER_MAP) {
+			i = 0;
+			map++;
+			buf = map[0]->buf;
+		}
+	}
+
+	return num_frags;
+}
+
 /* copy data from a range (vaddr, vaddr+length-1) to or from
 * a mr object starting at iova.
 */

From patchwork Thu Oct 27 18:55:01 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13022537
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, jhack@hpe.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 07/17] RDMA/rxe: Extend rxe_mr_copy to support skb frags
Date: Thu, 27 Oct 2022 13:55:01 -0500
Message-Id: <20221027185510.33808-8-rpearsonhpe@gmail.com>
In-Reply-To: <20221027185510.33808-1-rpearsonhpe@gmail.com>
References: <20221027185510.33808-1-rpearsonhpe@gmail.com>

rxe_mr_copy() currently supports copying between an mr and a contiguous region of kernel memory. Rename rxe_mr_copy() to rxe_copy_mr_data(). Extend the operations to support copying between an mr and an skb fragment list. Fix up calls to rxe_mr_copy() to support the new API.
This is in preparation for supporting fragmented skbs.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h  |   3 +
 drivers/infiniband/sw/rxe/rxe_mr.c   | 142 +++++++++++++++++++--------
 drivers/infiniband/sw/rxe/rxe_resp.c |  20 ++--
 3 files changed, 116 insertions(+), 49 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 87fb052c1d0a..c62fc2613a01 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -71,6 +71,9 @@ int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr);
 int rxe_add_frag(struct sk_buff *skb, struct rxe_phys_buf *buf,
 		 int length, int offset);
 int rxe_num_mr_frags(struct rxe_mr *mr, u64 iova, int length);
+int rxe_copy_mr_data(struct sk_buff *skb, struct rxe_mr *mr, u64 iova,
+		     void *addr, int skb_offset, int length,
+		     enum rxe_mr_copy_op op);
 int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 		enum rxe_mr_copy_op op);
 int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index dd4dbe117c91..fd39b3e17f41 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -388,70 +388,130 @@ int rxe_num_mr_frags(struct rxe_mr *mr, u64 iova, int length)
 	return num_frags;
 }
 
-/* copy data from a range (vaddr, vaddr+length-1) to or from
- * a mr object starting at iova.
+/**
+ * rxe_copy_mr_data() - transfer data between an MR and a packet
+ * @skb: the packet buffer
+ * @mr: the MR
+ * @iova: the address in the MR
+ * @addr: the address in the packet (TO/FROM MR only)
+ * @length: the length to transfer
+ * @op: copy operation (TO MR, FROM MR or FRAG MR)
+ *
+ * Copy data from a range (addr, addr+length-1) in a packet
+ * to or from a range in an MR object at (iova, iova+length-1).
+ * Or, build a frag list referencing the MR range.
+ *
+ * Caller must verify that the access permissions support the
+ * operation.
+ *
+ * Returns: 0 on success or an error
  */
-int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
-		enum rxe_mr_copy_op op)
+int rxe_copy_mr_data(struct sk_buff *skb, struct rxe_mr *mr, u64 iova,
+		     void *addr, int skb_offset, int length,
+		     enum rxe_mr_copy_op op)
 {
-	int err;
-	int bytes;
-	u8 *va;
-	struct rxe_map **map;
-	struct rxe_phys_buf *buf;
-	int m;
-	int i;
-	size_t offset;
+	struct rxe_phys_buf dmabuf;
+	struct rxe_phys_buf *buf;
+	struct rxe_map **map;
+	size_t buf_offset;
+	int bytes;
+	void *va;
+	int m;
+	int i;
+	int err = 0;
 
 	if (length == 0)
 		return 0;
 
-	if (mr->type == IB_MR_TYPE_DMA) {
-		u8 *src, *dest;
-
-		src = (op == RXE_COPY_TO_MR) ? addr : ((void *)(uintptr_t)iova);
-
-		dest = (op == RXE_COPY_TO_MR) ? ((void *)(uintptr_t)iova) : addr;
+	switch (mr->type) {
+	case IB_MR_TYPE_DMA:
+		va = (void *)(uintptr_t)iova;
+		switch (op) {
+		case RXE_COPY_TO_MR:
+			memcpy(va, addr, length);
+			break;
+		case RXE_COPY_FROM_MR:
+			memcpy(addr, va, length);
+			break;
+		case RXE_FRAG_TO_MR:
+			err = skb_copy_bits(skb, skb_offset, va, length);
+			if (err)
+				return err;
+			break;
+		case RXE_FRAG_FROM_MR:
+			/* limit frag length to PAGE_SIZE */
+			while (length) {
+				dmabuf.addr = iova & PAGE_MASK;
+				buf_offset = iova & ~PAGE_MASK;
+				bytes = PAGE_SIZE - buf_offset;
+				if (bytes > length)
+					bytes = length;
+				err = rxe_add_frag(skb, &dmabuf, bytes,
+						   buf_offset);
+				if (err)
+					return err;
+				iova += bytes;
+				length -= bytes;
+			}
+			break;
+		}
+		return 0;
 
-		memcpy(dest, src, length);
+	case IB_MR_TYPE_MEM_REG:
+	case IB_MR_TYPE_USER:
+		break;
 
-		return 0;
+	default:
+		pr_warn("%s: mr type (%d) not supported\n",
+			__func__, mr->type);
+		return -EINVAL;
 	}
 
 	WARN_ON_ONCE(!mr->map);
 
 	err = mr_check_range(mr, iova, length);
-	if (err) {
-		err = -EFAULT;
-		goto err1;
-	}
+	if (err)
+		return -EFAULT;
 
-	lookup_iova(mr, iova, &m, &i, &offset);
+	lookup_iova(mr, iova, &m, &i, &buf_offset);
 
 	map = mr->map + m;
-	buf	= map[0]->buf + i;
+	buf = map[0]->buf + i;
 
 	while (length > 0) {
-		u8 *src, *dest;
-
-		va	= (u8 *)(uintptr_t)buf->addr + offset;
-		src = (op == RXE_COPY_TO_MR) ? addr : va;
-		dest = (op == RXE_COPY_TO_MR) ? va : addr;
-
-		bytes	= buf->size - offset;
-
+		va = (void *)(uintptr_t)buf->addr + buf_offset;
+		bytes = buf->size - buf_offset;
 		if (bytes > length)
 			bytes = length;
 
-		memcpy(dest, src, bytes);
+		switch (op) {
+		case RXE_COPY_TO_MR:
+			memcpy(va, addr, bytes);
+			break;
+		case RXE_COPY_FROM_MR:
+			memcpy(addr, va, bytes);
+			break;
+		case RXE_FRAG_TO_MR:
+			err = skb_copy_bits(skb, skb_offset, va, bytes);
+			if (err)
+				return err;
+			break;
+		case RXE_FRAG_FROM_MR:
+			err = rxe_add_frag(skb, buf, bytes, buf_offset);
+			if (err)
+				return err;
+			break;
+		}
 
-		length	-= bytes;
-		addr	+= bytes;
+		length -= bytes;
+		addr += bytes;
 
-		offset	= 0;
+		buf_offset = 0;
+		skb_offset += bytes;
 		buf++;
 		i++;
 
+		/* we won't overrun since we checked range above */
 		if (i == RXE_BUF_PER_MAP) {
 			i = 0;
 			map++;
@@ -460,9 +520,6 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 	}
 
 	return 0;
-
-err1:
-	return err;
 }
 
 /* copy data in or out of a wqe, i.e. sg list
@@ -535,7 +592,8 @@ int copy_data(
 		if (bytes > 0) {
 			iova = sge->addr + offset;
 
-			err = rxe_mr_copy(mr, iova, addr, bytes, op);
+			err = rxe_copy_mr_data(NULL, mr, iova, addr,
+					       0, bytes, op);
 			if (err)
 				goto err2;
 
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 4b185ddac887..ba359242118a 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -535,12 +535,15 @@ static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr,
 static enum resp_states write_data_in(struct rxe_qp *qp,
 				      struct rxe_pkt_info *pkt)
 {
+	struct sk_buff *skb = PKT_TO_SKB(pkt);
 	enum resp_states rc = RESPST_NONE;
-	int err;
 	int data_len = payload_size(pkt);
+	int err;
+	int skb_offset = 0;
 
-	err = rxe_mr_copy(qp->resp.mr, qp->resp.va + qp->resp.offset,
-			  payload_addr(pkt), data_len, RXE_COPY_TO_MR);
+	err = rxe_copy_mr_data(skb, qp->resp.mr, qp->resp.va + qp->resp.offset,
+			       payload_addr(pkt), skb_offset, data_len,
+			       RXE_COPY_TO_MR);
 	if (err) {
 		rc = RESPST_ERR_RKEY_VIOLATION;
 		goto out;
@@ -766,6 +769,7 @@ static enum resp_states read_reply(struct rxe_qp *qp,
 	int err;
 	struct resp_res *res = qp->resp.res;
 	struct rxe_mr *mr;
+	int skb_offset = 0;
 
 	if (!res) {
 		res = rxe_prepare_res(qp, req_pkt, RXE_READ_MASK);
@@ -806,15 +810,17 @@ static enum resp_states read_reply(struct rxe_qp *qp,
 	if (!skb)
 		return RESPST_ERR_RNR;
 
-	err = rxe_mr_copy(mr, res->read.va, payload_addr(&ack_pkt),
-			  payload, RXE_COPY_FROM_MR);
-	if (mr)
-		rxe_put(mr);
+	err = rxe_copy_mr_data(skb, mr, res->read.va, payload_addr(&ack_pkt),
+			       skb_offset, payload, RXE_COPY_FROM_MR);
 	if (err) {
 		kfree_skb(skb);
+		rxe_put(mr);
 		return RESPST_ERR_RKEY_VIOLATION;
 	}
 
+	if (mr)
+		rxe_put(mr);
+
 	if (bth_pad(&ack_pkt)) {
 		u8 *pad = payload_addr(&ack_pkt) + payload;

From patchwork Thu Oct 27 18:55:02 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13022538
PaDTOnWhhSiALpjRa2bF01OQ7X4NKjRzcS9Aeb+zYWme9U4mtaePMepsVbOvCRwla8dV Iwt+7+G0xX8Z5Thml2rCmojGpxjunFwIa0hW4KrhxBtI6qaymtD8P7o0EmZgRvOEoVU9 lyywJpzb4zoJ491vCRbPJpV2wbwM0wCTxpbHOIyvkrc81GnTGSJ/AM+Bi2yA+mXkGsRT 3UWNAQPbwH3OBcggGwKyreoiuNtRA6EIoFYR/QDbAynXQeI5B7aHf3j5Kjhcqwy6y3TI u3Ag== X-Gm-Message-State: ACrzQf3DIMyGnJ5GOJ1/0V8PBRu9sJbNSAakrhoemHSOyzqN4+X6vI9K FhDUmuaPz6haR2Vk9gJUUE4= X-Google-Smtp-Source: AMsMyM5FblE9ZV06iAys+dh0HlmpnK58UhJGT7q0Op88g+7TvrFTsYeZXSUlQBffilQSUE3dsJbtVg== X-Received: by 2002:a05:6808:f91:b0:359:a22e:b047 with SMTP id o17-20020a0568080f9100b00359a22eb047mr5794526oiw.215.1666896979835; Thu, 27 Oct 2022 11:56:19 -0700 (PDT) Received: from ubuntu-22.tx.rr.com (2603-8081-140c-1a00-f015-3653-e617-fa3f.res6.spectrum.com. [2603:8081:140c:1a00:f015:3653:e617:fa3f]) by smtp.googlemail.com with ESMTPSA id f1-20020a4a8f41000000b0049602fb9b4csm732736ool.46.2022.10.27.11.56.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Oct 2022 11:56:19 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, jhack@hpe.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next 08/17] RDMA/rxe: Add routine to compute number of frags for dma Date: Thu, 27 Oct 2022 13:55:02 -0500 Message-Id: <20221027185510.33808-9-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20221027185510.33808-1-rpearsonhpe@gmail.com> References: <20221027185510.33808-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Add routine named rxe_num_dma_frags() to compute the number of skb frags needed to copy length bytes from a dma info struct. 
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h |  4 +-
 drivers/infiniband/sw/rxe/rxe_mr.c  | 67 ++++++++++++++++++++++++++++-
 2 files changed, 69 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index c62fc2613a01..4c30ffaccc92 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -76,10 +76,12 @@ int rxe_copy_mr_data(struct sk_buff *skb, struct rxe_mr *mr, u64 iova,
 		     enum rxe_mr_copy_op op);
 int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 		enum rxe_mr_copy_op op);
+int rxe_num_dma_frags(const struct rxe_pd *pd, const struct rxe_dma_info *dma,
+		      int length);
 int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
 	      void *addr, int length, enum rxe_mr_copy_op op);
 void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length);
-struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
+struct rxe_mr *lookup_mr(const struct rxe_pd *pd, int access, u32 key,
 			 enum rxe_mr_lookup_type type);
 int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length);
 int advance_dma_data(struct rxe_dma_info *dma, unsigned int length);
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index fd39b3e17f41..77437a0dd7ec 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -522,6 +522,71 @@ int rxe_copy_mr_data(struct sk_buff *skb, struct rxe_mr *mr, u64 iova,
 	return 0;
 }
 
+/**
+ * rxe_num_dma_frags() - Count the number of skb frags needed to copy
+ *			 length bytes from a dma info struct to an skb
+ * @pd: protection domain used by dma entries
+ * @dma: dma info
+ * @length: number of bytes to copy
+ *
+ * Returns: number of frags needed or negative error
+ */
+int rxe_num_dma_frags(const struct rxe_pd *pd, const struct rxe_dma_info *dma,
+		      int length)
+{
+	int cur_sge = dma->cur_sge;
+	const struct rxe_sge *sge = &dma->sge[cur_sge];
+	int buf_offset = dma->sge_offset;
+	int resid = dma->resid;
+	struct rxe_mr *mr = NULL;
+	int bytes;
+	u64 iova;
+	int ret;
+	int num_frags = 0;
+
+	if (length == 0)
+		return 0;
+
+	if (length > resid)
+		return -EINVAL;
+
+	while (length > 0) {
+		if (buf_offset >= sge->length) {
+			if (mr)
+				rxe_put(mr);
+
+			sge++;
+			cur_sge++;
+			buf_offset = 0;
+
+			if (cur_sge >= dma->num_sge)
+				return -ENOSPC;
+			if (!sge->length)
+				continue;
+		}
+
+		mr = lookup_mr(pd, 0, sge->lkey, RXE_LOOKUP_LOCAL);
+		if (!mr)
+			return -EINVAL;
+
+		bytes = min_t(int, length, sge->length - buf_offset);
+		if (bytes > 0) {
+			iova = sge->addr + buf_offset;
+			ret = rxe_num_mr_frags(mr, iova, bytes);
+			if (ret < 0) {
+				rxe_put(mr);
+				return ret;
+			}
+			num_frags += ret;
+
+			buf_offset += bytes;
+			resid -= bytes;
+			length -= bytes;
+		}
+	}
+
+	if (mr)
+		rxe_put(mr);
+
+	return num_frags;
+}
+
 /* copy data in or out of a wqe, i.e. sg list
  * under the control of a dma descriptor
  */
@@ -658,7 +723,7 @@ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length)
  * (3) verify that the mr can support the requested access
  * (4) verify that mr state is valid
  */
-struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
+struct rxe_mr *lookup_mr(const struct rxe_pd *pd, int access, u32 key,
 			 enum rxe_mr_lookup_type type)
 {
 	struct rxe_mr *mr;

From patchwork Thu Oct 27 18:55:03 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13022539
vger.kernel.org with ESMTP id S236015AbiJ0S4X (ORCPT ); Thu, 27 Oct 2022 14:56:23 -0400 Received: from mail-oi1-x22d.google.com (mail-oi1-x22d.google.com [IPv6:2607:f8b0:4864:20::22d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C69D51D33C for ; Thu, 27 Oct 2022 11:56:21 -0700 (PDT) Received: by mail-oi1-x22d.google.com with SMTP id s125so2901731oib.6 for ; Thu, 27 Oct 2022 11:56:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=7MChkxtjkhxIWWxt3gK9PetgTDkpZGN347IVsIZZOBg=; b=GXKnBUY6xk9NAnNPlsvhrigLwsJTCvbclz9/fgNzubVWetU5pzs6TfgWyLdjd3lDXs i6MosDCRzekY1aYhr5WaQI6VeqVNAUYBm+U5Jza1SUyCWwagLGCxJogjSi7piyt5LWgp FcJ0gIdfF5bKvJ27O5WSxdToNss5QetWvuqEpk/y/VcWfeuczVb00h350RAExRiuOsob GO31Z/ahlW8ZIsWN5rybaJjAEaoItImy6LdA2Evn4hY4rKuFXiVom0BKEVm2CNt7YXE7 RQIXkEmvQYdUlJsLLuTqNjPLvIMQ1oUbQaIqvJAQ6j/UG+NtHF25Y9RGdpj/O/FmN0Zg Mi3A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=7MChkxtjkhxIWWxt3gK9PetgTDkpZGN347IVsIZZOBg=; b=zyNMfjWiAZSgZZNQWlBJTxZp6MfvKUimL7Ugn4yaWEsg9ozY3cBLyDMki1103o0kRc 2bxPDkaML5c0/la/oNyfg6XaYdIClOytByRcVAgFe9a0tJhm23E1VI6MhDEKgCegHKF4 LTVCRPcKhOpFX2BLP/lHaiomIPt16hSeqE3QqVDzcMFRy+WESSayF03c3S0zWiHa6atZ bqC82BqoN96vVNeCgvyc0DfARlb9kqYXs+vWe2+A5mrqF09BFMY1RSnYWpVbIXUtr6L+ 8sL9VFH7prjDNjWf9BQ7B9S/E13QqidzeJRpG3sPbKBwYr1oizYD+YZlI3VloLrP5BNT u8gw== X-Gm-Message-State: ACrzQf0K0FGYeCiyfGCKqlSOfhPaxXTxymZltICvCQMj0NvPJcaeDGhM c8+se9chBZxXYrfPTXyZE4A= X-Google-Smtp-Source: AMsMyM7jfzflMCuwiF+kScImJ9hlViLTjJkNEeJ6sFkttPTpR/+ntPxmgI05pnwa95Ae9cXIHcQ1Vw== X-Received: by 2002:a05:6808:f0e:b0:359:b055:32ea with SMTP id 
m14-20020a0568080f0e00b00359b05532eamr4909842oiw.112.1666896980976; Thu, 27 Oct 2022 11:56:20 -0700 (PDT) Received: from ubuntu-22.tx.rr.com (2603-8081-140c-1a00-f015-3653-e617-fa3f.res6.spectrum.com. [2603:8081:140c:1a00:f015:3653:e617:fa3f]) by smtp.googlemail.com with ESMTPSA id f1-20020a4a8f41000000b0049602fb9b4csm732736ool.46.2022.10.27.11.56.19 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 27 Oct 2022 11:56:20 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, jhack@hpe.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next 09/17] RDMA/rxe: Extend copy_data to support skb frags Date: Thu, 27 Oct 2022 13:55:03 -0500 Message-Id: <20221027185510.33808-10-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20221027185510.33808-1-rpearsonhpe@gmail.com> References: <20221027185510.33808-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org copy_data() currently supports copying between an mr and the scatter-gather list of a wqe. Rename copy_data() to rxe_copy_dma_data(). Extend the operations to support copying between a sg list and an skb fragment list. Fixup calls to copy_data() to support the new API. This is in preparation for supporting fragmented skbs. 
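For reference, the dma-descriptor walk that copy_data()/rxe_copy_dma_data() performs can be modeled in plain userspace C. This is only a sketch, not the kernel code: the toy_* names are invented for illustration, MRs are reduced to flat byte buffers, and the -1/-2 return codes stand in for -EINVAL and -ENOSPC.

```c
#include <assert.h>
#include <string.h>

/* Toy model of the rxe dma descriptor: the field names mirror
 * struct rxe_dma_info but MRs are plain byte buffers. */
struct toy_sge {
	char *addr;
	int length;
};

struct toy_dma {
	struct toy_sge *sge;
	int num_sge;
	int cur_sge;
	int sge_offset;
	int resid;
};

/* Copy `length` bytes out of the sg list into `out`, advancing the
 * descriptor state the same way rxe_copy_dma_data() does. */
static int toy_copy_dma(struct toy_dma *dma, char *out, int length)
{
	struct toy_sge *sge = &dma->sge[dma->cur_sge];
	int buf_offset = dma->sge_offset;
	int resid = dma->resid;
	int bytes;

	if (length == 0)
		return 0;
	if (length > resid)
		return -1;		/* stands in for -EINVAL */

	while (length > 0) {
		if (buf_offset >= sge->length) {
			sge++;
			dma->cur_sge++;
			buf_offset = 0;
			if (dma->cur_sge >= dma->num_sge)
				return -2;	/* stands in for -ENOSPC */
			if (!sge->length)
				continue;	/* skip zero length sge */
		}
		bytes = length < sge->length - buf_offset ?
			length : sge->length - buf_offset;
		memcpy(out, sge->addr + buf_offset, bytes);
		out += bytes;
		buf_offset += bytes;
		resid -= bytes;
		length -= bytes;
	}

	dma->sge_offset = buf_offset;
	dma->resid = resid;
	return 0;
}
```

The invariant is that (cur_sge, sge_offset, resid) always describe the next unconsumed byte of the sg list, so successive calls, one per packet, continue where the previous packet left off.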
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_comp.c | 17 ++-- drivers/infiniband/sw/rxe/rxe_loc.h | 5 +- drivers/infiniband/sw/rxe/rxe_mr.c | 122 ++++++++++++--------------- drivers/infiniband/sw/rxe/rxe_req.c | 11 ++- drivers/infiniband/sw/rxe/rxe_resp.c | 7 +- 5 files changed, 79 insertions(+), 83 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c index 77640e35ae88..3c1ecc88446d 100644 --- a/drivers/infiniband/sw/rxe/rxe_comp.c +++ b/drivers/infiniband/sw/rxe/rxe_comp.c @@ -352,11 +352,14 @@ static inline enum comp_state do_read(struct rxe_qp *qp, struct rxe_pkt_info *pkt, struct rxe_send_wqe *wqe) { + struct sk_buff *skb = PKT_TO_SKB(pkt); + int skb_offset = 0; int ret; - ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, - &wqe->dma, payload_addr(pkt), - payload_size(pkt), RXE_COPY_TO_MR); + ret = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE, + &wqe->dma, payload_addr(pkt), + skb_offset, payload_size(pkt), + RXE_COPY_TO_MR); if (ret) { wqe->status = IB_WC_LOC_PROT_ERR; return COMPST_ERROR; @@ -372,13 +375,15 @@ static inline enum comp_state do_atomic(struct rxe_qp *qp, struct rxe_pkt_info *pkt, struct rxe_send_wqe *wqe) { + struct sk_buff *skb = NULL; + int skb_offset = 0; int ret; u64 atomic_orig = atmack_orig(pkt); - ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, - &wqe->dma, &atomic_orig, - sizeof(u64), RXE_COPY_TO_MR); + ret = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE, + &wqe->dma, &atomic_orig, + skb_offset, sizeof(u64), RXE_COPY_TO_MR); if (ret) { wqe->status = IB_WC_LOC_PROT_ERR; return COMPST_ERROR; diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 4c30ffaccc92..dbead759123d 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -78,8 +78,9 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, enum rxe_mr_copy_op op); int rxe_num_dma_frags(const struct rxe_pd *pd, const 
struct rxe_dma_info *dma, int length); -int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma, - void *addr, int length, enum rxe_mr_copy_op op); +int rxe_copy_dma_data(struct sk_buff *skb, struct rxe_pd *pd, int access, + struct rxe_dma_info *dma, void *addr, + int skb_offset, int length, enum rxe_mr_copy_op op); void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length); struct rxe_mr *lookup_mr(const struct rxe_pd *pd, int access, u32 key, enum rxe_mr_lookup_type type); diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 77437a0dd7ec..85d46ea24166 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -587,100 +587,84 @@ int rxe_num_dma_frags(const struct rxe_pd *pd, const struct rxe_dma_info *dma, return num_frags; } -/* copy data in or out of a wqe, i.e. sg list - * under the control of a dma descriptor +/** + * rxe_copy_dma_data() - transfer data between a packet and a wqe + * @skb: packet buffer (FRAG MR only) + * @pd: PD which MRs must match + * @access: access permission for MRs in sge (TO MR only) + * @dma: dma info from a wqe + * @addr: payload address in packet (TO/FROM MR only) + * @skb_offset: offset of data in skb (RXE_FRAG_TO_MR only) + * @length: payload length + * @op: copy operation (RXE_COPY_TO/FROM_MR or RXE_FRAG_TO/FROM_MR) + * + * Iterate over scatter/gather list in dma info starting from the + * current location until the payload length is used up and for each + * entry copy or build a frag list referencing the MR obtained from + * the lkey in the sge. This routine is called once for each packet + * sent or received to/from the wqe. 
+ * + * Returns: 0 on success or an error */ -int copy_data( - struct rxe_pd *pd, - int access, - struct rxe_dma_info *dma, - void *addr, - int length, - enum rxe_mr_copy_op op) +int rxe_copy_dma_data(struct sk_buff *skb, struct rxe_pd *pd, int access, + struct rxe_dma_info *dma, void *addr, + int skb_offset, int length, enum rxe_mr_copy_op op) { - int bytes; - struct rxe_sge *sge = &dma->sge[dma->cur_sge]; - int offset = dma->sge_offset; - int resid = dma->resid; - struct rxe_mr *mr = NULL; - u64 iova; - int err; + struct rxe_sge *sge = &dma->sge[dma->cur_sge]; + int buf_offset = dma->sge_offset; + int resid = dma->resid; + struct rxe_mr *mr = NULL; + int bytes; + u64 iova; + int err = 0; if (length == 0) return 0; - if (length > resid) { - err = -EINVAL; - goto err2; - } - - if (sge->length && (offset < sge->length)) { - mr = lookup_mr(pd, access, sge->lkey, RXE_LOOKUP_LOCAL); - if (!mr) { - err = -EINVAL; - goto err1; - } - } + if (length > resid) + return -EINVAL; while (length > 0) { - bytes = length; - - if (offset >= sge->length) { - if (mr) { + if (buf_offset >= sge->length) { + if (mr) rxe_put(mr); - mr = NULL; - } + sge++; dma->cur_sge++; - offset = 0; - - if (dma->cur_sge >= dma->num_sge) { - err = -ENOSPC; - goto err2; - } + buf_offset = 0; - if (sge->length) { - mr = lookup_mr(pd, access, sge->lkey, - RXE_LOOKUP_LOCAL); - if (!mr) { - err = -EINVAL; - goto err1; - } - } else { + if (dma->cur_sge >= dma->num_sge) + return -ENOSPC; + if (!sge->length) continue; - } } - if (bytes > sge->length - offset) - bytes = sge->length - offset; + mr = lookup_mr(pd, access, sge->lkey, RXE_LOOKUP_LOCAL); + if (!mr) + return -EINVAL; + bytes = min_t(int, length, sge->length - buf_offset); if (bytes > 0) { - iova = sge->addr + offset; - - err = rxe_copy_mr_data(NULL, mr, iova, addr, - 0, bytes, op); + iova = sge->addr + buf_offset; + err = rxe_copy_mr_data(skb, mr, iova, addr, + skb_offset, bytes, op); if (err) - goto err2; + goto err_put; - offset += bytes; - resid -= 
bytes; - length -= bytes; - addr += bytes; + addr += bytes; + buf_offset += bytes; + skb_offset += bytes; + resid -= bytes; + length -= bytes; } } - dma->sge_offset = offset; - dma->resid = resid; + dma->sge_offset = buf_offset; + dma->resid = resid; +err_put: if (mr) rxe_put(mr); - - return 0; - -err2: - if (mr) - rxe_put(mr); -err1: return err; } diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 9d92acfb2fcf..c4ab1a152491 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -438,8 +438,10 @@ static void rxe_init_roce_hdrs(struct rxe_qp *qp, struct rxe_send_wqe *wqe, } static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe, - struct rxe_pkt_info *pkt, u32 payload) + struct rxe_pkt_info *pkt, u32 payload, + struct sk_buff *skb) { + int skb_offset = 0; void *data; int err = 0; @@ -449,8 +451,9 @@ static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe, wqe->dma.resid -= payload; wqe->dma.sge_offset += payload; } else { - err = copy_data(qp->pd, 0, &wqe->dma, payload_addr(pkt), - payload, RXE_COPY_FROM_MR); + err = rxe_copy_dma_data(skb, qp->pd, 0, &wqe->dma, + payload_addr(pkt), skb_offset, + payload, RXE_COPY_FROM_MR); } return err; @@ -495,7 +498,7 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp, rxe_init_roce_hdrs(qp, wqe, pkt, pad); if (pkt->mask & RXE_WRITE_OR_SEND_MASK) { - err = rxe_init_payload(qp, wqe, pkt, payload); + err = rxe_init_payload(qp, wqe, pkt, payload, skb); if (err) goto err_out; } diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index ba359242118a..7afff56aa398 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -521,10 +521,13 @@ static enum resp_states check_rkey(struct rxe_qp *qp, static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr, int data_len) { + struct sk_buff *skb = NULL; + int skb_offset = 0; int err; 
- err = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma, - data_addr, data_len, RXE_COPY_TO_MR); + err = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE, + &qp->resp.wqe->dma, data_addr, + skb_offset, data_len, RXE_COPY_TO_MR); if (unlikely(err)) return (err == -ENOSPC) ? RESPST_ERR_LENGTH : RESPST_ERR_MALFORMED_WQE;

From patchwork Thu Oct 27 18:55:04 2022 X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 13022540 From: Bob Pearson To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, jhack@hpe.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next 10/17] RDMA/rxe: Replace rxe by qp as a parameter Date: Thu, 27 Oct 2022 13:55:04 -0500 Message-Id: <20221027185510.33808-11-rpearsonhpe@gmail.com> In-Reply-To: <20221027185510.33808-1-rpearsonhpe@gmail.com>

Replace rxe as a parameter by qp in rxe_init_packet(). This will allow some simplification. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_loc.h | 2 +- drivers/infiniband/sw/rxe/rxe_net.c | 3 ++- drivers/infiniband/sw/rxe/rxe_req.c | 2 +- drivers/infiniband/sw/rxe/rxe_resp.c | 3 +-- 4 files changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index dbead759123d..4e5fbc33277d 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -100,7 +100,7 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey); void rxe_mw_cleanup(struct rxe_pool_elem *elem); /* rxe_net.c */ -struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av, +struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av, struct rxe_pkt_info *pkt); int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt, struct sk_buff *skb); diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c index 1e4456f5cda2..faabc444d546 100644 --- a/drivers/infiniband/sw/rxe/rxe_net.c +++ b/drivers/infiniband/sw/rxe/rxe_net.c @@ -442,9 +442,10 @@
int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, return err; } -struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av, +struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av, struct rxe_pkt_info *pkt) { + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); unsigned int hdr_len; struct sk_buff *skb = NULL; struct net_device *ndev; diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index c4ab1a152491..2bae7a05805b 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -491,7 +491,7 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp, pad + RXE_ICRC_SIZE; /* init skb */ - skb = rxe_init_packet(rxe, av, pkt); + skb = rxe_init_packet(qp, av, pkt); if (unlikely(!skb)) goto err_out; diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index 7afff56aa398..71f6d446b1dc 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -665,7 +665,6 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, u32 psn, u8 syndrome) { - struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct sk_buff *skb; int paylen; int pad; @@ -680,7 +679,7 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, ack->paylen = paylen; ack->psn = psn; - skb = rxe_init_packet(rxe, &qp->pri_av, ack); + skb = rxe_init_packet(qp, &qp->pri_av, ack); if (!skb) return NULL;

From patchwork Thu Oct 27 18:55:05 2022 X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 13022541 From: Bob Pearson To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, jhack@hpe.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next 11/17] RDMA/rxe: Extend rxe_init_packet() to support frags Date: Thu, 27 Oct 2022 13:55:05 -0500 Message-Id: <20221027185510.33808-12-rpearsonhpe@gmail.com> In-Reply-To: <20221027185510.33808-1-rpearsonhpe@gmail.com>

Add a subroutine rxe_can_use_sg() to determine if a packet is a candidate for a fragmented skb. Add a global variable rxe_use_sg to control whether to support nonlinear skbs. Modify rxe_init_packet() to test if the packet should use a fragmented skb. Fixup calls to rxe_init_packet() to use the new API but disable creating nonlinear skbs for now. This is in preparation for using fragmented skbs.
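The candidate test described above can be modeled as a pure predicate. This is a sketch under stated assumptions: the toy_* names and TOY_MAX_SKB_FRAGS are invented for illustration, and every input the kernel derives from qp, pkt and ndev is passed in explicitly.

```c
#include <assert.h>
#include <stdbool.h>

#define TOY_MAX_SKB_FRAGS 17	/* stand-in for the kernel's MAX_SKB_FRAGS */

/* Model of the checks in rxe_can_use_sg(): fragmented skbs are only
 * used when globally enabled, on RC QPs, on SG-capable netdevs, and,
 * for sends, when the sg list does not need too many fragments. */
static bool toy_can_use_sg(bool use_sg_enabled, bool is_rc,
			   bool ndev_has_sg, bool is_send, int num_frags)
{
	if (!use_sg_enabled)
		return false;
	if (!is_rc)
		return false;
	if (!ndev_has_sg)
		return false;
	/* for sends, hold one frag slot back for the trailing ICRC */
	if (is_send)
		return num_frags >= 0 && num_frags <= TOY_MAX_SKB_FRAGS - 1;
	return true;
}
```

For sends one fragment slot is reserved because the ICRC is appended as a final fragment, mirroring the MAX_SKB_FRAGS - 1 bound in rxe_can_use_sg().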
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe.c | 3 ++ drivers/infiniband/sw/rxe/rxe.h | 3 ++ drivers/infiniband/sw/rxe/rxe_loc.h | 2 +- drivers/infiniband/sw/rxe/rxe_mr.c | 12 +++-- drivers/infiniband/sw/rxe/rxe_net.c | 79 +++++++++++++++++++++++++--- drivers/infiniband/sw/rxe/rxe_req.c | 2 +- drivers/infiniband/sw/rxe/rxe_resp.c | 5 +- 7 files changed, 91 insertions(+), 15 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index 51daac5c4feb..388d8103ec20 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -13,6 +13,9 @@ MODULE_AUTHOR("Bob Pearson, Frank Zago, John Groves, Kamal Heib"); MODULE_DESCRIPTION("Soft RDMA transport"); MODULE_LICENSE("Dual BSD/GPL"); +/* if true allow using fragmented skbs */ +bool rxe_use_sg; + /* free resources for a rxe device all objects created for this device must * have been destroyed */ diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h index 30fbdf3bc76a..c78fb497d9c3 100644 --- a/drivers/infiniband/sw/rxe/rxe.h +++ b/drivers/infiniband/sw/rxe/rxe.h @@ -30,6 +30,9 @@ #include "rxe_verbs.h" #include "rxe_loc.h" +/* if true allow using fragmented skbs */ +extern bool rxe_use_sg; + /* * Version 1 and Version 2 are identical on 64 bit machines, but on 32 bit * machines Version 2 has a different struct layout. 
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 4e5fbc33277d..12fd5811cd79 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -101,7 +101,7 @@ void rxe_mw_cleanup(struct rxe_pool_elem *elem); /* rxe_net.c */ struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av, - struct rxe_pkt_info *pkt); + struct rxe_pkt_info *pkt, bool *is_frag); int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt, struct sk_buff *skb); int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 85d46ea24166..dc4b509239f0 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -541,7 +541,7 @@ int rxe_num_dma_frags(const struct rxe_pd *pd, const struct rxe_dma_info *dma, struct rxe_mr *mr = NULL; int bytes; u64 iova; - int ret; + int nf; int num_frags = 0; if (length == 0) @@ -572,18 +572,22 @@ int rxe_num_dma_frags(const struct rxe_pd *pd, const struct rxe_dma_info *dma, bytes = min_t(int, length, sge->length - buf_offset); if (bytes > 0) { iova = sge->addr + buf_offset; - ret = rxe_num_mr_frags(mr, iova, length); - if (ret < 0) { + nf = rxe_num_mr_frags(mr, iova, length); + if (nf < 0) { rxe_put(mr); - return ret; + return nf; } + num_frags += nf; buf_offset += bytes; resid -= bytes; length -= bytes; } } + if (mr) + rxe_put(mr); + return num_frags; } diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c index faabc444d546..c6d8f5c80562 100644 --- a/drivers/infiniband/sw/rxe/rxe_net.c +++ b/drivers/infiniband/sw/rxe/rxe_net.c @@ -442,8 +442,60 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, return err; } +/** + * rxe_can_use_sg() - determine if packet is a candidate for fragmenting + * @qp: the queue pair + * @pkt: packet info + * + * Limit to packets with: + * rxe_use_sg set + * qp is RC + * ndev supports SG + * #sges less than #frags for sends + * + * Returns: true if conditions are met else false + */ +static bool rxe_can_use_sg(struct rxe_qp *qp, struct rxe_pkt_info *pkt) +{ + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); + int length = pkt->paylen - rxe_opcode[pkt->opcode].length + - RXE_ICRC_SIZE; + int nf; + + if (!rxe_use_sg) + return false; + if (qp_type(pkt->qp) != IB_QPT_RC) + return false; + if (!(rxe->ndev->features & NETIF_F_SG)) + return false; + + /* check we don't have a pathological sge list with lots of + * short segments. Recall we need one extra frag for icrc. + */ + if (pkt->mask & RXE_SEND_MASK) { + nf = rxe_num_dma_frags(qp->pd, &pkt->wqe->dma, length); + return (nf >= 0 && nf <= MAX_SKB_FRAGS - 1) ? true : false; + } + + return true; +} + +#define RXE_MIN_SKB_SIZE (256) + +/** + * rxe_init_packet() - allocate and initialize a new skb + * @qp: the queue pair + * @av: remote address vector + * @pkt: packet info + * @frag: optional return value for fragmented skb + * on call if frag == NULL do not use fragmented skb + * on return if not NULL set *frag to true + * if the packet will be fragmented else false + * + * Returns: an skb on success else NULL + */ struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av, - struct rxe_pkt_info *pkt) + struct rxe_pkt_info *pkt, bool *frag) { struct rxe_dev *rxe = to_rdev(qp->ibqp.device); unsigned int hdr_len; @@ -451,6 +503,7 @@ struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av, struct net_device *ndev; const struct ib_gid_attr *attr; const int port_num = 1; + int skb_size; attr = rdma_get_gid_attr(&rxe->ib_dev, port_num, av->grh.sgid_index); if (IS_ERR(attr)) @@ -469,9 +522,19 @@ struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av, rcu_read_unlock(); goto out; } - skb = alloc_skb(pkt->paylen + hdr_len + LL_RESERVED_SPACE(ndev), - GFP_ATOMIC); + skb_size = LL_RESERVED_SPACE(ndev) + hdr_len + pkt->paylen; + if (frag) { + if (rxe_use_sg && (skb_size >
RXE_MIN_SKB_SIZE) && + rxe_can_use_sg(qp, pkt)) { + skb_size = RXE_MIN_SKB_SIZE; + *frag = true; + } else { + *frag = false; + } + } + + skb = alloc_skb(skb_size, GFP_ATOMIC); if (unlikely(!skb)) { rcu_read_unlock(); goto out; @@ -480,7 +543,7 @@ struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av, skb_reserve(skb, hdr_len + LL_RESERVED_SPACE(ndev)); /* FIXME: hold reference to this netdev until life of this skb. */ - skb->dev = ndev; + skb->dev = ndev; rcu_read_unlock(); if (av->network_type == RXE_NETWORK_TYPE_IPV4) @@ -488,10 +551,10 @@ struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av, else skb->protocol = htons(ETH_P_IPV6); - pkt->rxe = rxe; - pkt->port_num = port_num; - pkt->hdr = skb_put(skb, pkt->paylen); - pkt->mask |= RXE_GRH_MASK; + if (frag && *frag) + pkt->hdr = skb_put(skb, rxe_opcode[pkt->opcode].length); + else + pkt->hdr = skb_put(skb, pkt->paylen); out: rdma_put_gid_attr(attr); diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 2bae7a05805b..b4bbccc3c008 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -491,7 +491,7 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp, pad + RXE_ICRC_SIZE; /* init skb */ - skb = rxe_init_packet(qp, av, pkt); + skb = rxe_init_packet(qp, av, pkt, NULL); if (unlikely(!skb)) goto err_out; diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index 71f6d446b1dc..8868415b71b6 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -665,6 +665,7 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, u32 psn, u8 syndrome) { + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct sk_buff *skb; int paylen; int pad; @@ -673,13 +674,15 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, pad = (-payload) & 0x3; paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE; + ack->rxe = rxe; 
ack->qp = qp; ack->opcode = opcode; ack->mask = rxe_opcode[opcode].mask; ack->paylen = paylen; ack->psn = psn; + ack->port_num = 1; - skb = rxe_init_packet(qp, &qp->pri_av, ack); + skb = rxe_init_packet(qp, &qp->pri_av, ack, NULL); if (!skb) return NULL;

From patchwork Thu Oct 27 18:55:06 2022 X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 13022542 From: Bob Pearson To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, jhack@hpe.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next 12/17] RDMA/rxe: Extend rxe_icrc.c to support frags Date: Thu, 27 Oct 2022 13:55:06 -0500 Message-Id: <20221027185510.33808-13-rpearsonhpe@gmail.com> In-Reply-To: <20221027185510.33808-1-rpearsonhpe@gmail.com>

Extend the subroutines rxe_icrc_generate() and rxe_icrc_check() to support skb frags. This is in preparation for supporting fragmented skbs. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_icrc.c | 65 ++++++++++++++++++++++++---- drivers/infiniband/sw/rxe/rxe_net.c | 55 ++++++++++++++++++----- drivers/infiniband/sw/rxe/rxe_recv.c | 1 + 3 files changed, 100 insertions(+), 21 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_icrc.c b/drivers/infiniband/sw/rxe/rxe_icrc.c index 46bb07c5c4df..5f1d24e37c36 100644 --- a/drivers/infiniband/sw/rxe/rxe_icrc.c +++ b/drivers/infiniband/sw/rxe/rxe_icrc.c @@ -63,7 +63,7 @@ static __be32 rxe_crc32(struct rxe_dev *rxe, __be32 crc, void *next, size_t len) /** * rxe_icrc_hdr() - Compute the partial ICRC for the network and transport - * headers of a packet. + * headers of a packet. * @skb: packet buffer * @pkt: packet information * @@ -129,6 +129,56 @@ static __be32 rxe_icrc_hdr(struct sk_buff *skb, struct rxe_pkt_info *pkt) return crc; } +/** + * rxe_icrc_payload() - Compute the ICRC for a packet payload and also + * compute the address of the icrc in the packet.
+ * @skb: packet buffer + * @pkt: packet information + * @icrc: current icrc i.e. including headers + * @icrcp: returned pointer to icrc in skb + * + * Return: the cumulative ICRC including the payload + */ +__be32 rxe_icrc_payload(struct sk_buff *skb, struct rxe_pkt_info *pkt, + __be32 icrc, __be32 **icrcp) +{ + struct skb_shared_info *shinfo = skb_shinfo(skb); + skb_frag_t *frag; + u8 *addr; + int hdr_len; + int len; + int i; + + /* handle any payload left in the linear buffer */ + hdr_len = rxe_opcode[pkt->opcode].length; + addr = pkt->hdr + hdr_len; + len = skb_tail_pointer(skb) - skb_transport_header(skb) + - sizeof(struct udphdr) - hdr_len; + if (!shinfo->nr_frags) { + len -= RXE_ICRC_SIZE; + *icrcp = (__be32 *)(addr + len); + } + if (len > 0) + icrc = rxe_crc32(pkt->rxe, icrc, payload_addr(pkt), len); + WARN_ON(len < 0); + + /* handle any payload in frags */ + for (i = 0; i < shinfo->nr_frags; i++) { + frag = &shinfo->frags[i]; + addr = page_to_virt(frag->bv_page) + frag->bv_offset; + len = frag->bv_len; + if (i == shinfo->nr_frags - 1) { + len -= RXE_ICRC_SIZE; + *icrcp = (__be32 *)(addr + len); + } + if (len > 0) + icrc = rxe_crc32(pkt->rxe, icrc, addr, len); + WARN_ON(len < 0); + } + + return icrc; +} + /** * rxe_icrc_check() - Compute ICRC for a packet and compare to the ICRC * delivered in the packet.
@@ -143,13 +193,11 @@ int rxe_icrc_check(struct sk_buff *skb, struct rxe_pkt_info *pkt) __be32 pkt_icrc; __be32 icrc; - icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE); - pkt_icrc = *icrcp; - icrc = rxe_icrc_hdr(skb, pkt); - icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt), - payload_size(pkt) + bth_pad(pkt)); + icrc = rxe_icrc_payload(skb, pkt, icrc, &icrcp); + icrc = ~icrc; + pkt_icrc = *icrcp; if (unlikely(icrc != pkt_icrc)) return -EINVAL; @@ -167,9 +215,8 @@ void rxe_icrc_generate(struct sk_buff *skb, struct rxe_pkt_info *pkt) __be32 *icrcp; __be32 icrc; - icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE); icrc = rxe_icrc_hdr(skb, pkt); - icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt), - payload_size(pkt) + bth_pad(pkt)); + icrc = rxe_icrc_payload(skb, pkt, icrc, &icrcp); + *icrcp = ~icrc; } diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c index c6d8f5c80562..395e9d7d81c3 100644 --- a/drivers/infiniband/sw/rxe/rxe_net.c +++ b/drivers/infiniband/sw/rxe/rxe_net.c @@ -134,32 +134,51 @@ static int rxe_udp_encap_recv(struct sock *sk, struct sk_buff *skb) struct rxe_dev *rxe; struct net_device *ndev = skb->dev; struct rxe_pkt_info *pkt = SKB_TO_PKT(skb); + u8 opcode; + u8 buf[1]; + u8 *p; - /* takes a reference on rxe->ib_dev - * drop when skb is freed - */ + /* Takes a reference on rxe->ib_dev. 
Drop when skb is freed */ rxe = rxe_get_dev_from_net(ndev); if (!rxe && is_vlan_dev(ndev)) rxe = rxe_get_dev_from_net(vlan_dev_real_dev(ndev)); if (!rxe) - goto drop; + goto err_drop; - if (skb_linearize(skb)) { - ib_device_put(&rxe->ib_dev); - goto drop; + /* Get bth opcode out of skb */ + p = skb_header_pointer(skb, sizeof(struct udphdr), 1, buf); + if (!p) + goto err_device_put; + opcode = *p; + + /* If using fragmented skbs make sure roce headers + * are in linear buffer else make skb linear + */ + if (rxe_use_sg && skb_is_nonlinear(skb)) { + int delta = rxe_opcode[opcode].length - + (skb_headlen(skb) - sizeof(struct udphdr)); + + if (delta > 0 && !__pskb_pull_tail(skb, delta)) + goto err_device_put; + } else { + if (skb_linearize(skb)) + goto err_device_put; } udph = udp_hdr(skb); pkt->rxe = rxe; pkt->port_num = 1; pkt->hdr = (u8 *)(udph + 1); - pkt->mask = RXE_GRH_MASK; + pkt->mask = rxe_opcode[opcode].mask | RXE_GRH_MASK; pkt->paylen = be16_to_cpu(udph->len) - sizeof(*udph); rxe_rcv(skb); return 0; -drop: + +err_device_put: + ib_device_put(&rxe->ib_dev); +err_drop: kfree_skb(skb); return 0; @@ -385,21 +404,32 @@ static int rxe_send(struct sk_buff *skb, struct rxe_pkt_info *pkt) */ static int rxe_loopback(struct sk_buff *skb, struct rxe_pkt_info *pkt) { - memcpy(SKB_TO_PKT(skb), pkt, sizeof(*pkt)); + struct rxe_pkt_info *newpkt; + int err; + /* make loopback line up with rxe_udp_encap_recv */ if (skb->protocol == htons(ETH_P_IP)) skb_pull(skb, sizeof(struct iphdr)); else skb_pull(skb, sizeof(struct ipv6hdr)); + skb_reset_transport_header(skb); + + newpkt = SKB_TO_PKT(skb); + memcpy(newpkt, pkt, sizeof(*newpkt)); + newpkt->hdr = skb_transport_header(skb) + sizeof(struct udphdr); if (WARN_ON(!ib_device_try_get(&pkt->rxe->ib_dev))) { kfree_skb(skb); - return -EIO; + err = -EINVAL; + goto drop; } rxe_rcv(skb); - return 0; + +drop: + kfree_skb(skb); + return err; } int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, @@ -415,6 +445,7 @@ int 
rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, goto drop; } + /* skb->data points at IP header */ rxe_icrc_generate(skb, pkt); if (pkt->mask & RXE_LOOPBACK_MASK) diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index 434a693cd4a5..ba786e5c6266 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -329,6 +329,7 @@ void rxe_rcv(struct sk_buff *skb) if (unlikely(err)) goto drop; + /* skb->data points at UDP header */ err = rxe_icrc_check(skb, pkt); if (unlikely(err)) goto drop;

From patchwork Thu Oct 27 18:55:07 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13022543
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, jhack@hpe.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 13/17] RDMA/rxe: Extend rxe_init_req_packet() for frags
Date: Thu, 27 Oct 2022 13:55:07 -0500
Message-Id: <20221027185510.33808-14-rpearsonhpe@gmail.com>
In-Reply-To: <20221027185510.33808-1-rpearsonhpe@gmail.com>
References: <20221027185510.33808-1-rpearsonhpe@gmail.com>

Add code to rxe_init_req_packet() to allocate space for the pad and icrc if the skb is fragmented. This is in preparation for supporting fragmented skbs.

Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_loc.h | 9 +++- drivers/infiniband/sw/rxe/rxe_req.c | 74 ++++++++++++++++++++++++----- 2 files changed, 71 insertions(+), 12 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 12fd5811cd79..cab6acad7a83 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -179,8 +179,15 @@ void rxe_srq_cleanup(struct rxe_pool_elem *elem); void rxe_dealloc(struct ib_device *ib_dev); -int rxe_completer(void *arg); +/* rxe_req.c */ +int rxe_prepare_pad_icrc(struct rxe_pkt_info *pkt, struct sk_buff *skb, + int payload, bool frag); int rxe_requester(void *arg); + +/* rxe_comp.c */ +int rxe_completer(void *arg); + +/* rxe_resp.c */ int rxe_responder(void *arg); /* rxe_icrc.c */ diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index b4bbccc3c008..ea9ab63a2dc1 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -438,27
+438,79 @@ static void rxe_init_roce_hdrs(struct rxe_qp *qp, struct rxe_send_wqe *wqe, } static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe, - struct rxe_pkt_info *pkt, u32 payload, - struct sk_buff *skb) + struct rxe_pkt_info *pkt, int pad, u32 payload, + struct sk_buff *skb, bool frag) { + int len = skb_tailroom(skb); + int tot_len = payload + pad + RXE_ICRC_SIZE; + int access = 0; int skb_offset = 0; + int op; + void *addr; void *data; int err = 0; if (wqe->wr.send_flags & IB_SEND_INLINE) { + if (WARN_ON(frag)) + return -EINVAL; + if (len < tot_len) + return -EINVAL; data = &wqe->dma.inline_data[wqe->dma.sge_offset]; memcpy(payload_addr(pkt), data, payload); wqe->dma.resid -= payload; wqe->dma.sge_offset += payload; } else { - err = rxe_copy_dma_data(skb, qp->pd, 0, &wqe->dma, - payload_addr(pkt), skb_offset, - payload, RXE_COPY_FROM_MR); + op = frag ? RXE_FRAG_FROM_MR : RXE_COPY_FROM_MR; + addr = frag ? NULL : payload_addr(pkt); + err = rxe_copy_dma_data(skb, qp->pd, access, &wqe->dma, + addr, skb_offset, payload, op); } return err; } +/** + * rxe_prepare_pad_icrc() - Alloc space if fragmented and init pad and icrc + * @pkt: packet info + * @skb: packet buffer + * @payload: roce payload + * @frag: true if skb is fragmented + * + * Returns: 0 on success else an error + */ +int rxe_prepare_pad_icrc(struct rxe_pkt_info *pkt, struct sk_buff *skb, + int payload, bool frag) +{ + struct rxe_phys_buf dmabuf; + size_t offset; + u64 iova; + u8 *addr; + int err = 0; + int pad = (-payload) & 0x3; + + if (frag) { + /* allocate bytes at the end of the skb linear buffer + * and build a frag pointing at it + */ + WARN_ON((skb->end - skb->tail) < 8); + addr = skb_end_pointer(skb) - RXE_ICRC_SIZE - pad; + iova = (uintptr_t)addr; + dmabuf.addr = iova & PAGE_MASK; + offset = iova & ~PAGE_MASK; + err = rxe_add_frag(skb, &dmabuf, pad + RXE_ICRC_SIZE, offset); + if (err) + goto err; + } else { + addr = payload_addr(pkt) + payload; + } + + /* init pad and icrc to 
zero */ + memset(addr, 0, pad + RXE_ICRC_SIZE); + +err: + return err; +} + static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe, int opcode, u32 payload, @@ -468,9 +520,9 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp, struct sk_buff *skb; struct rxe_av *av; struct rxe_ah *ah; - void *padp; int pad; int err = -EINVAL; + bool frag = false; pkt->rxe = rxe; pkt->opcode = opcode; @@ -498,15 +550,15 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp, rxe_init_roce_hdrs(qp, wqe, pkt, pad); if (pkt->mask & RXE_WRITE_OR_SEND_MASK) { - err = rxe_init_payload(qp, wqe, pkt, payload, skb); + err = rxe_init_payload(qp, wqe, pkt, pad, payload, skb, frag); if (err) goto err_out; } - if (pad) { - padp = payload_addr(pkt) + payload; - memset(padp, 0, pad); - } + /* handle pad and icrc */ + err = rxe_prepare_pad_icrc(pkt, skb, payload, frag); + if (err) + goto err_out; /* IP and UDP network headers */ err = rxe_prepare(av, pkt, skb);

From patchwork Thu Oct 27 18:55:08 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13022544
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, jhack@hpe.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 14/17] RDMA/rxe: Extend response packets for frags
Date: Thu, 27 Oct 2022 13:55:08 -0500
Message-Id: <20221027185510.33808-15-rpearsonhpe@gmail.com>
In-Reply-To: <20221027185510.33808-1-rpearsonhpe@gmail.com>
References: <20221027185510.33808-1-rpearsonhpe@gmail.com>

Extend prepare_ack_packet(), read_reply() and send_common_ack() in rxe_resp.c to support fragmented skbs. Adjust calls to these routines for the changed API. This is in preparation for using fragmented skbs.
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_resp.c | 89 +++++++++++++++++----------- 1 file changed, 54 insertions(+), 35 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index 8868415b71b6..79dcd0f37140 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -660,10 +660,8 @@ static enum resp_states atomic_reply(struct rxe_qp *qp, static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, struct rxe_pkt_info *ack, - int opcode, - int payload, - u32 psn, - u8 syndrome) + int opcode, int payload, u32 psn, + u8 syndrome, bool *fragp) { struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct sk_buff *skb; @@ -682,7 +680,7 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, ack->psn = psn; ack->port_num = 1; - skb = rxe_init_packet(qp, &qp->pri_av, ack, NULL); + skb = rxe_init_packet(qp, &qp->pri_av, ack, fragp); if (!skb) return NULL; @@ -698,12 +696,14 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, atmack_set_orig(ack, qp->resp.res->atomic.orig_val); err = rxe_prepare(&qp->pri_av, ack, skb); - if (err) { - kfree_skb(skb); - return NULL; - } + if (err) + goto err_free_skb; return skb; + +err_free_skb: + kfree_skb(skb); + return NULL; } /** @@ -775,6 +775,8 @@ static enum resp_states read_reply(struct rxe_qp *qp, struct resp_res *res = qp->resp.res; struct rxe_mr *mr; int skb_offset = 0; + bool frag; + enum rxe_mr_copy_op op; if (!res) { res = rxe_prepare_res(qp, req_pkt, RXE_READ_MASK); @@ -787,8 +789,10 @@ static enum resp_states read_reply(struct rxe_qp *qp, qp->resp.mr = NULL; } else { mr = rxe_recheck_mr(qp, res->read.rkey); - if (!mr) - return RESPST_ERR_RKEY_VIOLATION; + if (!mr) { + state = RESPST_ERR_RKEY_VIOLATION; + goto err_out; + } } if (res->read.resid <= mtu) @@ -797,8 +801,10 @@ static enum resp_states read_reply(struct rxe_qp *qp, opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST; } else { mr = rxe_recheck_mr(qp, 
res->read.rkey); - if (!mr) - return RESPST_ERR_RKEY_VIOLATION; + if (!mr) { + state = RESPST_ERR_RKEY_VIOLATION; + goto err_out; + } if (res->read.resid > mtu) opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE; @@ -806,35 +812,33 @@ static enum resp_states read_reply(struct rxe_qp *qp, opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_LAST; } - res->state = rdatm_res_state_next; - payload = min_t(int, res->read.resid, mtu); skb = prepare_ack_packet(qp, &ack_pkt, opcode, payload, - res->cur_psn, AETH_ACK_UNLIMITED); - if (!skb) - return RESPST_ERR_RNR; + res->cur_psn, AETH_ACK_UNLIMITED, &frag); + if (!skb) { + state = RESPST_ERR_RNR; + goto err_put_mr; + } + op = frag ? RXE_FRAG_FROM_MR : RXE_COPY_FROM_MR; err = rxe_copy_mr_data(skb, mr, res->read.va, payload_addr(&ack_pkt), - skb_offset, payload, RXE_COPY_FROM_MR); + skb_offset, payload, op); if (err) { - kfree_skb(skb); - rxe_put(mr); - return RESPST_ERR_RKEY_VIOLATION; + state = RESPST_ERR_RKEY_VIOLATION; + goto err_free_skb; } - if (mr) - rxe_put(mr); - - if (bth_pad(&ack_pkt)) { - u8 *pad = payload_addr(&ack_pkt) + payload; - - memset(pad, 0, bth_pad(&ack_pkt)); - } + err = rxe_prepare_pad_icrc(&ack_pkt, skb, payload, frag); + if (err) + goto err_free_skb; err = rxe_xmit_packet(qp, &ack_pkt, skb); - if (err) - return RESPST_ERR_RNR; + if (err) { + /* rxe_xmit_packet will consume the packet */ + state = RESPST_ERR_RNR; + goto err_put_mr; + } res->read.va += payload; res->read.resid -= payload; @@ -851,6 +855,16 @@ static enum resp_states read_reply(struct rxe_qp *qp, state = RESPST_CLEANUP; } + /* keep these after all error exits */ + res->state = rdatm_res_state_next; + rxe_put(mr); + return state; + +err_free_skb: + kfree_skb(skb); +err_put_mr: + rxe_put(mr); +err_out: return state; } @@ -1041,14 +1055,19 @@ static int send_common_ack(struct rxe_qp *qp, u8 syndrome, u32 psn, int opcode, const char *msg) { int err; - struct rxe_pkt_info ack_pkt; + struct rxe_pkt_info ack; struct sk_buff *skb; + int payload = 0; - skb = 
prepare_ack_packet(qp, &ack, opcode, payload, + psn, syndrome, NULL); if (!skb) return -ENOMEM; - err = rxe_xmit_packet(qp, &ack_pkt, skb); + /* doesn't fail if frag == false */ + (void)rxe_prepare_pad_icrc(&ack, skb, payload, false); + + err = rxe_xmit_packet(qp, &ack, skb); if (err) pr_err_ratelimited("Failed sending %s\n", msg);

From patchwork Thu Oct 27 18:55:09 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13022545
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, jhack@hpe.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 15/17] RDMA/rxe: Extend send/write_data_in() for frags
Date: Thu, 27 Oct 2022 13:55:09 -0500
Message-Id: <20221027185510.33808-16-rpearsonhpe@gmail.com>
In-Reply-To: <20221027185510.33808-1-rpearsonhpe@gmail.com>
References: <20221027185510.33808-1-rpearsonhpe@gmail.com>

Extend send_data_in() and write_data_in() in rxe_resp.c to support fragmented received skbs. This is in preparation for using fragmented skbs.

Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_resp.c | 103 +++++++++++++++++---------- 1 file changed, 65 insertions(+), 38 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index 79dcd0f37140..cd50ae080eda 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -518,45 +518,89 @@ static enum resp_states check_rkey(struct rxe_qp *qp, return state; } -static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr, - int data_len) +/** + * rxe_send_data_in() - Copy payload data into receive buffer + * @qp: The queue pair + * @pkt: Request packet info + * + * Copy the packet payload into the receive buffer at the current offset. + * If a UD message also copy the IP header into the receive buffer. + * + * Returns: 0 if successful else an error resp_states value.
+ */ +static enum resp_states rxe_send_data_in(struct rxe_qp *qp, + struct rxe_pkt_info *pkt) { - struct sk_buff *skb = NULL; + struct sk_buff *skb = PKT_TO_SKB(pkt); + int nr_frags = skb_shinfo(skb)->nr_frags; + u8 *data_addr = payload_addr(pkt); + int data_len = payload_size(pkt); + union rdma_network_hdr hdr; + enum rxe_mr_copy_op op; int skb_offset = 0; int err; + /* Per IBA for UD packets copy the IP header into the receive buffer */ + if (qp_type(qp) == IB_QPT_UD || qp_type(qp) == IB_QPT_GSI) { + if (skb->protocol == htons(ETH_P_IP)) { + memset(&hdr.reserved, 0, sizeof(hdr.reserved)); + memcpy(&hdr.roce4grh, ip_hdr(skb), sizeof(hdr.roce4grh)); + } else { + memcpy(&hdr.ibgrh, ipv6_hdr(skb), sizeof(hdr)); + } + err = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE, + &qp->resp.wqe->dma, &hdr, skb_offset, + sizeof(hdr), RXE_COPY_TO_MR); + if (err) + goto err_out; + } + + op = nr_frags ? RXE_FRAG_TO_MR : RXE_COPY_TO_MR; + skb_offset = data_addr - skb_transport_header(skb); err = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma, data_addr, - skb_offset, data_len, RXE_COPY_TO_MR); - if (unlikely(err)) - return (err == -ENOSPC) ? RESPST_ERR_LENGTH - : RESPST_ERR_MALFORMED_WQE; + skb_offset, data_len, op); + if (err) + goto err_out; return RESPST_NONE; + +err_out: + return (err == -ENOSPC) ? RESPST_ERR_LENGTH + : RESPST_ERR_MALFORMED_WQE; } -static enum resp_states write_data_in(struct rxe_qp *qp, - struct rxe_pkt_info *pkt) +/** + * rxe_write_data_in() - Copy payload data to iova + * @qp: The queue pair + * @pkt: Request packet info + * + * Copy the packet payload to current iova and update iova. + * + * Returns: 0 if successful else an error resp_states value. 
+ */ +static enum resp_states rxe_write_data_in(struct rxe_qp *qp, + struct rxe_pkt_info *pkt) { struct sk_buff *skb = PKT_TO_SKB(pkt); - enum resp_states rc = RESPST_NONE; + int nr_frags = skb_shinfo(skb)->nr_frags; + u8 *data_addr = payload_addr(pkt); int data_len = payload_size(pkt); + enum rxe_mr_copy_op op; + int skb_offset; int err; - int skb_offset = 0; + op = nr_frags ? RXE_FRAG_TO_MR : RXE_COPY_TO_MR; + skb_offset = data_addr - skb_transport_header(skb); err = rxe_copy_mr_data(skb, qp->resp.mr, qp->resp.va + qp->resp.offset, - payload_addr(pkt), skb_offset, data_len, - RXE_COPY_TO_MR); - if (err) { - rc = RESPST_ERR_RKEY_VIOLATION; - goto out; - } + data_addr, skb_offset, data_len, op); + if (err) + return RESPST_ERR_RKEY_VIOLATION; qp->resp.va += data_len; qp->resp.resid -= data_len; -out: - return rc; + return RESPST_NONE; } static struct resp_res *rxe_prepare_res(struct rxe_qp *qp, @@ -882,30 +926,13 @@ static int invalidate_rkey(struct rxe_qp *qp, u32 rkey) static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt) { enum resp_states err; - struct sk_buff *skb = PKT_TO_SKB(pkt); - union rdma_network_hdr hdr; if (pkt->mask & RXE_SEND_MASK) { - if (qp_type(qp) == IB_QPT_UD || - qp_type(qp) == IB_QPT_GSI) { - if (skb->protocol == htons(ETH_P_IP)) { - memset(&hdr.reserved, 0, - sizeof(hdr.reserved)); - memcpy(&hdr.roce4grh, ip_hdr(skb), - sizeof(hdr.roce4grh)); - err = send_data_in(qp, &hdr, sizeof(hdr)); - } else { - err = send_data_in(qp, ipv6_hdr(skb), - sizeof(hdr)); - } - if (err) - return err; - } - err = send_data_in(qp, payload_addr(pkt), payload_size(pkt)); + err = rxe_send_data_in(qp, pkt); if (err) return err; } else if (pkt->mask & RXE_WRITE_MASK) { - err = write_data_in(qp, pkt); + err = rxe_write_data_in(qp, pkt); if (err) return err; } else if (pkt->mask & RXE_READ_MASK) {

From patchwork Thu Oct 27 18:55:10 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13022546
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, jhack@hpe.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 16/17] RDMA/rxe: Extend do_read() in rxe_comp.c for frags
Date: Thu, 27 Oct 2022 13:55:10 -0500
Message-Id: <20221027185510.33808-17-rpearsonhpe@gmail.com>
In-Reply-To: <20221027185510.33808-1-rpearsonhpe@gmail.com>
References: <20221027185510.33808-1-rpearsonhpe@gmail.com>

Extend do_read() in rxe_comp.c to support fragmented skbs. Rename it to rxe_do_read() and adjust callers to the changed API.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_comp.c | 40 ++++++++++++++++++----------
 1 file changed, 26 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index 3c1ecc88446d..85b3a4a6b55b 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -348,22 +348,34 @@ static inline enum comp_state check_ack(struct rxe_qp *qp,
 	return COMPST_ERROR;
 }

-static inline enum comp_state do_read(struct rxe_qp *qp,
-				      struct rxe_pkt_info *pkt,
-				      struct rxe_send_wqe *wqe)
+/**
+ * rxe_do_read() - Process read reply packet
+ * @qp: The queue pair
+ * @pkt: Packet info
+ * @wqe: The current work request
+ *
+ * Copy payload from incoming read reply packet into current
+ * iova.
+ *
+ * Returns: 0 on success else an error comp_state
+ */
+static inline enum comp_state rxe_do_read(struct rxe_qp *qp,
+					  struct rxe_pkt_info *pkt,
+					  struct rxe_send_wqe *wqe)
 {
 	struct sk_buff *skb = PKT_TO_SKB(pkt);
-	int skb_offset = 0;
-	int ret;
-
-	ret = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE,
-				&wqe->dma, payload_addr(pkt),
-				skb_offset, payload_size(pkt),
-				RXE_COPY_TO_MR);
-	if (ret) {
-		wqe->status = IB_WC_LOC_PROT_ERR;
+	int nr_frags = skb_shinfo(skb)->nr_frags;
+	u8 *data_addr = payload_addr(pkt);
+	int data_len = payload_size(pkt);
+	enum rxe_mr_copy_op op = nr_frags ? RXE_FRAG_TO_MR : RXE_COPY_TO_MR;
+	int skb_offset = data_addr - skb_transport_header(skb);
+	int err;
+
+	err = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE,
+				&wqe->dma, data_addr,
+				skb_offset, data_len, op);
+	if (err)
 		return COMPST_ERROR;
-	}

 	if (wqe->dma.resid == 0 && (pkt->mask & RXE_END_MASK))
 		return COMPST_COMP_ACK;
@@ -625,7 +637,7 @@ int rxe_completer(void *arg)
 		break;

 	case COMPST_READ:
-		state = do_read(qp, pkt, wqe);
+		state = rxe_do_read(qp, pkt, wqe);
 		break;

 	case COMPST_ATOMIC:

From patchwork Thu Oct 27 18:55:11 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13022547
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, jhack@hpe.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 17/17] RDMA/rxe: Enable sg code in rxe
Date: Thu, 27 Oct 2022 13:55:11 -0500
Message-Id: <20221027185510.33808-18-rpearsonhpe@gmail.com>
In-Reply-To: <20221027185510.33808-1-rpearsonhpe@gmail.com>
References: <20221027185510.33808-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Enable the sg (fragmented skb) code in rxe by initializing rxe_use_sg
to true and asking rxe_init_packet() for fragment support when building
request packets.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe.c     | 2 +-
 drivers/infiniband/sw/rxe/rxe_req.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
index 388d8103ec20..fd5e916ecce9 100644
--- a/drivers/infiniband/sw/rxe/rxe.c
+++ b/drivers/infiniband/sw/rxe/rxe.c
@@ -14,7 +14,7 @@ MODULE_DESCRIPTION("Soft RDMA transport");
 MODULE_LICENSE("Dual BSD/GPL");

 /* if true allow using fragmented skbs */
-bool rxe_use_sg;
+bool rxe_use_sg = true;

 /* free resources for a rxe device all objects created for this device must
  * have been destroyed
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index ea9ab63a2dc1..758346977da3 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -521,8 +521,8 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp,
 	struct rxe_av *av;
 	struct rxe_ah *ah;
 	int pad;
+	bool frag;
 	int err = -EINVAL;
-	bool frag = false;

 	pkt->rxe = rxe;
 	pkt->opcode = opcode;
@@ -543,7 +543,7 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp,
 		pad + RXE_ICRC_SIZE;

 	/* init skb */
-	skb = rxe_init_packet(qp, av, pkt, NULL);
+	skb = rxe_init_packet(qp, av, pkt, &frag);
 	if (unlikely(!skb))
 		goto err_out;