From patchwork Mon Oct 31 20:27:49 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13026325
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 01/18] RDMA/rxe: Isolate code to fill request roce headers
Date: Mon, 31 Oct 2022 15:27:49 -0500
Message-Id: <20221031202805.19138-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Isolate the code that fills in the RoCE headers of a request packet into
a subroutine named rxe_init_roce_hdrs().
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_req.c | 106 +++++++++++++++-------------
 1 file changed, 57 insertions(+), 49 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index f63771207970..bcfbc78c0b53 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -377,79 +377,87 @@ static inline int get_mtu(struct rxe_qp *qp)
 	return rxe->port.mtu_cap;
 }
 
-static struct sk_buff *init_req_packet(struct rxe_qp *qp,
-				       struct rxe_av *av,
-				       struct rxe_send_wqe *wqe,
-				       int opcode, u32 payload,
-				       struct rxe_pkt_info *pkt)
+static void rxe_init_roce_hdrs(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+			       struct rxe_pkt_info *pkt, int pad)
 {
-	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
-	struct sk_buff *skb;
-	struct rxe_send_wr *ibwr = &wqe->wr;
-	int pad = (-payload) & 0x3;
-	int paylen;
-	int solicited;
-	u32 qp_num;
-	int ack_req;
-
-	/* length from start of bth to end of icrc */
-	paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
-	pkt->paylen = paylen;
-
-	/* init skb */
-	skb = rxe_init_packet(rxe, av, paylen, pkt);
-	if (unlikely(!skb))
-		return NULL;
+	struct rxe_send_wr *wr = &wqe->wr;
+	int is_send;
+	int is_write_imm;
+	int is_end;
+	int solicited;
+	u32 dst_qpn;
+	u32 qkey;
+	int ack_req;
 
 	/* init bth */
-	solicited = (ibwr->send_flags & IB_SEND_SOLICITED) &&
-			(pkt->mask & RXE_END_MASK) &&
-			((pkt->mask & (RXE_SEND_MASK)) ||
-			(pkt->mask & (RXE_WRITE_MASK | RXE_IMMDT_MASK)) ==
-			(RXE_WRITE_MASK | RXE_IMMDT_MASK));
-
-	qp_num = (pkt->mask & RXE_DETH_MASK) ? ibwr->wr.ud.remote_qpn :
-					 qp->attr.dest_qp_num;
-
-	ack_req = ((pkt->mask & RXE_END_MASK) ||
-		(qp->req.noack_pkts++ > RXE_MAX_PKT_PER_ACK));
+	is_send = pkt->mask & RXE_SEND_MASK;
+	is_write_imm = (pkt->mask & RXE_WRITE_MASK) &&
+		       (pkt->mask & RXE_IMMDT_MASK);
+	is_end = pkt->mask & RXE_END_MASK;
+	solicited = (wr->send_flags & IB_SEND_SOLICITED) && is_end &&
+		    (is_send || is_write_imm);
+	dst_qpn = (pkt->mask & RXE_DETH_MASK) ? wr->wr.ud.remote_qpn :
+						qp->attr.dest_qp_num;
+	ack_req = is_end || (qp->req.noack_pkts++ > RXE_MAX_PKT_PER_ACK);
 	if (ack_req)
 		qp->req.noack_pkts = 0;
 
-	bth_init(pkt, pkt->opcode, solicited, 0, pad, IB_DEFAULT_PKEY_FULL, qp_num,
-		 ack_req, pkt->psn);
+	bth_init(pkt, pkt->opcode, solicited, 0, pad, IB_DEFAULT_PKEY_FULL,
+		 dst_qpn, ack_req, pkt->psn);
 
-	/* init optional headers */
+	/* init extended headers */
 	if (pkt->mask & RXE_RETH_MASK) {
-		reth_set_rkey(pkt, ibwr->wr.rdma.rkey);
+		reth_set_rkey(pkt, wr->wr.rdma.rkey);
 		reth_set_va(pkt, wqe->iova);
 		reth_set_len(pkt, wqe->dma.resid);
 	}
 
 	if (pkt->mask & RXE_IMMDT_MASK)
-		immdt_set_imm(pkt, ibwr->ex.imm_data);
+		immdt_set_imm(pkt, wr->ex.imm_data);
 
 	if (pkt->mask & RXE_IETH_MASK)
-		ieth_set_rkey(pkt, ibwr->ex.invalidate_rkey);
+		ieth_set_rkey(pkt, wr->ex.invalidate_rkey);
 
 	if (pkt->mask & RXE_ATMETH_MASK) {
 		atmeth_set_va(pkt, wqe->iova);
-		if (opcode == IB_OPCODE_RC_COMPARE_SWAP) {
-			atmeth_set_swap_add(pkt, ibwr->wr.atomic.swap);
-			atmeth_set_comp(pkt, ibwr->wr.atomic.compare_add);
+		if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP) {
+			atmeth_set_swap_add(pkt, wr->wr.atomic.swap);
+			atmeth_set_comp(pkt, wr->wr.atomic.compare_add);
 		} else {
-			atmeth_set_swap_add(pkt, ibwr->wr.atomic.compare_add);
+			atmeth_set_swap_add(pkt, wr->wr.atomic.compare_add);
 		}
-		atmeth_set_rkey(pkt, ibwr->wr.atomic.rkey);
+		atmeth_set_rkey(pkt, wr->wr.atomic.rkey);
 	}
 
 	if (pkt->mask & RXE_DETH_MASK) {
-		if (qp->ibqp.qp_num == 1)
-			deth_set_qkey(pkt, GSI_QKEY);
-		else
-			deth_set_qkey(pkt, ibwr->wr.ud.remote_qkey);
-		deth_set_sqp(pkt, qp->ibqp.qp_num);
+		qkey = (qp->ibqp.qp_num == 1) ? GSI_QKEY :
+						wr->wr.ud.remote_qkey;
+		deth_set_qkey(pkt, qkey);
+		deth_set_sqp(pkt, qp_num(qp));
 	}
+}
+
+static struct sk_buff *init_req_packet(struct rxe_qp *qp,
+				       struct rxe_av *av,
+				       struct rxe_send_wqe *wqe,
+				       int opcode, u32 payload,
+				       struct rxe_pkt_info *pkt)
+{
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	struct sk_buff *skb;
+	int pad = (-payload) & 0x3;
+	int paylen;
+
+	/* length from start of bth to end of icrc */
+	paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
+	pkt->paylen = paylen;
+
+	/* init skb */
+	skb = rxe_init_packet(rxe, av, paylen, pkt);
+	if (unlikely(!skb))
+		return NULL;
+
+	rxe_init_roce_hdrs(qp, wqe, pkt, pad);
 
 	return skb;
 }

From patchwork Mon Oct 31 20:27:51 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13026326
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 02/18] RDMA/rxe: Isolate request payload code in a subroutine
Date: Mon, 31 Oct 2022 15:27:51 -0500
Message-Id: <20221031202805.19138-2-rpearsonhpe@gmail.com>
In-Reply-To: <20221031202805.19138-1-rpearsonhpe@gmail.com>
References: <20221031202805.19138-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Isolate the code that fills the payload of a request packet into
a subroutine named rxe_init_payload().

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_req.c | 34 +++++++++++++++++------------
 1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index bcfbc78c0b53..10a75f4e3608 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -437,6 +437,25 @@ static void rxe_init_roce_hdrs(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 	}
 }
 
+static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+			    struct rxe_pkt_info *pkt, u32 payload)
+{
+	void *data;
+	int err = 0;
+
+	if (wqe->wr.send_flags & IB_SEND_INLINE) {
+		data = &wqe->dma.inline_data[wqe->dma.sge_offset];
+		memcpy(payload_addr(pkt), data, payload);
+		wqe->dma.resid -= payload;
+		wqe->dma.sge_offset += payload;
+	} else {
+		err = copy_data(qp->pd, 0, &wqe->dma, payload_addr(pkt),
+				payload, RXE_FROM_MR_OBJ);
+	}
+
+	return err;
+}
+
 static struct sk_buff *init_req_packet(struct rxe_qp *qp,
 				       struct rxe_av *av,
 				       struct rxe_send_wqe *wqe,
@@ -473,20 +492,7 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_av *av,
 		return err;
 
 	if (pkt->mask & RXE_WRITE_OR_SEND_MASK) {
-		if (wqe->wr.send_flags & IB_SEND_INLINE) {
-			u8 *tmp = &wqe->dma.inline_data[wqe->dma.sge_offset];
-
-			memcpy(payload_addr(pkt), tmp, payload);
-
-			wqe->dma.resid -= payload;
-			wqe->dma.sge_offset += payload;
-		} else {
-			err = copy_data(qp->pd, 0, &wqe->dma,
-					payload_addr(pkt), payload,
-					RXE_FROM_MR_OBJ);
-			if (err)
-				return err;
-		}
+		err = rxe_init_payload(qp, wqe, pkt, payload);
 
 		if (bth_pad(pkt)) {
 			u8 *pad = payload_addr(pkt) + payload;

From patchwork Mon Oct 31 20:27:52 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13026327
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 03/18] RDMA/rxe: Remove paylen parameter from rxe_init_packet
Date: Mon, 31 Oct 2022 15:27:52 -0500
Message-Id: <20221031202805.19138-3-rpearsonhpe@gmail.com>
In-Reply-To: <20221031202805.19138-1-rpearsonhpe@gmail.com>
References: <20221031202805.19138-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Clean up rxe_init_packet() by removing paylen as a parameter, since it
is already available in pkt.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h  | 2 +-
 drivers/infiniband/sw/rxe/rxe_net.c  | 6 +++---
 drivers/infiniband/sw/rxe/rxe_req.c  | 2 +-
 drivers/infiniband/sw/rxe/rxe_resp.c | 4 ++--
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index c2a5c8814a48..574a6afc1199 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -92,7 +92,7 @@ void rxe_mw_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_net.c */
 struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
-				int paylen, struct rxe_pkt_info *pkt);
+				struct rxe_pkt_info *pkt);
 int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt,
 		struct sk_buff *skb);
 int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index 35f327b9d4b8..1e4456f5cda2 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -443,7 +443,7 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 }
 
 struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
-				int paylen, struct rxe_pkt_info *pkt)
+				struct rxe_pkt_info *pkt)
 {
 	unsigned int hdr_len;
 	struct sk_buff *skb = NULL;
@@ -468,7 +468,7 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
 		rcu_read_unlock();
 		goto out;
 	}
-	skb = alloc_skb(paylen + hdr_len + LL_RESERVED_SPACE(ndev),
+	skb = alloc_skb(pkt->paylen + hdr_len + LL_RESERVED_SPACE(ndev),
 			GFP_ATOMIC);
 
 	if (unlikely(!skb)) {
@@ -489,7 +489,7 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
 	pkt->rxe = rxe;
 	pkt->port_num = port_num;
-	pkt->hdr = skb_put(skb, paylen);
+	pkt->hdr = skb_put(skb, pkt->paylen);
 	pkt->mask |= RXE_GRH_MASK;
 
 out:
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 10a75f4e3608..e9e865a5674f 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -472,7 +472,7 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
 	pkt->paylen = paylen;
 
 	/* init skb */
-	skb = rxe_init_packet(rxe, av, paylen, pkt);
+	skb = rxe_init_packet(rxe, av, pkt);
 	if (unlikely(!skb))
 		return NULL;
 
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 95d372db934d..c7f60c7b361c 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -670,15 +670,15 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
 	 */
 	pad = (-payload) & 0x3;
 	paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
+	ack->paylen = paylen;
 
-	skb = rxe_init_packet(rxe, &qp->pri_av, paylen, ack);
+	skb = rxe_init_packet(rxe, &qp->pri_av, ack);
 	if (!skb)
 		return NULL;
 
 	ack->qp = qp;
 	ack->opcode = opcode;
 	ack->mask = rxe_opcode[opcode].mask;
-	ack->paylen = paylen;
 	ack->psn = psn;
 
 	bth_init(ack, opcode, 0, 0, pad, IB_DEFAULT_PKEY_FULL,

From patchwork Mon Oct 31 20:27:53 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13026328
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 04/18] RDMA/rxe: Isolate code to build request packet
Date: Mon, 31 Oct 2022 15:27:53 -0500
Message-Id: <20221031202805.19138-4-rpearsonhpe@gmail.com>
In-Reply-To: <20221031202805.19138-1-rpearsonhpe@gmail.com>
References: <20221031202805.19138-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Isolate all the code to build a request packet into a single
subroutine called rxe_init_req_packet().
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_req.c  | 121 ++++++++++++---------------
 drivers/infiniband/sw/rxe/rxe_resp.c |  11 +--
 2 files changed, 58 insertions(+), 74 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index e9e865a5674f..6177c513e5b5 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -456,51 +456,76 @@ static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 	return err;
 }
 
-static struct sk_buff *init_req_packet(struct rxe_qp *qp,
-				       struct rxe_av *av,
-				       struct rxe_send_wqe *wqe,
-				       int opcode, u32 payload,
-				       struct rxe_pkt_info *pkt)
+static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp,
+					   struct rxe_send_wqe *wqe,
+					   int opcode, u32 payload,
+					   struct rxe_pkt_info *pkt)
 {
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
-	struct sk_buff *skb;
-	int pad = (-payload) & 0x3;
-	int paylen;
+	struct sk_buff *skb = NULL;
+	struct rxe_av *av;
+	struct rxe_ah *ah = NULL;
+	void *padp;
+	int pad;
+	int err = -EINVAL;
+
+	pkt->rxe = rxe;
+	pkt->opcode = opcode;
+	pkt->qp = qp;
+	pkt->psn = qp->req.psn;
+	pkt->mask = rxe_opcode[opcode].mask;
+	pkt->wqe = wqe;
+	pkt->port_num = 1;
+
+	/* get address vector and address handle for UD qps only */
+	av = rxe_get_av(pkt, &ah);
+	if (unlikely(!av))
+		goto err_out;
 
 	/* length from start of bth to end of icrc */
-	paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
-	pkt->paylen = paylen;
+	pad = (-payload) & 0x3;
+	pkt->paylen = rxe_opcode[opcode].length + payload +
+		      pad + RXE_ICRC_SIZE;
 
 	/* init skb */
 	skb = rxe_init_packet(rxe, av, pkt);
 	if (unlikely(!skb))
-		return NULL;
+		goto err_out;
 
 	rxe_init_roce_hdrs(qp, wqe, pkt, pad);
 
-	return skb;
-}
+	if (pkt->mask & RXE_WRITE_OR_SEND_MASK) {
+		err = rxe_init_payload(qp, wqe, pkt, payload);
+		if (err)
+			goto err_out;
+	}
 
-static int finish_packet(struct rxe_qp *qp, struct rxe_av *av,
-			 struct rxe_send_wqe *wqe, struct rxe_pkt_info *pkt,
-			 struct sk_buff *skb, u32 payload)
-{
-	int err;
+	if (pad) {
+		padp = payload_addr(pkt) + payload;
+		memset(padp, 0, pad);
+	}
 
+	/* IP and UDP network headers */
 	err = rxe_prepare(av, pkt, skb);
 	if (err)
-		return err;
+		goto err_out;
 
-	if (pkt->mask & RXE_WRITE_OR_SEND_MASK) {
-		err = rxe_init_payload(qp, wqe, pkt, payload);
-		if (bth_pad(pkt)) {
-			u8 *pad = payload_addr(pkt) + payload;
+	if (ah)
+		rxe_put(ah);
 
-			memset(pad, 0, bth_pad(pkt));
-		}
-	}
+	return skb;
 
-	return 0;
+err_out:
+	if (err == -EFAULT)
+		wqe->status = IB_WC_LOC_PROT_ERR;
+	else
+		wqe->status = IB_WC_LOC_QP_OP_ERR;
+	if (skb)
+		kfree_skb(skb);
+	if (ah)
+		rxe_put(ah);
+
+	return NULL;
 }
 
 static void update_wqe_state(struct rxe_qp *qp,
@@ -630,7 +655,6 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 int rxe_requester(void *arg)
 {
 	struct rxe_qp *qp = (struct rxe_qp *)arg;
-	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 	struct rxe_pkt_info pkt;
 	struct sk_buff *skb;
 	struct rxe_send_wqe *wqe;
@@ -643,8 +667,6 @@ int rxe_requester(void *arg)
 	struct rxe_send_wqe rollback_wqe;
 	u32 rollback_psn;
 	struct rxe_queue *q = qp->sq.queue;
-	struct rxe_ah *ah;
-	struct rxe_av *av;
 
 	if (!rxe_get(qp))
 		return -EAGAIN;
@@ -753,44 +775,9 @@ int rxe_requester(void *arg)
 		payload = mtu;
 	}
 
-	pkt.rxe = rxe;
-	pkt.opcode = opcode;
-	pkt.qp = qp;
-	pkt.psn = qp->req.psn;
-	pkt.mask = rxe_opcode[opcode].mask;
-	pkt.wqe = wqe;
-
-	av = rxe_get_av(&pkt, &ah);
-	if (unlikely(!av)) {
-		pr_err("qp#%d Failed no address vector\n", qp_num(qp));
-		wqe->status = IB_WC_LOC_QP_OP_ERR;
-		goto err;
-	}
-
-	skb = init_req_packet(qp, av, wqe, opcode, payload, &pkt);
-	if (unlikely(!skb)) {
-		pr_err("qp#%d Failed allocating skb\n", qp_num(qp));
-		wqe->status = IB_WC_LOC_QP_OP_ERR;
-		if (ah)
-			rxe_put(ah);
-		goto err;
-	}
-
-	err = finish_packet(qp, av, wqe, &pkt, skb, payload);
-	if (unlikely(err)) {
-		pr_debug("qp#%d Error during finish packet\n", qp_num(qp));
-		if (err == -EFAULT)
-			wqe->status = IB_WC_LOC_PROT_ERR;
-		else
-			wqe->status = IB_WC_LOC_QP_OP_ERR;
-		kfree_skb(skb);
-		if (ah)
-			rxe_put(ah);
+	skb = rxe_init_req_packet(qp, wqe, opcode, payload, &pkt);
+	if (unlikely(!skb))
 		goto err;
-	}
-
-	if (ah)
-		rxe_put(ah);
 
 	/*
 	 * To prevent a race on wqe access between requester and completer,
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index c7f60c7b361c..44b5c159cef9 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -665,22 +665,19 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
 	int pad;
 	int err;
 
-	/*
-	 * allocate packet
-	 */
 	pad = (-payload) & 0x3;
 	paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
 	ack->paylen = paylen;
 
-	skb = rxe_init_packet(rxe, &qp->pri_av, ack);
-	if (!skb)
-		return NULL;
-
 	ack->qp = qp;
 	ack->opcode = opcode;
 	ack->mask = rxe_opcode[opcode].mask;
 	ack->psn = psn;
 
+	skb = rxe_init_packet(rxe, &qp->pri_av, ack);
+	if (!skb)
+		return NULL;
+
 	bth_init(ack, opcode, 0, 0, pad, IB_DEFAULT_PKEY_FULL,
 		 qp->attr.dest_qp_num, 0, psn);

From patchwork Mon Oct 31 20:27:54 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13026329
X-Patchwork-Delegate: jgg@ziepe.ca
(2603-8081-140c-1a00-ce7d-a808-badd-629d.res6.spectrum.com. [2603:8081:140c:1a00:ce7d:a808:badd:629d]) by smtp.googlemail.com with ESMTPSA id w1-20020a056808018100b00342e8bd2299sm2721215oic.6.2022.10.31.13.28.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 31 Oct 2022 13:28:33 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v2 05/18] RDMA/rxe: Add sg fragment ops Date: Mon, 31 Oct 2022 15:27:54 -0500 Message-Id: <20221031202805.19138-5-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20221031202805.19138-1-rpearsonhpe@gmail.com> References: <20221031202805.19138-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Rename rxe_mr_copy_dir to rxe_mr_copy_op and add new operations for copying between an skb fragment list and an mr. This is in preparation for supporting fragmented skbs. 
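The rename turns a two-way direction flag into a four-way operation code, so a single copy routine can later service both linear packet buffers and frag lists. A minimal userspace sketch of that dispatch pattern, with illustrative names rather than the kernel API (the frag operations are deliberately left unimplemented, as later patches in the series supply them):

```c
#include <assert.h>
#include <string.h>

/* Userspace sketch (not the kernel source): one copy routine
 * dispatching on an operation code modeled after rxe_mr_copy_op.
 * The FRAG_* operations build an skb frag list in the real driver;
 * here they are not modeled, to keep the sketch small. */
enum mr_copy_op {
	COPY_TO_MR,	/* packet buffer -> MR */
	COPY_FROM_MR,	/* MR -> packet buffer */
	FRAG_TO_MR,	/* skb frag list -> MR (not modeled) */
	FRAG_FROM_MR,	/* MR -> skb frag list (not modeled) */
};

static int mr_copy(void *mr_va, void *pkt, size_t length, enum mr_copy_op op)
{
	switch (op) {
	case COPY_TO_MR:
		memcpy(mr_va, pkt, length);
		return 0;
	case COPY_FROM_MR:
		memcpy(pkt, mr_va, length);
		return 0;
	default:
		return -1;	/* frag ops handled elsewhere */
	}
}
```

The point of the widened enum is that callers such as copy_data() need not know whether the payload lives in linear memory or a frag list; they just pass the operation through.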
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_comp.c | 4 ++-- drivers/infiniband/sw/rxe/rxe_loc.h | 4 ++-- drivers/infiniband/sw/rxe/rxe_mr.c | 14 +++++++------- drivers/infiniband/sw/rxe/rxe_req.c | 2 +- drivers/infiniband/sw/rxe/rxe_resp.c | 9 +++++---- drivers/infiniband/sw/rxe/rxe_verbs.h | 15 ++++++++++++--- 6 files changed, 29 insertions(+), 19 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c index c9170dd99f3a..77640e35ae88 100644 --- a/drivers/infiniband/sw/rxe/rxe_comp.c +++ b/drivers/infiniband/sw/rxe/rxe_comp.c @@ -356,7 +356,7 @@ static inline enum comp_state do_read(struct rxe_qp *qp, ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &wqe->dma, payload_addr(pkt), - payload_size(pkt), RXE_TO_MR_OBJ); + payload_size(pkt), RXE_COPY_TO_MR); if (ret) { wqe->status = IB_WC_LOC_PROT_ERR; return COMPST_ERROR; @@ -378,7 +378,7 @@ static inline enum comp_state do_atomic(struct rxe_qp *qp, ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &wqe->dma, &atomic_orig, - sizeof(u64), RXE_TO_MR_OBJ); + sizeof(u64), RXE_COPY_TO_MR); if (ret) { wqe->status = IB_WC_LOC_PROT_ERR; return COMPST_ERROR; diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 574a6afc1199..ff803a957ac1 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -69,9 +69,9 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova, int access, struct rxe_mr *mr); int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr); int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, - enum rxe_mr_copy_dir dir); + enum rxe_mr_copy_op op); int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma, - void *addr, int length, enum rxe_mr_copy_dir dir); + void *addr, int length, enum rxe_mr_copy_op op); void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length); struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key, enum 
rxe_mr_lookup_type type); diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index d4f10c2d1aa7..60a8034f1416 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -290,7 +290,7 @@ void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length) * a mr object starting at iova. */ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, - enum rxe_mr_copy_dir dir) + enum rxe_mr_copy_op op) { int err; int bytes; @@ -307,9 +307,9 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, if (mr->ibmr.type == IB_MR_TYPE_DMA) { u8 *src, *dest; - src = (dir == RXE_TO_MR_OBJ) ? addr : ((void *)(uintptr_t)iova); + src = (op == RXE_COPY_TO_MR) ? addr : ((void *)(uintptr_t)iova); - dest = (dir == RXE_TO_MR_OBJ) ? ((void *)(uintptr_t)iova) : addr; + dest = (op == RXE_COPY_TO_MR) ? ((void *)(uintptr_t)iova) : addr; memcpy(dest, src, length); @@ -333,8 +333,8 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, u8 *src, *dest; va = (u8 *)(uintptr_t)buf->addr + offset; - src = (dir == RXE_TO_MR_OBJ) ? addr : va; - dest = (dir == RXE_TO_MR_OBJ) ? va : addr; + src = (op == RXE_COPY_TO_MR) ? addr : va; + dest = (op == RXE_COPY_TO_MR) ? 
va : addr; bytes = buf->size - offset; @@ -372,7 +372,7 @@ int copy_data( struct rxe_dma_info *dma, void *addr, int length, - enum rxe_mr_copy_dir dir) + enum rxe_mr_copy_op op) { int bytes; struct rxe_sge *sge = &dma->sge[dma->cur_sge]; @@ -433,7 +433,7 @@ int copy_data( if (bytes > 0) { iova = sge->addr + offset; - err = rxe_mr_copy(mr, iova, addr, bytes, dir); + err = rxe_mr_copy(mr, iova, addr, bytes, op); if (err) goto err2; diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 6177c513e5b5..b111a6ddf66c 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -450,7 +450,7 @@ static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe, wqe->dma.sge_offset += payload; } else { err = copy_data(qp->pd, 0, &wqe->dma, payload_addr(pkt), - payload, RXE_FROM_MR_OBJ); + payload, RXE_COPY_FROM_MR); } return err; diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index 44b5c159cef9..023df0562258 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -524,7 +524,7 @@ static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr, int err; err = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma, - data_addr, data_len, RXE_TO_MR_OBJ); + data_addr, data_len, RXE_COPY_TO_MR); if (unlikely(err)) return (err == -ENOSPC) ? 
RESPST_ERR_LENGTH : RESPST_ERR_MALFORMED_WQE; @@ -540,7 +540,7 @@ static enum resp_states write_data_in(struct rxe_qp *qp, int data_len = payload_size(pkt); err = rxe_mr_copy(qp->resp.mr, qp->resp.va + qp->resp.offset, - payload_addr(pkt), data_len, RXE_TO_MR_OBJ); + payload_addr(pkt), data_len, RXE_COPY_TO_MR); if (err) { rc = RESPST_ERR_RKEY_VIOLATION; goto out; @@ -807,8 +807,9 @@ static enum resp_states read_reply(struct rxe_qp *qp, return RESPST_ERR_RNR; err = rxe_mr_copy(mr, res->read.va, payload_addr(&ack_pkt), - payload, RXE_FROM_MR_OBJ); - rxe_put(mr); + payload, RXE_COPY_FROM_MR); + if (mr) + rxe_put(mr); if (err) { kfree_skb(skb); return RESPST_ERR_RKEY_VIOLATION; diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 22a299b0a9f0..08275b0c7a6e 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -267,9 +267,18 @@ enum rxe_mr_state { RXE_MR_STATE_VALID, }; -enum rxe_mr_copy_dir { - RXE_TO_MR_OBJ, - RXE_FROM_MR_OBJ, +/** + * enum rxe_mr_copy_op - Operations performed by rxe_copy_mr/dma_data() + * @RXE_COPY_TO_MR: Copy data from packet to MR(s) + * @RXE_COPY_FROM_MR: Copy data from MR(s) to packet + * @RXE_FRAG_TO_MR: Copy data from frag list to MR(s) + * @RXE_FRAG_FROM_MR: Copy data from MR(s) to frag list + */ +enum rxe_mr_copy_op { + RXE_COPY_TO_MR, + RXE_COPY_FROM_MR, + RXE_FRAG_TO_MR, + RXE_FRAG_FROM_MR, }; enum rxe_mr_lookup_type {

From patchwork Mon Oct 31 20:27:55 2022
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 06/18] RDMA/rxe: Add rxe_add_frag() to rxe_mr.c
Date: Mon, 31 Oct 2022 15:27:55 -0500
Message-Id: <20221031202805.19138-6-rpearsonhpe@gmail.com>
In-Reply-To: <20221031202805.19138-1-rpearsonhpe@gmail.com>
References: <20221031202805.19138-1-rpearsonhpe@gmail.com>

Add the subroutine rxe_add_frag() to add a fragment to an skb. This is in preparation for supporting fragmented skbs.
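Stripped of the skb and page-reference details, the bookkeeping rxe_add_frag() performs can be sketched in userspace as follows (all names here are illustrative stand-ins, not the kernel structures):

```c
#include <assert.h>

/* Userspace sketch of the rxe_add_frag() bookkeeping: append a
 * fragment descriptor to a bounded array, mirroring the
 * nr_frags >= MAX_SKB_FRAGS check. The kernel version additionally
 * takes a page reference (get_page) because kfree_skb() will later
 * call put_page() on each fragment. */
#define MAX_FRAGS 17	/* stand-in for MAX_SKB_FRAGS */

struct frag {
	int offset;
	int len;
};

struct pkt {
	int nr_frags;
	int data_len;	/* total bytes referenced by frags */
	struct frag frags[MAX_FRAGS];
};

static int add_frag(struct pkt *p, int offset, int length)
{
	if (p->nr_frags >= MAX_FRAGS)
		return -1;	/* -EINVAL in the driver */

	p->frags[p->nr_frags].offset = offset;
	p->frags[p->nr_frags].len = length;
	p->nr_frags++;
	p->data_len += length;

	return 0;
}
```

The invariant to preserve is that the packet's total and nonlinear lengths both grow by the fragment length while nr_frags stays within the fixed limit.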
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_loc.h | 2 ++ drivers/infiniband/sw/rxe/rxe_mr.c | 34 +++++++++++++++++++++++++++++ 2 files changed, 36 insertions(+) diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index ff803a957ac1..81a611778d44 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -68,6 +68,8 @@ void rxe_mr_init_dma(int access, struct rxe_mr *mr); int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova, int access, struct rxe_mr *mr); int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr); +int rxe_add_frag(struct sk_buff *skb, struct rxe_phys_buf *buf, + int length, int offset); int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, enum rxe_mr_copy_op op); int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma, diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 60a8034f1416..2dcf37f32330 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -286,6 +286,40 @@ void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length) return addr; } +/** + * rxe_add_frag() - Add a frag to a nonlinear packet + * @skb: The packet buffer + * @buf: Kernel buffer info + * @length: Length of fragment + * @offset: Offset of fragment in buf + * + * Returns: 0 on success else a negative errno + */ +int rxe_add_frag(struct sk_buff *skb, struct rxe_phys_buf *buf, + int length, int offset) +{ + int nr_frags = skb_shinfo(skb)->nr_frags; + skb_frag_t *frag = &skb_shinfo(skb)->frags[nr_frags]; + + if (nr_frags >= MAX_SKB_FRAGS) { + pr_debug("%s: nr_frags (%d) >= MAX_SKB_FRAGS\n", + __func__, nr_frags); + return -EINVAL; + } + + frag->bv_len = length; + frag->bv_offset = offset; + frag->bv_page = virt_to_page(buf->addr); + /* because kfree_skb will call put_page() */ + get_page(frag->bv_page); + skb_shinfo(skb)->nr_frags++; + + skb->data_len += length; + skb->len 
+= length; + return 0; +} + /* copy data from a range (vaddr, vaddr+length-1) to or from * a mr object starting at iova. */

From patchwork Mon Oct 31 20:27:56 2022
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 07/18] RDMA/rxe: Add routine to compute the number of frags
Date: Mon, 31 Oct 2022 15:27:56 -0500
Message-Id: <20221031202805.19138-7-rpearsonhpe@gmail.com>
In-Reply-To: <20221031202805.19138-1-rpearsonhpe@gmail.com>
References: <20221031202805.19138-1-rpearsonhpe@gmail.com>

Add a subroutine named rxe_num_mr_frags() to compute the number of skb frags needed to hold length bytes in an skb when sending data from an mr starting at iova.

Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_loc.h | 1 + drivers/infiniband/sw/rxe/rxe_mr.c | 68 +++++++++++++++++++++++++++++ 2 files changed, 69 insertions(+) diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 81a611778d44..87fb052c1d0a 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -70,6 +70,7 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova, int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr); int rxe_add_frag(struct sk_buff *skb, struct rxe_phys_buf *buf, int length, int offset); +int rxe_num_mr_frags(struct rxe_mr *mr, u64 iova, int length); int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, enum rxe_mr_copy_op op); int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma, diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 2dcf37f32330..23abcf2a0198 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++
b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -320,6 +320,74 @@ int rxe_add_frag(struct sk_buff *skb, struct rxe_phys_buf *buf, return 0; } +/** + * rxe_num_mr_frags() - Compute the number of skb frags needed to copy + * length bytes from an mr to an skb frag list. + * @mr: mr to copy data from + * @iova: iova in memory region as starting point + * @length: number of bytes to transfer + * + * Returns: the number of frags needed or a negative error + */ +int rxe_num_mr_frags(struct rxe_mr *mr, u64 iova, int length) +{ + struct rxe_phys_buf *buf; + struct rxe_map **map; + size_t buf_offset; + int bytes; + int m; + int i; + int num_frags = 0; + int err; + + if (length == 0) + return 0; + + if (mr->type == IB_MR_TYPE_DMA) { + while (length > 0) { + buf_offset = iova & ~PAGE_MASK; + bytes = PAGE_SIZE - buf_offset; + if (bytes > length) + bytes = length; + length -= bytes; + num_frags++; + } + + return num_frags; + } + + WARN_ON_ONCE(!mr->map); + + err = mr_check_range(mr, iova, length); + if (err) + return err; + + lookup_iova(mr, iova, &m, &i, &buf_offset); + + map = mr->map + m; + buf = map[0]->buf + i; + + while (length > 0) { + bytes = buf->size - buf_offset; + if (bytes > length) + bytes = length; + length -= bytes; + buf_offset = 0; + buf++; + i++; + num_frags++; + + /* we won't overrun since we checked range above */ + if (i == RXE_BUF_PER_MAP) { + i = 0; + map++; + buf = map[0]->buf; + } + } + + return num_frags; +} + /* copy data from a range (vaddr, vaddr+length-1) to or from * a mr object starting at iova. 
*/

From patchwork Mon Oct 31 20:27:57 2022
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 08/18] RDMA/rxe: Extend rxe_mr_copy to support skb frags
Date: Mon, 31 Oct 2022 15:27:57 -0500
Message-Id: <20221031202805.19138-8-rpearsonhpe@gmail.com>
In-Reply-To: <20221031202805.19138-1-rpearsonhpe@gmail.com>
References: <20221031202805.19138-1-rpearsonhpe@gmail.com>

rxe_mr_copy() currently supports copying between an mr and a contiguous region of kernel memory. Rename rxe_mr_copy() to rxe_copy_mr_data(). Extend the operations to support copying between an mr and an skb fragment list.
Fixup calls to rxe_mr_copy() to support the new API. This is in preparation for supporting fragmented skbs. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_loc.h | 3 + drivers/infiniband/sw/rxe/rxe_mr.c | 144 +++++++++++++++++++-------- drivers/infiniband/sw/rxe/rxe_resp.c | 20 ++-- 3 files changed, 117 insertions(+), 50 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 87fb052c1d0a..c62fc2613a01 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -71,6 +71,9 @@ int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr); int rxe_add_frag(struct sk_buff *skb, struct rxe_phys_buf *buf, int length, int offset); int rxe_num_mr_frags(struct rxe_mr *mr, u64 iova, int length); +int rxe_copy_mr_data(struct sk_buff *skb, struct rxe_mr *mr, u64 iova, + void *addr, int skb_offset, int length, + enum rxe_mr_copy_op op); int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, enum rxe_mr_copy_op op); int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma, diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 23abcf2a0198..37d35413da94 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -343,7 +343,7 @@ int rxe_num_mr_frags(struct rxe_mr *mr, u64 iova, int length) if (length == 0) return 0; - if (mr->type == IB_MR_TYPE_DMA) { + if (mr->ibmr.type == IB_MR_TYPE_DMA) { while (length > 0) { buf_offset = iova & ~PAGE_MASK; bytes = PAGE_SIZE - buf_offset; @@ -388,70 +388,130 @@ int rxe_num_mr_frags(struct rxe_mr *mr, u64 iova, int length) return num_frags; } -/* copy data from a range (vaddr, vaddr+length-1) to or from - * a mr object starting at iova. 
+/** + * rxe_copy_mr_data() - transfer data between an MR and a packet + * @skb: the packet buffer + * @mr: the MR + * @iova: the address in the MR + * @addr: the address in the packet (TO/FROM MR only) + * @length: the length to transfer + * @op: copy operation (TO MR, FROM MR or FRAG MR) + * + * Copy data from a range (addr, addr+length-1) in a packet + * to or from a range in an MR object at (iova, iova+length-1). + * Or, build a frag list referencing the MR range. + * + * Caller must verify that the access permissions support the + * operation. + * + * Returns: 0 on success or an error */ -int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, - enum rxe_mr_copy_op op) +int rxe_copy_mr_data(struct sk_buff *skb, struct rxe_mr *mr, u64 iova, + void *addr, int skb_offset, int length, + enum rxe_mr_copy_op op) { - int err; - int bytes; - u8 *va; - struct rxe_map **map; - struct rxe_phys_buf *buf; - int m; - int i; - size_t offset; + struct rxe_phys_buf dmabuf; + struct rxe_phys_buf *buf; + struct rxe_map **map; + size_t buf_offset; + int bytes; + void *va; + int m; + int i; + int err = 0; if (length == 0) return 0; - if (mr->ibmr.type == IB_MR_TYPE_DMA) { - u8 *src, *dest; - - src = (op == RXE_COPY_TO_MR) ? addr : ((void *)(uintptr_t)iova); - - dest = (op == RXE_COPY_TO_MR) ? 
((void *)(uintptr_t)iova) : addr; + switch (mr->ibmr.type) { + case IB_MR_TYPE_DMA: + va = (void *)(uintptr_t)iova; + switch (op) { + case RXE_COPY_TO_MR: + memcpy(va, addr, length); + break; + case RXE_COPY_FROM_MR: + memcpy(addr, va, length); + break; + case RXE_FRAG_TO_MR: + err = skb_copy_bits(skb, skb_offset, va, length); + if (err) + return err; + break; + case RXE_FRAG_FROM_MR: + /* limit frag length to PAGE_SIZE */ + while (length) { + dmabuf.addr = iova & PAGE_MASK; + buf_offset = iova & ~PAGE_MASK; + bytes = PAGE_SIZE - buf_offset; + if (bytes > length) + bytes = length; + err = rxe_add_frag(skb, &dmabuf, bytes, + buf_offset); + if (err) + return err; + iova += bytes; + length -= bytes; + } + break; + } + return 0; - memcpy(dest, src, length); + case IB_MR_TYPE_MEM_REG: + case IB_MR_TYPE_USER: + break; - return 0; + default: + pr_warn("%s: mr type (%d) not supported\n", + __func__, mr->ibmr.type); + return -EINVAL; } WARN_ON_ONCE(!mr->map); err = mr_check_range(mr, iova, length); - if (err) { - err = -EFAULT; - goto err1; - } + if (err) + return -EFAULT; - lookup_iova(mr, iova, &m, &i, &offset); + lookup_iova(mr, iova, &m, &i, &buf_offset); map = mr->map + m; - buf = map[0]->buf + i; + buf = map[0]->buf + i; while (length > 0) { - u8 *src, *dest; - - va = (u8 *)(uintptr_t)buf->addr + offset; - src = (op == RXE_COPY_TO_MR) ? addr : va; - dest = (op == RXE_COPY_TO_MR) ? 
va : addr; - - bytes = buf->size - offset; - + va = (void *)(uintptr_t)buf->addr + buf_offset; + bytes = buf->size - buf_offset; if (bytes > length) bytes = length; - memcpy(dest, src, bytes); + switch (op) { + case RXE_COPY_TO_MR: + memcpy(va, addr, bytes); + break; + case RXE_COPY_FROM_MR: + memcpy(addr, va, bytes); + break; + case RXE_FRAG_TO_MR: + err = skb_copy_bits(skb, skb_offset, va, bytes); + if (err) + return err; + break; + case RXE_FRAG_FROM_MR: + err = rxe_add_frag(skb, buf, bytes, buf_offset); + if (err) + return err; + break; + } - length -= bytes; - addr += bytes; + length -= bytes; + addr += bytes; - offset = 0; + buf_offset = 0; + skb_offset += bytes; buf++; i++; + /* we won't overrun since we checked range above */ if (i == RXE_BUF_PER_MAP) { i = 0; map++; @@ -460,9 +520,6 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, } return 0; - -err1: - return err; } /* copy data in or out of a wqe, i.e. sg list @@ -535,7 +592,8 @@ int copy_data( if (bytes > 0) { iova = sge->addr + offset; - err = rxe_mr_copy(mr, iova, addr, bytes, op); + err = rxe_copy_mr_data(NULL, mr, iova, addr, + 0, bytes, op); if (err) goto err2; diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index 023df0562258..5f00477544fa 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -535,12 +535,15 @@ static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr, static enum resp_states write_data_in(struct rxe_qp *qp, struct rxe_pkt_info *pkt) { + struct sk_buff *skb = PKT_TO_SKB(pkt); enum resp_states rc = RESPST_NONE; - int err; int data_len = payload_size(pkt); + int err; + int skb_offset = 0; - err = rxe_mr_copy(qp->resp.mr, qp->resp.va + qp->resp.offset, - payload_addr(pkt), data_len, RXE_COPY_TO_MR); + err = rxe_copy_mr_data(skb, qp->resp.mr, qp->resp.va + qp->resp.offset, + payload_addr(pkt), skb_offset, data_len, + RXE_COPY_TO_MR); if (err) { rc = 
RESPST_ERR_RKEY_VIOLATION; goto out; @@ -766,6 +769,7 @@ static enum resp_states read_reply(struct rxe_qp *qp, int err; struct resp_res *res = qp->resp.res; struct rxe_mr *mr; + int skb_offset = 0; if (!res) { res = rxe_prepare_res(qp, req_pkt, RXE_READ_MASK); @@ -806,15 +810,17 @@ static enum resp_states read_reply(struct rxe_qp *qp, if (!skb) return RESPST_ERR_RNR; - err = rxe_mr_copy(mr, res->read.va, payload_addr(&ack_pkt), - payload, RXE_COPY_FROM_MR); - if (mr) - rxe_put(mr); + err = rxe_copy_mr_data(skb, mr, res->read.va, payload_addr(&ack_pkt), + skb_offset, payload, RXE_COPY_FROM_MR); if (err) { kfree_skb(skb); + rxe_put(mr); return RESPST_ERR_RKEY_VIOLATION; } + if (mr) + rxe_put(mr); + if (bth_pad(&ack_pkt)) { u8 *pad = payload_addr(&ack_pkt) + payload;

From patchwork Mon Oct 31 20:27:58 2022
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 09/18] RDMA/rxe: Add routine to compute number of frags for dma
Date: Mon, 31 Oct 2022 15:27:58 -0500
Message-Id: <20221031202805.19138-9-rpearsonhpe@gmail.com>
In-Reply-To: <20221031202805.19138-1-rpearsonhpe@gmail.com>
References: <20221031202805.19138-1-rpearsonhpe@gmail.com>

Add routine named rxe_num_dma_frags() to compute the number of skb frags needed to copy length bytes from a dma info struct.

Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_loc.h | 4 +- drivers/infiniband/sw/rxe/rxe_mr.c | 67 ++++++++++++++++++++++++++++- 2 files changed, 69 insertions(+), 2 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index c62fc2613a01..4c30ffaccc92 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -76,10 +76,12 @@ int rxe_copy_mr_data(struct sk_buff *skb, struct rxe_mr *mr, u64 iova, enum rxe_mr_copy_op op); int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, enum rxe_mr_copy_op op); +int rxe_num_dma_frags(const struct rxe_pd *pd, const struct rxe_dma_info *dma, + int length); int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma, void *addr, int length, enum rxe_mr_copy_op op); void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length); -struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key, +struct rxe_mr *lookup_mr(const struct rxe_pd *pd, int access, u32 key, enum rxe_mr_lookup_type type); int
mr_check_range(struct rxe_mr *mr, u64 iova, size_t length); int advance_dma_data(struct rxe_dma_info *dma, unsigned int length); diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 37d35413da94..99d0b5afefc3 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -522,6 +522,71 @@ int rxe_copy_mr_data(struct sk_buff *skb, struct rxe_mr *mr, u64 iova, return 0; } +/** + * rxe_num_dma_frags() - Count the number of skb frags needed to copy + * length bytes from a dma info struct to an skb + * @pd: protection domain used by dma entries + * @dma: dma info + * @length: number of bytes to copy + * + * Returns: number of frags needed or negative error + */ +int rxe_num_dma_frags(const struct rxe_pd *pd, const struct rxe_dma_info *dma, + int length) +{ + int cur_sge = dma->cur_sge; + const struct rxe_sge *sge = &dma->sge[cur_sge]; + int buf_offset = dma->sge_offset; + int resid = dma->resid; + struct rxe_mr *mr = NULL; + int bytes; + u64 iova; + int ret; + int num_frags = 0; + + if (length == 0) + return 0; + + if (length > resid) + return -EINVAL; + + while (length > 0) { + if (buf_offset >= sge->length) { + if (mr) + rxe_put(mr); + + sge++; + cur_sge++; + buf_offset = 0; + + if (cur_sge >= dma->num_sge) + return -ENOSPC; + if (!sge->length) + continue; + } + + mr = lookup_mr(pd, 0, sge->lkey, RXE_LOOKUP_LOCAL); + if (!mr) + return -EINVAL; + + bytes = min_t(int, length, sge->length - buf_offset); + if (bytes > 0) { + iova = sge->addr + buf_offset; + ret = rxe_num_mr_frags(mr, iova, length); + if (ret < 0) { + rxe_put(mr); + return ret; + } + + buf_offset += bytes; + resid -= bytes; + length -= bytes; + } + } + + return num_frags; +} + /* copy data in or out of a wqe, i.e. 
sg list * under the control of a dma descriptor */ @@ -658,7 +723,7 @@ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length) * (3) verify that the mr can support the requested access * (4) verify that mr state is valid */ -struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key, +struct rxe_mr *lookup_mr(const struct rxe_pd *pd, int access, u32 key, enum rxe_mr_lookup_type type) { struct rxe_mr *mr;

From patchwork Mon Oct 31 20:27:59 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13026335
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 10/18] RDMA/rxe: Extend copy_data to support skb frags
Date: Mon, 31 Oct 2022 15:27:59 -0500
Message-Id: <20221031202805.19138-10-rpearsonhpe@gmail.com>
In-Reply-To: <20221031202805.19138-1-rpearsonhpe@gmail.com>
References: <20221031202805.19138-1-rpearsonhpe@gmail.com>

copy_data() currently supports copying between an mr and the scatter-gather list of a wqe. Rename copy_data() to rxe_copy_dma_data(). Extend the operations to support copying between an sg list and an skb fragment list. Fix up calls to copy_data() to use the new API. This is in preparation for supporting fragmented skbs.
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_comp.c | 17 ++-- drivers/infiniband/sw/rxe/rxe_loc.h | 5 +- drivers/infiniband/sw/rxe/rxe_mr.c | 122 ++++++++++++--------------- drivers/infiniband/sw/rxe/rxe_req.c | 11 ++- drivers/infiniband/sw/rxe/rxe_resp.c | 7 +- 5 files changed, 79 insertions(+), 83 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c index 77640e35ae88..3c1ecc88446d 100644 --- a/drivers/infiniband/sw/rxe/rxe_comp.c +++ b/drivers/infiniband/sw/rxe/rxe_comp.c @@ -352,11 +352,14 @@ static inline enum comp_state do_read(struct rxe_qp *qp, struct rxe_pkt_info *pkt, struct rxe_send_wqe *wqe) { + struct sk_buff *skb = PKT_TO_SKB(pkt); + int skb_offset = 0; int ret; - ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, - &wqe->dma, payload_addr(pkt), - payload_size(pkt), RXE_COPY_TO_MR); + ret = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE, + &wqe->dma, payload_addr(pkt), + skb_offset, payload_size(pkt), + RXE_COPY_TO_MR); if (ret) { wqe->status = IB_WC_LOC_PROT_ERR; return COMPST_ERROR; @@ -372,13 +375,15 @@ static inline enum comp_state do_atomic(struct rxe_qp *qp, struct rxe_pkt_info *pkt, struct rxe_send_wqe *wqe) { + struct sk_buff *skb = NULL; + int skb_offset = 0; int ret; u64 atomic_orig = atmack_orig(pkt); - ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, - &wqe->dma, &atomic_orig, - sizeof(u64), RXE_COPY_TO_MR); + ret = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE, + &wqe->dma, &atomic_orig, + skb_offset, sizeof(u64), RXE_COPY_TO_MR); if (ret) { wqe->status = IB_WC_LOC_PROT_ERR; return COMPST_ERROR; diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 4c30ffaccc92..dbead759123d 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -78,8 +78,9 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length, enum rxe_mr_copy_op op); int rxe_num_dma_frags(const struct rxe_pd *pd, const 
struct rxe_dma_info *dma, int length); -int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma, - void *addr, int length, enum rxe_mr_copy_op op); +int rxe_copy_dma_data(struct sk_buff *skb, struct rxe_pd *pd, int access, + struct rxe_dma_info *dma, void *addr, + int skb_offset, int length, enum rxe_mr_copy_op op); void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length); struct rxe_mr *lookup_mr(const struct rxe_pd *pd, int access, u32 key, enum rxe_mr_lookup_type type); diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 99d0b5afefc3..6fe5bbe43a60 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -587,100 +587,84 @@ int rxe_num_dma_frags(const struct rxe_pd *pd, const struct rxe_dma_info *dma, return num_frags; } -/* copy data in or out of a wqe, i.e. sg list - * under the control of a dma descriptor +/** + * rxe_copy_dma_data() - transfer data between a packet and a wqe + * @skb: packet buffer (FRAG MR only) + * @pd: PD which MRs must match + * @access: access permission for MRs in sge (TO MR only) + * @dma: dma info from a wqe + * @addr: payload address in packet (TO/FROM MR only) + * @skb_offset: offset of data in skb (RXE_FRAG_TO_MR only) + * @length: payload length + * @op: copy operation (RXE_COPY_TO/FROM_MR or RXE_FRAG_TO/FROM_MR) + * + * Iterate over scatter/gather list in dma info starting from the + * current location until the payload length is used up and for each + * entry copy or build a frag list referencing the MR obtained from + * the lkey in the sge. This routine is called once for each packet + * sent or received to/from the wqe. 
+ * + * Returns: 0 on success or an error */ -int copy_data( - struct rxe_pd *pd, - int access, - struct rxe_dma_info *dma, - void *addr, - int length, - enum rxe_mr_copy_op op) +int rxe_copy_dma_data(struct sk_buff *skb, struct rxe_pd *pd, int access, + struct rxe_dma_info *dma, void *addr, + int skb_offset, int length, enum rxe_mr_copy_op op) { - int bytes; - struct rxe_sge *sge = &dma->sge[dma->cur_sge]; - int offset = dma->sge_offset; - int resid = dma->resid; - struct rxe_mr *mr = NULL; - u64 iova; - int err; + struct rxe_sge *sge = &dma->sge[dma->cur_sge]; + int buf_offset = dma->sge_offset; + int resid = dma->resid; + struct rxe_mr *mr = NULL; + int bytes; + u64 iova; + int err = 0; if (length == 0) return 0; - if (length > resid) { - err = -EINVAL; - goto err2; - } - - if (sge->length && (offset < sge->length)) { - mr = lookup_mr(pd, access, sge->lkey, RXE_LOOKUP_LOCAL); - if (!mr) { - err = -EINVAL; - goto err1; - } - } + if (length > resid) + return -EINVAL; while (length > 0) { - bytes = length; - - if (offset >= sge->length) { - if (mr) { + if (buf_offset >= sge->length) { + if (mr) rxe_put(mr); - mr = NULL; - } + sge++; dma->cur_sge++; - offset = 0; - - if (dma->cur_sge >= dma->num_sge) { - err = -ENOSPC; - goto err2; - } + buf_offset = 0; - if (sge->length) { - mr = lookup_mr(pd, access, sge->lkey, - RXE_LOOKUP_LOCAL); - if (!mr) { - err = -EINVAL; - goto err1; - } - } else { + if (dma->cur_sge >= dma->num_sge) + return -ENOSPC; + if (!sge->length) continue; - } } - if (bytes > sge->length - offset) - bytes = sge->length - offset; + mr = lookup_mr(pd, access, sge->lkey, RXE_LOOKUP_LOCAL); + if (!mr) + return -EINVAL; + bytes = min_t(int, length, sge->length - buf_offset); if (bytes > 0) { - iova = sge->addr + offset; - - err = rxe_copy_mr_data(NULL, mr, iova, addr, - 0, bytes, op); + iova = sge->addr + buf_offset; + err = rxe_copy_mr_data(skb, mr, iova, addr, + skb_offset, bytes, op); if (err) - goto err2; + goto err_put; - offset += bytes; - resid -= 
bytes; - length -= bytes; - addr += bytes; + addr += bytes; + buf_offset += bytes; + skb_offset += bytes; + resid -= bytes; + length -= bytes; } } - dma->sge_offset = offset; - dma->resid = resid; + dma->sge_offset = buf_offset; + dma->resid = resid; +err_put: if (mr) rxe_put(mr); - - return 0; - -err2: - if (mr) - rxe_put(mr); -err1: return err; } diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index b111a6ddf66c..ea0132797613 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -438,8 +438,10 @@ static void rxe_init_roce_hdrs(struct rxe_qp *qp, struct rxe_send_wqe *wqe, } static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe, - struct rxe_pkt_info *pkt, u32 payload) + struct rxe_pkt_info *pkt, u32 payload, + struct sk_buff *skb) { + int skb_offset = 0; void *data; int err = 0; @@ -449,8 +451,9 @@ static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe, wqe->dma.resid -= payload; wqe->dma.sge_offset += payload; } else { - err = copy_data(qp->pd, 0, &wqe->dma, payload_addr(pkt), - payload, RXE_COPY_FROM_MR); + err = rxe_copy_dma_data(skb, qp->pd, 0, &wqe->dma, + payload_addr(pkt), skb_offset, + payload, RXE_COPY_FROM_MR); } return err; @@ -495,7 +498,7 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp, rxe_init_roce_hdrs(qp, wqe, pkt, pad); if (pkt->mask & RXE_WRITE_OR_SEND_MASK) { - err = rxe_init_payload(qp, wqe, pkt, payload); + err = rxe_init_payload(qp, wqe, pkt, payload, skb); if (err) goto err_out; } diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index 5f00477544fa..589306de7647 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -521,10 +521,13 @@ static enum resp_states check_rkey(struct rxe_qp *qp, static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr, int data_len) { + struct sk_buff *skb = NULL; + int skb_offset = 0; int err; 
- err = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma, - data_addr, data_len, RXE_COPY_TO_MR); + err = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE, + &qp->resp.wqe->dma, data_addr, + skb_offset, data_len, RXE_COPY_TO_MR); if (unlikely(err)) return (err == -ENOSPC) ? RESPST_ERR_LENGTH : RESPST_ERR_MALFORMED_WQE;

From patchwork Mon Oct 31 20:28:00 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13026334
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 11/18] RDMA/rxe: Replace rxe by qp as a parameter
Date: Mon, 31 Oct 2022 15:28:00 -0500
Message-Id: <20221031202805.19138-11-rpearsonhpe@gmail.com>
In-Reply-To: <20221031202805.19138-1-rpearsonhpe@gmail.com>
References: <20221031202805.19138-1-rpearsonhpe@gmail.com>

Replace the rxe parameter with qp in rxe_init_packet(). This will allow some simplification. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_loc.h | 2 +- drivers/infiniband/sw/rxe/rxe_net.c | 3 ++- drivers/infiniband/sw/rxe/rxe_req.c | 2 +- drivers/infiniband/sw/rxe/rxe_resp.c | 3 +-- 4 files changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index dbead759123d..4e5fbc33277d 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -100,7 +100,7 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey); void rxe_mw_cleanup(struct rxe_pool_elem *elem); /* rxe_net.c */ -struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av, +struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av, struct rxe_pkt_info *pkt); int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt, struct sk_buff *skb); diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c index 1e4456f5cda2..faabc444d546 100644 --- a/drivers/infiniband/sw/rxe/rxe_net.c +++ b/drivers/infiniband/sw/rxe/rxe_net.c @@ -442,9 +442,10 @@ int
rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, return err; } -struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av, +struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av, struct rxe_pkt_info *pkt) { + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); unsigned int hdr_len; struct sk_buff *skb = NULL; struct net_device *ndev; diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index ea0132797613..0a4b8825bd55 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -491,7 +491,7 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp, pad + RXE_ICRC_SIZE; /* init skb */ - skb = rxe_init_packet(rxe, av, pkt); + skb = rxe_init_packet(qp, av, pkt); if (unlikely(!skb)) goto err_out; diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index 589306de7647..8503d22f9114 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -665,7 +665,6 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, u32 psn, u8 syndrome) { - struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct sk_buff *skb; int paylen; int pad; @@ -680,7 +679,7 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, ack->mask = rxe_opcode[opcode].mask; ack->psn = psn; - skb = rxe_init_packet(rxe, &qp->pri_av, ack); + skb = rxe_init_packet(qp, &qp->pri_av, ack); if (!skb) return NULL;

From patchwork Mon Oct 31 20:28:01 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13026336
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 12/18] RDMA/rxe: Extend rxe_init_packet() to support frags
Date: Mon, 31 Oct 2022 15:28:01 -0500
Message-Id: <20221031202805.19138-12-rpearsonhpe@gmail.com>
In-Reply-To: <20221031202805.19138-1-rpearsonhpe@gmail.com>
References: <20221031202805.19138-1-rpearsonhpe@gmail.com>

Add a subroutine rxe_can_use_sg() to determine if a packet is a candidate for a fragmented skb. Add a global variable rxe_use_sg to control whether to support nonlinear skbs. Modify rxe_init_packet() to test if the packet should use a fragmented skb. Fix up calls to rxe_init_packet() to use the new API but disable creating nonlinear skbs for now. This is in preparation for using fragmented skbs.
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe.c | 3 ++ drivers/infiniband/sw/rxe/rxe.h | 3 ++ drivers/infiniband/sw/rxe/rxe_loc.h | 2 +- drivers/infiniband/sw/rxe/rxe_mr.c | 12 +++-- drivers/infiniband/sw/rxe/rxe_net.c | 79 +++++++++++++++++++++++++--- drivers/infiniband/sw/rxe/rxe_req.c | 2 +- drivers/infiniband/sw/rxe/rxe_resp.c | 7 ++- 7 files changed, 92 insertions(+), 16 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index 51daac5c4feb..388d8103ec20 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -13,6 +13,9 @@ MODULE_AUTHOR("Bob Pearson, Frank Zago, John Groves, Kamal Heib"); MODULE_DESCRIPTION("Soft RDMA transport"); MODULE_LICENSE("Dual BSD/GPL"); +/* if true allow using fragmented skbs */ +bool rxe_use_sg; + /* free resources for a rxe device all objects created for this device must * have been destroyed */ diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h index 30fbdf3bc76a..c78fb497d9c3 100644 --- a/drivers/infiniband/sw/rxe/rxe.h +++ b/drivers/infiniband/sw/rxe/rxe.h @@ -30,6 +30,9 @@ #include "rxe_verbs.h" #include "rxe_loc.h" +/* if true allow using fragmented skbs */ +extern bool rxe_use_sg; + /* * Version 1 and Version 2 are identical on 64 bit machines, but on 32 bit * machines Version 2 has a different struct layout. 
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 4e5fbc33277d..12fd5811cd79 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -101,7 +101,7 @@ void rxe_mw_cleanup(struct rxe_pool_elem *elem); /* rxe_net.c */ struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av, - struct rxe_pkt_info *pkt); + struct rxe_pkt_info *pkt, bool *is_frag); int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt, struct sk_buff *skb); int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c index 6fe5bbe43a60..cf538d97c7a5 100644 --- a/drivers/infiniband/sw/rxe/rxe_mr.c +++ b/drivers/infiniband/sw/rxe/rxe_mr.c @@ -541,7 +541,7 @@ int rxe_num_dma_frags(const struct rxe_pd *pd, const struct rxe_dma_info *dma, struct rxe_mr *mr = NULL; int bytes; u64 iova; - int ret; + int nf; int num_frags = 0; if (length == 0) @@ -572,18 +572,22 @@ int rxe_num_dma_frags(const struct rxe_pd *pd, const struct rxe_dma_info *dma, bytes = min_t(int, length, sge->length - buf_offset); if (bytes > 0) { iova = sge->addr + buf_offset; - ret = rxe_num_mr_frags(mr, iova, length); - if (ret < 0) { + nf = rxe_num_mr_frags(mr, iova, length); + if (nf < 0) { rxe_put(mr); - return ret; + return nf; } + num_frags += nf; buf_offset += bytes; resid -= bytes; length -= bytes; } } + if (mr) + rxe_put(mr); + return num_frags; } diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c index faabc444d546..c6d8f5c80562 100644 --- a/drivers/infiniband/sw/rxe/rxe_net.c +++ b/drivers/infiniband/sw/rxe/rxe_net.c @@ -442,8 +442,60 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, return err; } +/** + * rxe_can_use_sg() - determine if packet is a candidate for fragmenting + * @rxe: the rxe device + * @pkt: packet info + * + * Limit to packets with: + * rxe_use_sg set + * qp is RC + * ndev 
supports SG + #sges less than #frags for sends + * + * Returns: true if conditions are met else false + */ +static bool rxe_can_use_sg(struct rxe_qp *qp, struct rxe_pkt_info *pkt) +{ + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); + int length = pkt->paylen - rxe_opcode[pkt->opcode].length + - RXE_ICRC_SIZE; + int nf; + + if (!rxe_use_sg) + return false; + if (qp_type(pkt->qp) != IB_QPT_RC) + return false; + if (!(rxe->ndev->features & NETIF_F_SG)) + return false; + + /* check we don't have a pathological sge list with lots of + * short segments. Recall we need one extra frag for icrc. + */ + if (pkt->mask & RXE_SEND_MASK) { + nf = rxe_num_dma_frags(qp->pd, &pkt->wqe->dma, length); + return (nf >= 0 && nf <= MAX_SKB_FRAGS - 1) ? true : false; + } + + return true; +} + +#define RXE_MIN_SKB_SIZE (256) + +/** + * rxe_init_packet - allocate and initialize a new skb + * @qp: the queue pair + * @av: remote address vector + * @pkt: packet info + * @frag: optional return value for fragmented skb + * on call if frag == NULL do not use fragmented skb + * on return if not NULL set *frag to true + * if packet will be fragmented else false + * + * Returns: an skb on success else NULL + */ struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av, - struct rxe_pkt_info *pkt) + struct rxe_pkt_info *pkt, bool *frag) { struct rxe_dev *rxe = to_rdev(qp->ibqp.device); unsigned int hdr_len; @@ -451,6 +503,7 @@ struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av, struct net_device *ndev; const struct ib_gid_attr *attr; const int port_num = 1; + int skb_size; attr = rdma_get_gid_attr(&rxe->ib_dev, port_num, av->grh.sgid_index); if (IS_ERR(attr)) @@ -469,9 +522,19 @@ struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av, rcu_read_unlock(); goto out; } - skb = alloc_skb(pkt->paylen + hdr_len + LL_RESERVED_SPACE(ndev), - GFP_ATOMIC); + skb_size = LL_RESERVED_SPACE(ndev) + hdr_len + pkt->paylen; + if (frag) { + if (rxe_use_sg && (skb_size >
RXE_MIN_SKB_SIZE) && + rxe_can_use_sg(qp, pkt)) { + skb_size = RXE_MIN_SKB_SIZE; + *frag = true; + } else { + *frag = false; + } + } + + skb = alloc_skb(skb_size, GFP_ATOMIC); if (unlikely(!skb)) { rcu_read_unlock(); goto out; @@ -480,7 +543,7 @@ struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av, skb_reserve(skb, hdr_len + LL_RESERVED_SPACE(ndev)); /* FIXME: hold reference to this netdev until life of this skb. */ - skb->dev = ndev; + skb->dev = ndev; rcu_read_unlock(); if (av->network_type == RXE_NETWORK_TYPE_IPV4) @@ -488,10 +551,10 @@ struct sk_buff *rxe_init_packet(struct rxe_qp *qp, struct rxe_av *av, else skb->protocol = htons(ETH_P_IPV6); - pkt->rxe = rxe; - pkt->port_num = port_num; - pkt->hdr = skb_put(skb, pkt->paylen); - pkt->mask |= RXE_GRH_MASK; + if (frag && *frag) + pkt->hdr = skb_put(skb, rxe_opcode[pkt->opcode].length); + else + pkt->hdr = skb_put(skb, pkt->paylen); out: rdma_put_gid_attr(attr); diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 0a4b8825bd55..71a65f2a5d6d 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -491,7 +491,7 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp, pad + RXE_ICRC_SIZE; /* init skb */ - skb = rxe_init_packet(qp, av, pkt); + skb = rxe_init_packet(qp, av, pkt, NULL); if (unlikely(!skb)) goto err_out; diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index 8503d22f9114..8868415b71b6 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -665,6 +665,7 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, u32 psn, u8 syndrome) { + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct sk_buff *skb; int paylen; int pad; @@ -672,14 +673,16 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, pad = (-payload) & 0x3; paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE; - ack->paylen = 
paylen; + ack->rxe = rxe; ack->qp = qp; ack->opcode = opcode; ack->mask = rxe_opcode[opcode].mask; + ack->paylen = paylen; ack->psn = psn; + ack->port_num = 1; - skb = rxe_init_packet(qp, &qp->pri_av, ack); + skb = rxe_init_packet(qp, &qp->pri_av, ack, NULL); if (!skb) return NULL; From patchwork Mon Oct 31 20:28:02 2022 X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 13026337 X-Patchwork-Delegate: jgg@ziepe.ca From: Bob Pearson To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v2 13/18] RDMA/rxe: Extend rxe_icrc.c to support frags Date: Mon, 31 Oct 2022 15:28:02 -0500 Message-Id: <20221031202805.19138-13-rpearsonhpe@gmail.com> In-Reply-To: <20221031202805.19138-1-rpearsonhpe@gmail.com> References: <20221031202805.19138-1-rpearsonhpe@gmail.com> List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Extend the subroutines rxe_icrc_generate() and rxe_icrc_check() to support skb frags. This is in preparation for supporting fragmented skbs. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_icrc.c | 65 ++++++++++++++++++++++++---- drivers/infiniband/sw/rxe/rxe_net.c | 55 ++++++++++++++++++----- drivers/infiniband/sw/rxe/rxe_recv.c | 1 + 3 files changed, 100 insertions(+), 21 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_icrc.c b/drivers/infiniband/sw/rxe/rxe_icrc.c index 46bb07c5c4df..699730a13c92 100644 --- a/drivers/infiniband/sw/rxe/rxe_icrc.c +++ b/drivers/infiniband/sw/rxe/rxe_icrc.c @@ -63,7 +63,7 @@ static __be32 rxe_crc32(struct rxe_dev *rxe, __be32 crc, void *next, size_t len) /** * rxe_icrc_hdr() - Compute the partial ICRC for the network and transport - * headers of a packet. + * headers of a packet. * @skb: packet buffer * @pkt: packet information * @@ -129,6 +129,56 @@ static __be32 rxe_icrc_hdr(struct sk_buff *skb, struct rxe_pkt_info *pkt) return crc; } +/** + * rxe_icrc_payload() - Compute the ICRC for a packet payload and also + * compute the address of the icrc in the packet. 
+ * @skb: packet buffer + * @pkt: packet information + * @icrc: current icrc, i.e. computed over the headers so far + * @icrcp: returned pointer to the icrc location in the skb + * + * Return: the icrc extended over the packet payload + */ +static __be32 rxe_icrc_payload(struct sk_buff *skb, struct rxe_pkt_info *pkt, + __be32 icrc, __be32 **icrcp) +{ + struct skb_shared_info *shinfo = skb_shinfo(skb); + skb_frag_t *frag; + u8 *addr; + int hdr_len; + int len; + int i; + + /* handle any payload left in the linear buffer */ + hdr_len = rxe_opcode[pkt->opcode].length; + addr = pkt->hdr + hdr_len; + len = skb_tail_pointer(skb) - skb_transport_header(skb) + - sizeof(struct udphdr) - hdr_len; + if (!shinfo->nr_frags) { + len -= RXE_ICRC_SIZE; + *icrcp = (__be32 *)(addr + len); + } + if (len > 0) + icrc = rxe_crc32(pkt->rxe, icrc, payload_addr(pkt), len); + WARN_ON(len < 0); + + /* handle any payload in frags */ + for (i = 0; i < shinfo->nr_frags; i++) { + frag = &shinfo->frags[i]; + addr = page_to_virt(frag->bv_page) + frag->bv_offset; + len = frag->bv_len; + if (i == shinfo->nr_frags - 1) { + len -= RXE_ICRC_SIZE; + *icrcp = (__be32 *)(addr + len); + } + if (len > 0) + icrc = rxe_crc32(pkt->rxe, icrc, addr, len); + WARN_ON(len < 0); + } + + return icrc; +} + /** * rxe_icrc_check() - Compute ICRC for a packet and compare to the ICRC * delivered in the packet. 
@@ -143,13 +193,11 @@ int rxe_icrc_check(struct sk_buff *skb, struct rxe_pkt_info *pkt) __be32 pkt_icrc; __be32 icrc; - icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE); - pkt_icrc = *icrcp; - icrc = rxe_icrc_hdr(skb, pkt); - icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt), - payload_size(pkt) + bth_pad(pkt)); + icrc = rxe_icrc_payload(skb, pkt, icrc, &icrcp); + icrc = ~icrc; + pkt_icrc = *icrcp; if (unlikely(icrc != pkt_icrc)) return -EINVAL; @@ -167,9 +215,8 @@ void rxe_icrc_generate(struct sk_buff *skb, struct rxe_pkt_info *pkt) __be32 *icrcp; __be32 icrc; - icrcp = (__be32 *)(pkt->hdr + pkt->paylen - RXE_ICRC_SIZE); icrc = rxe_icrc_hdr(skb, pkt); - icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt), - payload_size(pkt) + bth_pad(pkt)); + icrc = rxe_icrc_payload(skb, pkt, icrc, &icrcp); + *icrcp = ~icrc; } diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c index c6d8f5c80562..395e9d7d81c3 100644 --- a/drivers/infiniband/sw/rxe/rxe_net.c +++ b/drivers/infiniband/sw/rxe/rxe_net.c @@ -134,32 +134,51 @@ static int rxe_udp_encap_recv(struct sock *sk, struct sk_buff *skb) struct rxe_dev *rxe; struct net_device *ndev = skb->dev; struct rxe_pkt_info *pkt = SKB_TO_PKT(skb); + u8 opcode; + u8 buf[1]; + u8 *p; - /* takes a reference on rxe->ib_dev - * drop when skb is freed - */ + /* Takes a reference on rxe->ib_dev. 
Drop when skb is freed */ rxe = rxe_get_dev_from_net(ndev); if (!rxe && is_vlan_dev(ndev)) rxe = rxe_get_dev_from_net(vlan_dev_real_dev(ndev)); if (!rxe) - goto drop; + goto err_drop; - if (skb_linearize(skb)) { - ib_device_put(&rxe->ib_dev); - goto drop; + /* Get bth opcode out of skb */ + p = skb_header_pointer(skb, sizeof(struct udphdr), 1, buf); + if (!p) + goto err_device_put; + opcode = *p; + + /* If using fragmented skbs make sure roce headers + * are in linear buffer else make skb linear + */ + if (rxe_use_sg && skb_is_nonlinear(skb)) { + int delta = rxe_opcode[opcode].length - + (skb_headlen(skb) - sizeof(struct udphdr)); + + if (delta > 0 && !__pskb_pull_tail(skb, delta)) + goto err_device_put; + } else { + if (skb_linearize(skb)) + goto err_device_put; } udph = udp_hdr(skb); pkt->rxe = rxe; pkt->port_num = 1; pkt->hdr = (u8 *)(udph + 1); - pkt->mask = RXE_GRH_MASK; + pkt->mask = rxe_opcode[opcode].mask | RXE_GRH_MASK; pkt->paylen = be16_to_cpu(udph->len) - sizeof(*udph); rxe_rcv(skb); return 0; -drop: + +err_device_put: + ib_device_put(&rxe->ib_dev); +err_drop: kfree_skb(skb); return 0; @@ -385,21 +404,32 @@ static int rxe_send(struct sk_buff *skb, struct rxe_pkt_info *pkt) */ static int rxe_loopback(struct sk_buff *skb, struct rxe_pkt_info *pkt) { - memcpy(SKB_TO_PKT(skb), pkt, sizeof(*pkt)); + struct rxe_pkt_info *newpkt; + int err; + /* make loopback line up with rxe_udp_encap_recv */ if (skb->protocol == htons(ETH_P_IP)) skb_pull(skb, sizeof(struct iphdr)); else skb_pull(skb, sizeof(struct ipv6hdr)); + skb_reset_transport_header(skb); + + newpkt = SKB_TO_PKT(skb); + memcpy(newpkt, pkt, sizeof(*newpkt)); + newpkt->hdr = skb_transport_header(skb) + sizeof(struct udphdr); if (WARN_ON(!ib_device_try_get(&pkt->rxe->ib_dev))) { - kfree_skb(skb); - return -EIO; + err = -EINVAL; + goto drop; } rxe_rcv(skb); return 0; + +drop: + kfree_skb(skb); + return err; } int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, @@ -415,6 +445,7 @@ int 
rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, goto drop; } + /* skb->data points at IP header */ rxe_icrc_generate(skb, pkt); if (pkt->mask & RXE_LOOPBACK_MASK) diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index 434a693cd4a5..ba786e5c6266 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -329,6 +329,7 @@ void rxe_rcv(struct sk_buff *skb) if (unlikely(err)) goto drop; + /* skb->data points at UDP header */ err = rxe_icrc_check(skb, pkt); if (unlikely(err)) goto drop; From patchwork Mon Oct 31 20:28:03 2022 X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 13026338 X-Patchwork-Delegate: jgg@ziepe.ca From: Bob Pearson To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v2 14/18] RDMA/rxe: Extend rxe_init_req_packet() for frags Date: Mon, 31 Oct 2022 15:28:03 -0500 Message-Id: <20221031202805.19138-14-rpearsonhpe@gmail.com> In-Reply-To: <20221031202805.19138-1-rpearsonhpe@gmail.com> References: <20221031202805.19138-1-rpearsonhpe@gmail.com> List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Add code to rxe_init_req_packet() to allocate space for the pad and icrc if the skb is fragmented. This is in preparation for supporting fragmented skbs. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_loc.h | 9 +++- drivers/infiniband/sw/rxe/rxe_req.c | 74 ++++++++++++++++++++++++----- 2 files changed, 71 insertions(+), 12 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 12fd5811cd79..cab6acad7a83 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -179,8 +179,15 @@ void rxe_srq_cleanup(struct rxe_pool_elem *elem); void rxe_dealloc(struct ib_device *ib_dev); -int rxe_completer(void *arg); +/* rxe_req.c */ +int rxe_prepare_pad_icrc(struct rxe_pkt_info *pkt, struct sk_buff *skb, + int payload, bool frag); int rxe_requester(void *arg); + +/* rxe_comp.c */ +int rxe_completer(void *arg); + +/* rxe_resp.c */ int rxe_responder(void *arg); /* rxe_icrc.c */ diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index 71a65f2a5d6d..984e3e957aef 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -438,27 +438,79 @@ 
static void rxe_init_roce_hdrs(struct rxe_qp *qp, struct rxe_send_wqe *wqe, } static int rxe_init_payload(struct rxe_qp *qp, struct rxe_send_wqe *wqe, - struct rxe_pkt_info *pkt, u32 payload, - struct sk_buff *skb) + struct rxe_pkt_info *pkt, int pad, u32 payload, + struct sk_buff *skb, bool frag) { + int len = skb_tailroom(skb); + int tot_len = payload + pad + RXE_ICRC_SIZE; + int access = 0; int skb_offset = 0; + int op; + void *addr; void *data; int err = 0; if (wqe->wr.send_flags & IB_SEND_INLINE) { + if (WARN_ON(frag)) + return -EINVAL; + if (len < tot_len) + return -EINVAL; data = &wqe->dma.inline_data[wqe->dma.sge_offset]; memcpy(payload_addr(pkt), data, payload); wqe->dma.resid -= payload; wqe->dma.sge_offset += payload; } else { - err = rxe_copy_dma_data(skb, qp->pd, 0, &wqe->dma, - payload_addr(pkt), skb_offset, - payload, RXE_COPY_FROM_MR); + op = frag ? RXE_FRAG_FROM_MR : RXE_COPY_FROM_MR; + addr = frag ? NULL : payload_addr(pkt); + err = rxe_copy_dma_data(skb, qp->pd, access, &wqe->dma, + addr, skb_offset, payload, op); } return err; } +/** + * rxe_prepare_pad_icrc() - Alloc space if fragmented and init pad and icrc + * @pkt: packet info + * @skb: packet buffer + * @payload: roce payload + * @frag: true if skb is fragmented + * + * Returns: 0 on success else an error + */ +int rxe_prepare_pad_icrc(struct rxe_pkt_info *pkt, struct sk_buff *skb, + int payload, bool frag) +{ + struct rxe_phys_buf dmabuf; + size_t offset; + u64 iova; + u8 *addr; + int err = 0; + int pad = (-payload) & 0x3; + + if (frag) { + /* allocate bytes at the end of the skb linear buffer + * and build a frag pointing at it + */ + WARN_ON((skb->end - skb->tail) < 8); + addr = skb_end_pointer(skb) - RXE_ICRC_SIZE - pad; + iova = (uintptr_t)addr; + dmabuf.addr = iova & PAGE_MASK; + offset = iova & ~PAGE_MASK; + err = rxe_add_frag(skb, &dmabuf, pad + RXE_ICRC_SIZE, offset); + if (err) + goto err; + } else { + addr = payload_addr(pkt) + payload; + } + + /* init pad and icrc to zero */ + 
memset(addr, 0, pad + RXE_ICRC_SIZE); + +err: + return err; +} + static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe, int opcode, u32 payload, @@ -468,9 +520,9 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp, struct sk_buff *skb = NULL; struct rxe_av *av; struct rxe_ah *ah = NULL; - void *padp; int pad; int err = -EINVAL; + bool frag = false; pkt->rxe = rxe; pkt->opcode = opcode; @@ -498,15 +550,15 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp, rxe_init_roce_hdrs(qp, wqe, pkt, pad); if (pkt->mask & RXE_WRITE_OR_SEND_MASK) { - err = rxe_init_payload(qp, wqe, pkt, payload, skb); + err = rxe_init_payload(qp, wqe, pkt, pad, payload, skb, frag); if (err) goto err_out; } - if (pad) { - padp = payload_addr(pkt) + payload; - memset(padp, 0, pad); - } + /* handle pad and icrc */ + err = rxe_prepare_pad_icrc(pkt, skb, payload, frag); + if (err) + goto err_out; /* IP and UDP network headers */ err = rxe_prepare(av, pkt, skb); From patchwork Mon Oct 31 20:28:04 2022 X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 13026339 X-Patchwork-Delegate: jgg@ziepe.ca From: Bob Pearson To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v2 15/18] RDMA/rxe: Extend response packets for frags Date: Mon, 31 Oct 2022 15:28:04 -0500 Message-Id: <20221031202805.19138-15-rpearsonhpe@gmail.com> In-Reply-To: <20221031202805.19138-1-rpearsonhpe@gmail.com> References: <20221031202805.19138-1-rpearsonhpe@gmail.com> List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Extend prepare_ack_packet(), read_reply() and send_common_ack() in rxe_resp.c to support fragmented skbs. Adjust calls to these routines for the changed API. This is in preparation for using fragmented skbs. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_resp.c | 89 +++++++++++++++++----------- 1 file changed, 55 insertions(+), 34 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index 8868415b71b6..905e19ee9ca5 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -660,10 +660,8 @@ static enum resp_states atomic_reply(struct rxe_qp *qp, static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, struct rxe_pkt_info *ack, - int opcode, - int payload, - u32 psn, - u8 syndrome) + int opcode, int payload, u32 psn, + u8 syndrome, bool *fragp) { struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct sk_buff *skb; @@ -682,7 +680,7 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp, ack->psn = psn; ack->port_num = 1; - skb = rxe_init_packet(qp, &qp->pri_av, ack, NULL); + skb = rxe_init_packet(qp, &qp->pri_av, ack, fragp); if (!skb) return NULL; @@ -698,12 +696,14 @@ static struct 
sk_buff *prepare_ack_packet(struct rxe_qp *qp, atmack_set_orig(ack, qp->resp.res->atomic.orig_val); err = rxe_prepare(&qp->pri_av, ack, skb); - if (err) { - kfree_skb(skb); - return NULL; - } + if (err) + goto err_free_skb; return skb; + +err_free_skb: + kfree_skb(skb); + return NULL; } /** @@ -775,6 +775,8 @@ static enum resp_states read_reply(struct rxe_qp *qp, struct resp_res *res = qp->resp.res; struct rxe_mr *mr; int skb_offset = 0; + bool frag; + enum rxe_mr_copy_op op; if (!res) { res = rxe_prepare_res(qp, req_pkt, RXE_READ_MASK); @@ -787,8 +789,10 @@ static enum resp_states read_reply(struct rxe_qp *qp, qp->resp.mr = NULL; } else { mr = rxe_recheck_mr(qp, res->read.rkey); - if (!mr) - return RESPST_ERR_RKEY_VIOLATION; + if (!mr) { + state = RESPST_ERR_RKEY_VIOLATION; + goto err_out; + } } if (res->read.resid <= mtu) @@ -797,8 +801,10 @@ static enum resp_states read_reply(struct rxe_qp *qp, opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST; } else { mr = rxe_recheck_mr(qp, res->read.rkey); - if (!mr) - return RESPST_ERR_RKEY_VIOLATION; + if (!mr) { + state = RESPST_ERR_RKEY_VIOLATION; + goto err_out; + } if (res->read.resid > mtu) opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE; @@ -806,35 +812,35 @@ static enum resp_states read_reply(struct rxe_qp *qp, opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_LAST; } - res->state = rdatm_res_state_next; - payload = min_t(int, res->read.resid, mtu); skb = prepare_ack_packet(qp, &ack_pkt, opcode, payload, - res->cur_psn, AETH_ACK_UNLIMITED); - if (!skb) - return RESPST_ERR_RNR; + res->cur_psn, AETH_ACK_UNLIMITED, &frag); + if (!skb) { + state = RESPST_ERR_RNR; + goto err_put_mr; + } + op = frag ? 
RXE_FRAG_FROM_MR : RXE_COPY_FROM_MR; err = rxe_copy_mr_data(skb, mr, res->read.va, payload_addr(&ack_pkt), - skb_offset, payload, RXE_COPY_FROM_MR); + skb_offset, payload, op); if (err) { - kfree_skb(skb); - rxe_put(mr); - return RESPST_ERR_RKEY_VIOLATION; + state = RESPST_ERR_RKEY_VIOLATION; + goto err_free_skb; } - if (mr) - rxe_put(mr); - - if (bth_pad(&ack_pkt)) { - u8 *pad = payload_addr(&ack_pkt) + payload; - - memset(pad, 0, bth_pad(&ack_pkt)); + err = rxe_prepare_pad_icrc(&ack_pkt, skb, payload, frag); + if (err) { + state = RESPST_ERR_RNR; + goto err_free_skb; } err = rxe_xmit_packet(qp, &ack_pkt, skb); - if (err) - return RESPST_ERR_RNR; + if (err) { + /* rxe_xmit_packet will consume the packet */ + state = RESPST_ERR_RNR; + goto err_put_mr; + } res->read.va += payload; res->read.resid -= payload; @@ -851,6 +857,16 @@ static enum resp_states read_reply(struct rxe_qp *qp, state = RESPST_CLEANUP; } + /* keep these after all error exits */ + res->state = rdatm_res_state_next; + rxe_put(mr); + return state; + +err_free_skb: + kfree_skb(skb); +err_put_mr: + rxe_put(mr); +err_out: return state; } @@ -1041,14 +1057,19 @@ static int send_common_ack(struct rxe_qp *qp, u8 syndrome, u32 psn, int opcode, const char *msg) { int err; - struct rxe_pkt_info ack_pkt; + struct rxe_pkt_info ack; struct sk_buff *skb; + int payload = 0; - skb = prepare_ack_packet(qp, &ack_pkt, opcode, 0, psn, syndrome); + skb = prepare_ack_packet(qp, &ack, opcode, payload, + psn, syndrome, NULL); if (!skb) return -ENOMEM; - err = rxe_xmit_packet(qp, &ack_pkt, skb); + /* doesn't fail if frag == false */ + (void)rxe_prepare_pad_icrc(&ack, skb, payload, false); + + err = rxe_xmit_packet(qp, &ack, skb); if (err) pr_err_ratelimited("Failed sending %s\n", msg); From patchwork Mon Oct 31 20:28:05 2022 X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 13026340 X-Patchwork-Delegate: jgg@ziepe.ca From: Bob Pearson To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v2 16/18] RDMA/rxe: Extend send/write_data_in() for frags Date: Mon, 31 Oct 2022 15:28:05 -0500 Message-Id: <20221031202805.19138-16-rpearsonhpe@gmail.com> In-Reply-To: <20221031202805.19138-1-rpearsonhpe@gmail.com> References: <20221031202805.19138-1-rpearsonhpe@gmail.com> List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Extend send_data_in() and write_data_in() in rxe_resp.c to support fragmented received skbs. This is in preparation for using fragmented skbs. 
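[Editor's note] Two idioms recur throughout this series: each data-in path picks a copy operation keyed on skb_shinfo(skb)->nr_frags, and every packet builder computes the RoCE pad as (-payload) & 0x3. A standalone userspace sketch of both (the enum values and helper names below are illustrative stand-ins, not the in-kernel rxe definitions):

```c
#include <assert.h>

/* Illustrative stand-ins for the rxe copy-op values used in this
 * series (RXE_COPY_TO_MR / RXE_FRAG_TO_MR).
 */
enum mr_copy_op {
	COPY_TO_MR,	/* payload is in the skb linear buffer: memcpy it */
	FRAG_TO_MR	/* payload is in page frags: reference the pages */
};

/* Mirrors "op = nr_frags ? RXE_FRAG_TO_MR : RXE_COPY_TO_MR":
 * a nonlinear skb (nr_frags > 0) takes the fragment path.
 */
static enum mr_copy_op choose_copy_op(int nr_frags)
{
	return nr_frags ? FRAG_TO_MR : COPY_TO_MR;
}

/* RoCE payloads are padded to a 4-byte boundary; the series computes
 * the pad count as (-payload) & 0x3, i.e. the bytes needed to reach
 * the next multiple of four.
 */
static int pad_bytes(int payload)
{
	return (-payload) & 0x3;
}
```

Branching on nr_frags keeps a single control flow for linear and fragmented skbs; only the final copy primitive differs.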
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_resp.c | 103 +++++++++++++++++----------
 1 file changed, 65 insertions(+), 38 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 905e19ee9ca5..419e8af235aa 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -518,45 +518,89 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 	return state;
 }
 
-static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr,
-				     int data_len)
+/**
+ * rxe_send_data_in() - Copy payload data into receive buffer
+ * @qp: The queue pair
+ * @pkt: Request packet info
+ *
+ * Copy the packet payload into the receive buffer at the current offset.
+ * If a UD message also copy the IP header into the receive buffer.
+ *
+ * Returns: 0 if successful else an error resp_states value.
+ */
+static enum resp_states rxe_send_data_in(struct rxe_qp *qp,
+					 struct rxe_pkt_info *pkt)
 {
-	struct sk_buff *skb = NULL;
+	struct sk_buff *skb = PKT_TO_SKB(pkt);
+	int nr_frags = skb_shinfo(skb)->nr_frags;
+	u8 *data_addr = payload_addr(pkt);
+	int data_len = payload_size(pkt);
+	union rdma_network_hdr hdr;
+	enum rxe_mr_copy_op op;
 	int skb_offset = 0;
 	int err;
 
+	/* Per IBA for UD packets copy the IP header into the receive buffer */
+	if (qp_type(qp) == IB_QPT_UD || qp_type(qp) == IB_QPT_GSI) {
+		if (skb->protocol == htons(ETH_P_IP)) {
+			memset(&hdr.reserved, 0, sizeof(hdr.reserved));
+			memcpy(&hdr.roce4grh, ip_hdr(skb), sizeof(hdr.roce4grh));
+		} else {
+			memcpy(&hdr.ibgrh, ipv6_hdr(skb), sizeof(hdr));
+		}
+		err = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE,
+					&qp->resp.wqe->dma, &hdr, skb_offset,
+					sizeof(hdr), RXE_COPY_TO_MR);
+		if (err)
+			goto err_out;
+	}
+
+	op = nr_frags ? RXE_FRAG_TO_MR : RXE_COPY_TO_MR;
+	skb_offset = data_addr - skb_transport_header(skb);
 	err = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE,
 				&qp->resp.wqe->dma, data_addr,
-				skb_offset, data_len, RXE_COPY_TO_MR);
-	if (unlikely(err))
-		return (err == -ENOSPC) ? RESPST_ERR_LENGTH
-					: RESPST_ERR_MALFORMED_WQE;
+				skb_offset, data_len, op);
+	if (err)
+		goto err_out;
 
 	return RESPST_NONE;
+
+err_out:
+	return (err == -ENOSPC) ? RESPST_ERR_LENGTH
+				: RESPST_ERR_MALFORMED_WQE;
 }
 
-static enum resp_states write_data_in(struct rxe_qp *qp,
-				      struct rxe_pkt_info *pkt)
+/**
+ * rxe_write_data_in() - Copy payload data to iova
+ * @qp: The queue pair
+ * @pkt: Request packet info
+ *
+ * Copy the packet payload to current iova and update iova.
+ *
+ * Returns: 0 if successful else an error resp_states value.
+ */
+static enum resp_states rxe_write_data_in(struct rxe_qp *qp,
+					  struct rxe_pkt_info *pkt)
 {
 	struct sk_buff *skb = PKT_TO_SKB(pkt);
-	enum resp_states rc = RESPST_NONE;
+	int nr_frags = skb_shinfo(skb)->nr_frags;
+	u8 *data_addr = payload_addr(pkt);
 	int data_len = payload_size(pkt);
+	enum rxe_mr_copy_op op;
+	int skb_offset;
 	int err;
-	int skb_offset = 0;
 
+	op = nr_frags ? RXE_FRAG_TO_MR : RXE_COPY_TO_MR;
+	skb_offset = data_addr - skb_transport_header(skb);
 	err = rxe_copy_mr_data(skb, qp->resp.mr, qp->resp.va + qp->resp.offset,
-			       payload_addr(pkt), skb_offset, data_len,
-			       RXE_COPY_TO_MR);
-	if (err) {
-		rc = RESPST_ERR_RKEY_VIOLATION;
-		goto out;
-	}
+			       data_addr, skb_offset, data_len, op);
+	if (err)
+		return RESPST_ERR_RKEY_VIOLATION;
 
 	qp->resp.va += data_len;
 	qp->resp.resid -= data_len;
 
-out:
-	return rc;
+	return RESPST_NONE;
 }
 
 static struct resp_res *rxe_prepare_res(struct rxe_qp *qp,
@@ -884,30 +928,13 @@ static int invalidate_rkey(struct rxe_qp *qp, u32 rkey)
 static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
 {
 	enum resp_states err;
-	struct sk_buff *skb = PKT_TO_SKB(pkt);
-	union rdma_network_hdr hdr;
 
 	if (pkt->mask & RXE_SEND_MASK) {
-		if (qp_type(qp) == IB_QPT_UD ||
-		    qp_type(qp) == IB_QPT_GSI) {
-			if (skb->protocol == htons(ETH_P_IP)) {
-				memset(&hdr.reserved, 0,
-				       sizeof(hdr.reserved));
-				memcpy(&hdr.roce4grh, ip_hdr(skb),
-				       sizeof(hdr.roce4grh));
-				err = send_data_in(qp, &hdr, sizeof(hdr));
-			} else {
-				err = send_data_in(qp, ipv6_hdr(skb),
-						   sizeof(hdr));
-			}
-			if (err)
-				return err;
-		}
-		err = send_data_in(qp, payload_addr(pkt), payload_size(pkt));
+		err = rxe_send_data_in(qp, pkt);
 		if (err)
 			return err;
 	} else if (pkt->mask & RXE_WRITE_MASK) {
-		err = write_data_in(qp, pkt);
+		err = rxe_write_data_in(qp, pkt);
 		if (err)
 			return err;
 	} else if (pkt->mask & RXE_READ_MASK) {

From patchwork Mon Oct 31 20:28:06 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13026341
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 17/18] RDMA/rxe: Extend do_read() in rxe_comp.c for frags
Date: Mon, 31 Oct 2022 15:28:06 -0500
Message-Id: <20221031202805.19138-17-rpearsonhpe@gmail.com>
In-Reply-To: <20221031202805.19138-1-rpearsonhpe@gmail.com>
References: <20221031202805.19138-1-rpearsonhpe@gmail.com>
List-ID: X-Mailing-List: linux-rdma@vger.kernel.org

Extend do_read() in rxe_comp.c to support fragmented skbs. Rename it to
rxe_do_read() and adjust its caller accordingly.
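Each of the converted routines derives the skb offset as `data_addr - skb_transport_header(skb)`: the payload's byte distance from the start of the transport headers, which a fragment-aware copy needs because payload bytes may sit in page fragments rather than at a direct kernel virtual address. A user-space sketch of that pointer arithmetic follows; the buffer and the 34/12-byte offsets are invented for illustration, and the helpers are stand-ins for skb_transport_header() and payload_addr().

```c
#include <assert.h>

/* Hypothetical linear packet buffer; offsets are made up. */
static unsigned char pkt_buf[128];

/* Stand-in for skb_transport_header(): transport headers at byte 34. */
static inline unsigned char *transport_header(void)
{
	return pkt_buf + 34;
}

/* Stand-in for payload_addr(): payload 12 bytes into the headers. */
static inline unsigned char *payload(void)
{
	return transport_header() + 12;
}

/* skb_offset = data_addr - skb_transport_header(skb): the payload's
 * byte offset relative to the transport headers, valid regardless of
 * whether those bytes later resolve into the linear area or a frag. */
static inline int payload_skb_offset(void)
{
	return (int)(payload() - transport_header());
}
```

With this offset in hand, a copy routine can walk the skb (linear area first, then fragments) to find the payload instead of dereferencing `data_addr` directly.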
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_comp.c | 40 ++++++++++++++++++----------
 1 file changed, 26 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index 3c1ecc88446d..85b3a4a6b55b 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -348,22 +348,34 @@ static inline enum comp_state check_ack(struct rxe_qp *qp,
 	return COMPST_ERROR;
 }
 
-static inline enum comp_state do_read(struct rxe_qp *qp,
-				      struct rxe_pkt_info *pkt,
-				      struct rxe_send_wqe *wqe)
+/**
+ * rxe_do_read() - Process read reply packet
+ * @qp: The queue pair
+ * @pkt: Packet info
+ * @wqe: The current work request
+ *
+ * Copy payload from incoming read reply packet into current
+ * iova.
+ *
+ * Returns: 0 on success else an error comp_state
+ */
+static inline enum comp_state rxe_do_read(struct rxe_qp *qp,
+					  struct rxe_pkt_info *pkt,
+					  struct rxe_send_wqe *wqe)
 {
 	struct sk_buff *skb = PKT_TO_SKB(pkt);
-	int skb_offset = 0;
-	int ret;
-
-	ret = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE,
-				&wqe->dma, payload_addr(pkt),
-				skb_offset, payload_size(pkt),
-				RXE_COPY_TO_MR);
-	if (ret) {
-		wqe->status = IB_WC_LOC_PROT_ERR;
+	int nr_frags = skb_shinfo(skb)->nr_frags;
+	u8 *data_addr = payload_addr(pkt);
+	int data_len = payload_size(pkt);
+	enum rxe_mr_copy_op op = nr_frags ? RXE_FRAG_TO_MR : RXE_COPY_TO_MR;
+	int skb_offset = data_addr - skb_transport_header(skb);
+	int err;
+
+	err = rxe_copy_dma_data(skb, qp->pd, IB_ACCESS_LOCAL_WRITE,
+				&wqe->dma, data_addr,
+				skb_offset, data_len, op);
+	if (err)
 		return COMPST_ERROR;
-	}
 
 	if (wqe->dma.resid == 0 && (pkt->mask & RXE_END_MASK))
 		return COMPST_COMP_ACK;
@@ -625,7 +637,7 @@ int rxe_completer(void *arg)
 		break;
 
 	case COMPST_READ:
-		state = do_read(qp, pkt, wqe);
+		state = rxe_do_read(qp, pkt, wqe);
 		break;
 
 	case COMPST_ATOMIC:

From patchwork Mon Oct 31 20:28:07 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 13026342
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, leon@kernel.org, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 18/18] RDMA/rxe: Enable sg code in rxe
Date: Mon, 31 Oct 2022 15:28:07 -0500
Message-Id: <20221031202805.19138-18-rpearsonhpe@gmail.com>
In-Reply-To: <20221031202805.19138-1-rpearsonhpe@gmail.com>
References: <20221031202805.19138-1-rpearsonhpe@gmail.com>
List-ID: X-Mailing-List: linux-rdma@vger.kernel.org

Enable the sg (fragmented skb) code in rxe by defaulting rxe_use_sg to
true and passing a frag flag into rxe_init_packet().

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe.c     | 2 +-
 drivers/infiniband/sw/rxe/rxe_req.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
index 388d8103ec20..fd5e916ecce9 100644
--- a/drivers/infiniband/sw/rxe/rxe.c
+++ b/drivers/infiniband/sw/rxe/rxe.c
@@ -14,7 +14,7 @@ MODULE_DESCRIPTION("Soft RDMA transport");
 MODULE_LICENSE("Dual BSD/GPL");
 
 /* if true allow using fragmented skbs */
-bool rxe_use_sg;
+bool rxe_use_sg = true;
 
 /* free resources for a rxe device all objects created for this device must
  * have been destroyed
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 984e3e957aef..a3760a84aa5d 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -521,8 +521,8 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp,
 	struct rxe_av *av;
 	struct rxe_ah *ah = NULL;
 	int pad;
+	bool frag;
 	int err = -EINVAL;
-	bool frag = false;
 
 	pkt->rxe = rxe;
 	pkt->opcode = opcode;
@@ -543,7 +543,7 @@ static struct sk_buff *rxe_init_req_packet(struct rxe_qp *qp,
 		pad + RXE_ICRC_SIZE;
 
 	/* init skb */
-	skb = rxe_init_packet(qp, av, pkt, NULL);
+	skb = rxe_init_packet(qp, av, pkt, &frag);
 	if (unlikely(!skb))
 		goto err_out;
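The final patch gates the whole series behind one driver-wide switch. A minimal user-space sketch of that pattern — a module-level flag consulted through an out-parameter, loosely modeled on how rxe_init_packet() now reports frag eligibility — is below; `init_packet_sketch` and its behavior are hypothetical, not the kernel function.

```c
#include <assert.h>
#include <stdbool.h>

/* Mirrors the driver-wide switch the patch flips on by default. */
static bool rxe_use_sg = true;

/* Hypothetical sketch: report via *frag whether the caller may build
 * a fragmented skb, the way rxe_init_packet()'s out-parameter does. */
static inline int init_packet_sketch(bool use_sg, bool *frag)
{
	/* only permit fragment handling when the switch is enabled */
	*frag = use_sg;
	return 0;
}
```

Keeping the decision behind a single bool lets the fragmented-skb paths ship dormant in earlier patches and be enabled atomically in this last one.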