From patchwork Sat Sep 17 03:10:20 2022
X-Patchwork-Submitter: Bob Pearson <rpearsonhpe@gmail.com>
X-Patchwork-Id: 12978986
From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next 01/13] RDMA/rxe: Replace START->FIRST, END->LAST
Date: Fri, 16 Sep 2022 22:10:20 -0500
Message-Id: <20220917031028.21187-1-rpearsonhpe@gmail.com>

Replace RXE_START_MASK with RXE_FIRST_MASK and RXE_END_MASK with
RXE_LAST_MASK, and add RXE_ONLY_MASK = FIRST | LAST, to match normal
IBA usage.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_comp.c   |   6 +-
 drivers/infiniband/sw/rxe/rxe_net.c    |   2 +-
 drivers/infiniband/sw/rxe/rxe_opcode.c | 143 +++++++++++--------------
 drivers/infiniband/sw/rxe/rxe_opcode.h |   5 +-
 drivers/infiniband/sw/rxe/rxe_req.c    |  10 +-
 drivers/infiniband/sw/rxe/rxe_resp.c   |   4 +-
 6 files changed, 76 insertions(+), 94 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index fb0c008af78c..1f10ae4a35d5 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -221,7 +221,7 @@ static inline enum comp_state check_ack(struct rxe_qp *qp,
 	switch (qp->comp.opcode) {
 	case -1:
 		/* Will catch all *_ONLY cases. */
-		if (!(mask & RXE_START_MASK))
+		if (!(mask & RXE_FIRST_MASK))
 			return COMPST_ERROR;
 		break;
@@ -354,7 +354,7 @@ static inline enum comp_state do_read(struct rxe_qp *qp,
 		return COMPST_ERROR;
 	}

-	if (wqe->dma.resid == 0 && (pkt->mask & RXE_END_MASK))
+	if (wqe->dma.resid == 0 && (pkt->mask & RXE_LAST_MASK))
 		return COMPST_COMP_ACK;

 	return COMPST_UPDATE_COMP;
@@ -636,7 +636,7 @@ int rxe_completer(void *arg)
 			break;

 		case COMPST_UPDATE_COMP:
-			if (pkt->mask & RXE_END_MASK)
+			if (pkt->mask & RXE_LAST_MASK)
 				qp->comp.opcode = -1;
 			else
 				qp->comp.opcode = pkt->opcode;
diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index c53f4529f098..d46190ad082f 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -428,7 +428,7 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 	}

 	if ((qp_type(qp) != IB_QPT_RC) &&
-	    (pkt->mask & RXE_END_MASK)) {
+	    (pkt->mask & RXE_LAST_MASK)) {
 		pkt->wqe->state = wqe_state_done;
 		rxe_run_task(&qp->comp.task, 1);
 	}
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
index d4ba4d506f17..0ea587c15931 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
@@ -107,7 +107,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_SEND_FIRST] = {
 		.name = "IB_OPCODE_RC_SEND_FIRST",
 		.mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_RWR_MASK |
-			RXE_SEND_MASK | RXE_START_MASK,
+			RXE_SEND_MASK | RXE_FIRST_MASK,
 		.length = RXE_BTH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -127,7 +127,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_SEND_LAST] = {
 		.name = "IB_OPCODE_RC_SEND_LAST",
 		.mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK |
-			RXE_SEND_MASK | RXE_END_MASK,
+			RXE_SEND_MASK | RXE_LAST_MASK,
 		.length = RXE_BTH_BYTES,
 		.offset = {
 			[RXE_BTH] = 0,
@@ -137,7 +137,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	[IB_OPCODE_RC_SEND_LAST_WITH_IMMEDIATE] = {
 		.name =
"IB_OPCODE_RC_SEND_LAST_WITH_IMMEDIATE", .mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_COMP_MASK | RXE_SEND_MASK | RXE_END_MASK, + RXE_COMP_MASK | RXE_SEND_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES, .offset = { [RXE_BTH] = 0, @@ -149,8 +149,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_SEND_ONLY] = { .name = "IB_OPCODE_RC_SEND_ONLY", .mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | - RXE_RWR_MASK | RXE_SEND_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_RWR_MASK | RXE_SEND_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES, .offset = { [RXE_BTH] = 0, @@ -161,7 +160,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_RC_SEND_ONLY_WITH_IMMEDIATE", .mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES, .offset = { [RXE_BTH] = 0, @@ -173,7 +172,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_RDMA_WRITE_FIRST] = { .name = "IB_OPCODE_RC_RDMA_WRITE_FIRST", .mask = RXE_RETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_WRITE_MASK | RXE_START_MASK, + RXE_WRITE_MASK | RXE_FIRST_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -195,7 +194,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_RDMA_WRITE_LAST] = { .name = "IB_OPCODE_RC_RDMA_WRITE_LAST", .mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | - RXE_END_MASK, + RXE_LAST_MASK, .length = RXE_BTH_BYTES, .offset = { [RXE_BTH] = 0, @@ -206,7 +205,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_RC_RDMA_WRITE_LAST_WITH_IMMEDIATE", .mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | RXE_COMP_MASK | RXE_RWR_MASK | - RXE_END_MASK, + RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES, .offset = { [RXE_BTH] = 0, @@ -218,8 +217,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_RDMA_WRITE_ONLY] = { .name = "IB_OPCODE_RC_RDMA_WRITE_ONLY", .mask = RXE_RETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_WRITE_MASK | RXE_START_MASK | - RXE_END_MASK, + RXE_WRITE_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -231,9 +229,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_RDMA_WRITE_ONLY_WITH_IMMEDIATE] = { .name = "IB_OPCODE_RC_RDMA_WRITE_ONLY_WITH_IMMEDIATE", .mask = RXE_RETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | - RXE_REQ_MASK | RXE_WRITE_MASK | - RXE_COMP_MASK | RXE_RWR_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_COMP_MASK | + RXE_RWR_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_RETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -248,7 +245,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_RDMA_READ_REQUEST] = { .name = "IB_OPCODE_RC_RDMA_READ_REQUEST", .mask = RXE_RETH_MASK | RXE_REQ_MASK | RXE_READ_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -260,7 +257,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST] = { .name = "IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST", .mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK | - RXE_START_MASK, + RXE_FIRST_MASK, .length = RXE_BTH_BYTES + RXE_AETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -281,7 +278,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] 
= { [IB_OPCODE_RC_RDMA_READ_RESPONSE_LAST] = { .name = "IB_OPCODE_RC_RDMA_READ_RESPONSE_LAST", .mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK | - RXE_END_MASK, + RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_AETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -293,7 +290,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_RDMA_READ_RESPONSE_ONLY] = { .name = "IB_OPCODE_RC_RDMA_READ_RESPONSE_ONLY", .mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_AETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -304,8 +301,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { }, [IB_OPCODE_RC_ACKNOWLEDGE] = { .name = "IB_OPCODE_RC_ACKNOWLEDGE", - .mask = RXE_AETH_MASK | RXE_ACK_MASK | RXE_START_MASK | - RXE_END_MASK, + .mask = RXE_AETH_MASK | RXE_ACK_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_AETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -317,7 +313,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE] = { .name = "IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE", .mask = RXE_AETH_MASK | RXE_ATMACK_MASK | RXE_ACK_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_ATMACK_BYTES + RXE_AETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -332,7 +328,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_COMPARE_SWAP] = { .name = "IB_OPCODE_RC_COMPARE_SWAP", .mask = RXE_ATMETH_MASK | RXE_REQ_MASK | RXE_ATOMIC_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_ATMETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -344,7 +340,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_FETCH_ADD] = { .name = "IB_OPCODE_RC_FETCH_ADD", .mask = RXE_ATMETH_MASK | RXE_REQ_MASK | RXE_ATOMIC_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_ATMETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -356,7 +352,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_SEND_LAST_WITH_INVALIDATE] = { .name = "IB_OPCODE_RC_SEND_LAST_WITH_INVALIDATE", .mask = RXE_IETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_COMP_MASK | RXE_SEND_MASK | RXE_END_MASK, + RXE_COMP_MASK | RXE_SEND_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_IETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -369,7 +365,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_RC_SEND_ONLY_INV", .mask = RXE_IETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK | - RXE_END_MASK | RXE_START_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -383,7 +379,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_SEND_FIRST] = { .name = "IB_OPCODE_UC_SEND_FIRST", .mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_RWR_MASK | - RXE_SEND_MASK | RXE_START_MASK, + RXE_SEND_MASK | RXE_FIRST_MASK, .length = RXE_BTH_BYTES, .offset = { [RXE_BTH] = 0, @@ -403,7 +399,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_SEND_LAST] = { .name = "IB_OPCODE_UC_SEND_LAST", .mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | - RXE_SEND_MASK | RXE_END_MASK, + RXE_SEND_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES, .offset = { [RXE_BTH] = 0, @@ -413,7 +409,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_SEND_LAST_WITH_IMMEDIATE] = { .name = "IB_OPCODE_UC_SEND_LAST_WITH_IMMEDIATE", .mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_COMP_MASK | RXE_SEND_MASK | 
RXE_END_MASK, + RXE_COMP_MASK | RXE_SEND_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES, .offset = { [RXE_BTH] = 0, @@ -425,8 +421,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_SEND_ONLY] = { .name = "IB_OPCODE_UC_SEND_ONLY", .mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | - RXE_RWR_MASK | RXE_SEND_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_RWR_MASK | RXE_SEND_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES, .offset = { [RXE_BTH] = 0, @@ -437,7 +432,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_UC_SEND_ONLY_WITH_IMMEDIATE", .mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES, .offset = { [RXE_BTH] = 0, @@ -449,7 +444,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_RDMA_WRITE_FIRST] = { .name = "IB_OPCODE_UC_RDMA_WRITE_FIRST", .mask = RXE_RETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_WRITE_MASK | RXE_START_MASK, + RXE_WRITE_MASK | RXE_FIRST_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -471,7 +466,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_RDMA_WRITE_LAST] = { .name = "IB_OPCODE_UC_RDMA_WRITE_LAST", .mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | - RXE_END_MASK, + RXE_LAST_MASK, .length = RXE_BTH_BYTES, .offset = { [RXE_BTH] = 0, @@ -482,7 +477,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_UC_RDMA_WRITE_LAST_WITH_IMMEDIATE", .mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | RXE_COMP_MASK | RXE_RWR_MASK | - RXE_END_MASK, + RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES, .offset = { [RXE_BTH] = 0, @@ -494,8 +489,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_RDMA_WRITE_ONLY] = { .name = "IB_OPCODE_UC_RDMA_WRITE_ONLY", .mask = RXE_RETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_WRITE_MASK | RXE_START_MASK | - RXE_END_MASK, + RXE_WRITE_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -507,9 +501,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_RDMA_WRITE_ONLY_WITH_IMMEDIATE] = { .name = "IB_OPCODE_UC_RDMA_WRITE_ONLY_WITH_IMMEDIATE", .mask = RXE_RETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | - RXE_REQ_MASK | RXE_WRITE_MASK | - RXE_COMP_MASK | RXE_RWR_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_COMP_MASK | + RXE_RWR_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_RETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -527,7 +520,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_RD_SEND_FIRST", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_RWR_MASK | RXE_SEND_MASK | - RXE_START_MASK, + RXE_FIRST_MASK, .length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -542,8 +535,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_SEND_MIDDLE] = { .name = "IB_OPCODE_RD_SEND_MIDDLE", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_PAYLOAD_MASK | - RXE_REQ_MASK | RXE_SEND_MASK | - RXE_MIDDLE_MASK, + RXE_REQ_MASK | RXE_SEND_MASK | RXE_MIDDLE_MASK, .length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -559,7 +551,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_RD_SEND_LAST", .mask = RXE_RDETH_MASK | 
RXE_DETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | RXE_SEND_MASK | - RXE_END_MASK, + RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -574,9 +566,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_SEND_LAST_WITH_IMMEDIATE] = { .name = "IB_OPCODE_RD_SEND_LAST_WITH_IMMEDIATE", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_IMMDT_MASK | - RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_COMP_MASK | RXE_SEND_MASK | - RXE_END_MASK, + RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | + RXE_SEND_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -597,7 +588,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_RD_SEND_ONLY", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | - RXE_SEND_MASK | RXE_START_MASK | RXE_END_MASK, + RXE_SEND_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -612,9 +603,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_SEND_ONLY_WITH_IMMEDIATE] = { .name = "IB_OPCODE_RD_SEND_ONLY_WITH_IMMEDIATE", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_IMMDT_MASK | - RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | + RXE_RWR_MASK | RXE_SEND_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -634,8 +624,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_WRITE_FIRST] = { .name = "IB_OPCODE_RD_RDMA_WRITE_FIRST", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_RETH_MASK | - RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_WRITE_MASK | RXE_START_MASK, + RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | + RXE_FIRST_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -655,8 +645,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_WRITE_MIDDLE] = { .name = "IB_OPCODE_RD_RDMA_WRITE_MIDDLE", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_PAYLOAD_MASK | - RXE_REQ_MASK | RXE_WRITE_MASK | - RXE_MIDDLE_MASK, + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_MIDDLE_MASK, .length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -671,8 +660,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_WRITE_LAST] = { .name = "IB_OPCODE_RD_RDMA_WRITE_LAST", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_PAYLOAD_MASK | - RXE_REQ_MASK | RXE_WRITE_MASK | - RXE_END_MASK, + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -687,9 +675,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_WRITE_LAST_WITH_IMMEDIATE] = { .name = "IB_OPCODE_RD_RDMA_WRITE_LAST_WITH_IMMEDIATE", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_IMMDT_MASK | - RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_WRITE_MASK | RXE_COMP_MASK | RXE_RWR_MASK | - RXE_END_MASK, + RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | + RXE_COMP_MASK | RXE_RWR_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -709,9 +696,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_WRITE_ONLY] = { .name = "IB_OPCODE_RD_RDMA_WRITE_ONLY", .mask = RXE_RDETH_MASK | 
RXE_DETH_MASK | RXE_RETH_MASK | - RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_WRITE_MASK | RXE_START_MASK | - RXE_END_MASK, + RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -731,10 +717,9 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_WRITE_ONLY_WITH_IMMEDIATE] = { .name = "IB_OPCODE_RD_RDMA_WRITE_ONLY_WITH_IMMEDIATE", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_RETH_MASK | - RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | - RXE_REQ_MASK | RXE_WRITE_MASK | - RXE_COMP_MASK | RXE_RWR_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | + RXE_WRITE_MASK | RXE_COMP_MASK | RXE_RWR_MASK | + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_RETH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -759,8 +744,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_READ_REQUEST] = { .name = "IB_OPCODE_RD_RDMA_READ_REQUEST", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_RETH_MASK | - RXE_REQ_MASK | RXE_READ_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_REQ_MASK | RXE_READ_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -779,9 +763,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { }, [IB_OPCODE_RD_RDMA_READ_RESPONSE_FIRST] = { .name = "IB_OPCODE_RD_RDMA_READ_RESPONSE_FIRST", - .mask = RXE_RDETH_MASK | RXE_AETH_MASK | - RXE_PAYLOAD_MASK | RXE_ACK_MASK | - RXE_START_MASK, + .mask = RXE_RDETH_MASK | RXE_AETH_MASK | RXE_PAYLOAD_MASK | + RXE_ACK_MASK | RXE_FIRST_MASK, .length = RXE_BTH_BYTES + RXE_AETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -808,7 +791,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_READ_RESPONSE_LAST] = { .name = "IB_OPCODE_RD_RDMA_READ_RESPONSE_LAST", .mask = RXE_RDETH_MASK | RXE_AETH_MASK | RXE_PAYLOAD_MASK | - RXE_ACK_MASK | RXE_END_MASK, + RXE_ACK_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_AETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -823,7 +806,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_READ_RESPONSE_ONLY] = { .name = "IB_OPCODE_RD_RDMA_READ_RESPONSE_ONLY", .mask = RXE_RDETH_MASK | RXE_AETH_MASK | RXE_PAYLOAD_MASK | - RXE_ACK_MASK | RXE_START_MASK | RXE_END_MASK, + RXE_ACK_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_AETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -838,7 +821,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_ACKNOWLEDGE] = { .name = "IB_OPCODE_RD_ACKNOWLEDGE", .mask = RXE_RDETH_MASK | RXE_AETH_MASK | RXE_ACK_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_AETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -850,7 +833,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_ATOMIC_ACKNOWLEDGE] = { .name = "IB_OPCODE_RD_ATOMIC_ACKNOWLEDGE", .mask = RXE_RDETH_MASK | RXE_AETH_MASK | RXE_ATMACK_MASK | - RXE_ACK_MASK | RXE_START_MASK | RXE_END_MASK, + RXE_ACK_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_ATMACK_BYTES + RXE_AETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -866,8 +849,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_COMPARE_SWAP] = { .name = "RD_COMPARE_SWAP", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_ATMETH_MASK | - RXE_REQ_MASK | RXE_ATOMIC_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_REQ_MASK | RXE_ATOMIC_MASK | RXE_ONLY_MASK, .length = 
RXE_BTH_BYTES + RXE_ATMETH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -887,8 +869,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_FETCH_ADD] = { .name = "IB_OPCODE_RD_FETCH_ADD", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_ATMETH_MASK | - RXE_REQ_MASK | RXE_ATOMIC_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_REQ_MASK | RXE_ATOMIC_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_ATMETH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -911,7 +892,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_UD_SEND_ONLY", .mask = RXE_DETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_DETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -924,7 +905,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE", .mask = RXE_DETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | - RXE_SEND_MASK | RXE_START_MASK | RXE_END_MASK, + RXE_SEND_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_DETH_BYTES, .offset = { [RXE_BTH] = 0, diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.h b/drivers/infiniband/sw/rxe/rxe_opcode.h index 8f9aaaf260f2..d2b6a8232e92 100644 --- a/drivers/infiniband/sw/rxe/rxe_opcode.h +++ b/drivers/infiniband/sw/rxe/rxe_opcode.h @@ -75,9 +75,10 @@ enum rxe_hdr_mask { RXE_RWR_MASK = BIT(NUM_HDR_TYPES + 6), RXE_COMP_MASK = BIT(NUM_HDR_TYPES + 7), - RXE_START_MASK = BIT(NUM_HDR_TYPES + 8), + RXE_FIRST_MASK = BIT(NUM_HDR_TYPES + 8), RXE_MIDDLE_MASK = BIT(NUM_HDR_TYPES + 9), - RXE_END_MASK = BIT(NUM_HDR_TYPES + 10), + RXE_LAST_MASK = BIT(NUM_HDR_TYPES + 10), + RXE_ONLY_MASK = RXE_FIRST_MASK | RXE_LAST_MASK, RXE_LOOPBACK_MASK = BIT(NUM_HDR_TYPES + 12), diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index f63771207970..e136abc802af 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -403,7 +403,7 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp, /* init bth */ solicited = (ibwr->send_flags & IB_SEND_SOLICITED) && - (pkt->mask & RXE_END_MASK) && + (pkt->mask & RXE_LAST_MASK) && ((pkt->mask & (RXE_SEND_MASK)) || (pkt->mask & (RXE_WRITE_MASK | RXE_IMMDT_MASK)) == (RXE_WRITE_MASK | RXE_IMMDT_MASK)); @@ -411,7 +411,7 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp, qp_num = (pkt->mask & RXE_DETH_MASK) ? 
			ibwr->wr.ud.remote_qpn : qp->attr.dest_qp_num;

-	ack_req = ((pkt->mask & RXE_END_MASK) ||
+	ack_req = ((pkt->mask & RXE_LAST_MASK) ||
 		   (qp->req.noack_pkts++ > RXE_MAX_PKT_PER_ACK));
 	if (ack_req)
 		qp->req.noack_pkts = 0;
@@ -493,7 +493,7 @@ static void update_wqe_state(struct rxe_qp *qp,
 		struct rxe_send_wqe *wqe,
 		struct rxe_pkt_info *pkt)
 {
-	if (pkt->mask & RXE_END_MASK) {
+	if (pkt->mask & RXE_LAST_MASK) {
 		if (qp_type(qp) == IB_QPT_RC)
 			wqe->state = wqe_state_pending;
 	} else {
@@ -513,7 +513,7 @@ static void update_wqe_psn(struct rxe_qp *qp,
 	if (num_pkt == 0)
 		num_pkt = 1;

-	if (pkt->mask & RXE_START_MASK) {
+	if (pkt->mask & RXE_FIRST_MASK) {
 		wqe->first_psn = qp->req.psn;
 		wqe->last_psn = (qp->req.psn + num_pkt - 1) & BTH_PSN_MASK;
 	}
@@ -550,7 +550,7 @@ static void update_state(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
 {
 	qp->req.opcode = pkt->opcode;

-	if (pkt->mask & RXE_END_MASK)
+	if (pkt->mask & RXE_LAST_MASK)
 		qp->req.wqe_index = queue_next_index(qp->sq.queue,
 						     qp->req.wqe_index);
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 7c336db5cb54..cb560cbe418d 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -147,7 +147,7 @@ static enum resp_states check_psn(struct rxe_qp *qp,
 	case IB_QPT_UC:
 		if (qp->resp.drop_msg || diff != 0) {
-			if (pkt->mask & RXE_START_MASK) {
+			if (pkt->mask & RXE_FIRST_MASK) {
 				qp->resp.drop_msg = 0;
 				return RESPST_CHK_OP_SEQ;
 			}
@@ -901,7 +901,7 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
 			return RESPST_ERR_INVALIDATE_RKEY;
 	}

-	if (pkt->mask & RXE_END_MASK)
+	if (pkt->mask & RXE_LAST_MASK)
 		/* We successfully processed this new request. */
 		qp->resp.msn++;
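To make the renamed bits concrete: RXE_FIRST_MASK, RXE_MIDDLE_MASK and
RXE_LAST_MASK describe where a packet falls within a multi-packet message,
and an ONLY packet simply carries FIRST and LAST together. A minimal
stand-alone C sketch of that classification (the bit positions here are
assumptions for illustration; the real values are defined relative to
NUM_HDR_TYPES in rxe_opcode.h):

/* Illustrative sketch only -- bit positions are assumed, not the
 * driver's actual values from rxe_opcode.h.
 */
#include <stdio.h>

enum {
	RXE_FIRST_MASK	= 1 << 0,	/* first packet of a message */
	RXE_MIDDLE_MASK	= 1 << 1,	/* interior packet */
	RXE_LAST_MASK	= 1 << 2,	/* last packet of a message */
	/* an ONLY packet is both the first and the last packet */
	RXE_ONLY_MASK	= RXE_FIRST_MASK | RXE_LAST_MASK,
};

static const char *placement(unsigned int mask)
{
	if ((mask & RXE_ONLY_MASK) == RXE_ONLY_MASK)
		return "only";
	if (mask & RXE_FIRST_MASK)
		return "first";
	if (mask & RXE_LAST_MASK)
		return "last";
	return "middle";
}

int main(void)
{
	printf("%s\n", placement(RXE_ONLY_MASK));	/* only */
	printf("%s\n", placement(RXE_FIRST_MASK));	/* first */
	printf("%s\n", placement(RXE_MIDDLE_MASK));	/* middle */
	return 0;
}

Because an ONLY packet has both bits set, existing tests such as
(mask & RXE_FIRST_MASK) still match it, while
(mask & RXE_ONLY_MASK) == RXE_ONLY_MASK picks out single-packet messages
specifically.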
From patchwork Sat Sep 17 03:10:21 2022
X-Patchwork-Submitter: Bob Pearson <rpearsonhpe@gmail.com>
X-Patchwork-Id: 12978987

From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next 02/13] RDMA/rxe: Move next_opcode() to rxe_opcode.c
Date: Fri, 16 Sep 2022 22:10:21 -0500
Message-Id: <20220917031028.21187-2-rpearsonhpe@gmail.com>
In-Reply-To: <20220917031028.21187-1-rpearsonhpe@gmail.com>
References: <20220917031028.21187-1-rpearsonhpe@gmail.com>

Move next_opcode() from rxe_req.c to rxe_opcode.c.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_loc.h    |   3 +
 drivers/infiniband/sw/rxe/rxe_opcode.c | 156 ++++++++++++++++++++++++-
 drivers/infiniband/sw/rxe/rxe_req.c    | 156 -------------------------
 3 files changed, 157 insertions(+), 158 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 22f6cc31d1d6..5526d83697c7 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -99,6 +99,9 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 		    struct sk_buff *skb);
 const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num);

+/* opcode.c */
+int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, u32 opcode);
+
 /* rxe_qp.c */
 int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init);
 int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
index 0ea587c15931..6b1a1f197c4d 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
@@ -5,8 +5,8 @@
  */

 #include <rdma/ib_pack.h>
-#include "rxe_opcode.h"
-#include "rxe_hdr.h"
+
+#include "rxe.h"

 /* useful information about work request opcodes and pkt opcodes in
  * table form
@@ -919,3 +919,155 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 	},
 };
+
+static int next_opcode_rc(struct rxe_qp *qp, u32 opcode, int fits)
+{
+	switch (opcode) {
+	case IB_WR_RDMA_WRITE:
+		if (qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST ||
+		    qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE)
+			return fits ?
+ IB_OPCODE_RC_RDMA_WRITE_LAST : + IB_OPCODE_RC_RDMA_WRITE_MIDDLE; + else + return fits ? + IB_OPCODE_RC_RDMA_WRITE_ONLY : + IB_OPCODE_RC_RDMA_WRITE_FIRST; + + case IB_WR_RDMA_WRITE_WITH_IMM: + if (qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST || + qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE) + return fits ? + IB_OPCODE_RC_RDMA_WRITE_LAST_WITH_IMMEDIATE : + IB_OPCODE_RC_RDMA_WRITE_MIDDLE; + else + return fits ? + IB_OPCODE_RC_RDMA_WRITE_ONLY_WITH_IMMEDIATE : + IB_OPCODE_RC_RDMA_WRITE_FIRST; + + case IB_WR_SEND: + if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE) + return fits ? + IB_OPCODE_RC_SEND_LAST : + IB_OPCODE_RC_SEND_MIDDLE; + else + return fits ? + IB_OPCODE_RC_SEND_ONLY : + IB_OPCODE_RC_SEND_FIRST; + + case IB_WR_SEND_WITH_IMM: + if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE) + return fits ? + IB_OPCODE_RC_SEND_LAST_WITH_IMMEDIATE : + IB_OPCODE_RC_SEND_MIDDLE; + else + return fits ? + IB_OPCODE_RC_SEND_ONLY_WITH_IMMEDIATE : + IB_OPCODE_RC_SEND_FIRST; + + case IB_WR_RDMA_READ: + return IB_OPCODE_RC_RDMA_READ_REQUEST; + + case IB_WR_ATOMIC_CMP_AND_SWP: + return IB_OPCODE_RC_COMPARE_SWAP; + + case IB_WR_ATOMIC_FETCH_AND_ADD: + return IB_OPCODE_RC_FETCH_ADD; + + case IB_WR_SEND_WITH_INV: + if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE) + return fits ? IB_OPCODE_RC_SEND_LAST_WITH_INVALIDATE : + IB_OPCODE_RC_SEND_MIDDLE; + else + return fits ? IB_OPCODE_RC_SEND_ONLY_WITH_INVALIDATE : + IB_OPCODE_RC_SEND_FIRST; + case IB_WR_REG_MR: + case IB_WR_LOCAL_INV: + return opcode; + } + + return -EINVAL; +} + +static int next_opcode_uc(struct rxe_qp *qp, u32 opcode, int fits) +{ + switch (opcode) { + case IB_WR_RDMA_WRITE: + if (qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST || + qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE) + return fits ? + IB_OPCODE_UC_RDMA_WRITE_LAST : + IB_OPCODE_UC_RDMA_WRITE_MIDDLE; + else + return fits ? + IB_OPCODE_UC_RDMA_WRITE_ONLY : + IB_OPCODE_UC_RDMA_WRITE_FIRST; + + case IB_WR_RDMA_WRITE_WITH_IMM: + if (qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST || + qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE) + return fits ? + IB_OPCODE_UC_RDMA_WRITE_LAST_WITH_IMMEDIATE : + IB_OPCODE_UC_RDMA_WRITE_MIDDLE; + else + return fits ? + IB_OPCODE_UC_RDMA_WRITE_ONLY_WITH_IMMEDIATE : + IB_OPCODE_UC_RDMA_WRITE_FIRST; + + case IB_WR_SEND: + if (qp->req.opcode == IB_OPCODE_UC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_UC_SEND_MIDDLE) + return fits ? + IB_OPCODE_UC_SEND_LAST : + IB_OPCODE_UC_SEND_MIDDLE; + else + return fits ? + IB_OPCODE_UC_SEND_ONLY : + IB_OPCODE_UC_SEND_FIRST; + + case IB_WR_SEND_WITH_IMM: + if (qp->req.opcode == IB_OPCODE_UC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_UC_SEND_MIDDLE) + return fits ? + IB_OPCODE_UC_SEND_LAST_WITH_IMMEDIATE : + IB_OPCODE_UC_SEND_MIDDLE; + else + return fits ? 
+ IB_OPCODE_UC_SEND_ONLY_WITH_IMMEDIATE : + IB_OPCODE_UC_SEND_FIRST; + } + + return -EINVAL; +} + +int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, u32 opcode) +{ + int fits = (wqe->dma.resid <= qp->mtu); + + switch (qp_type(qp)) { + case IB_QPT_RC: + return next_opcode_rc(qp, opcode, fits); + + case IB_QPT_UC: + return next_opcode_uc(qp, opcode, fits); + + case IB_QPT_UD: + case IB_QPT_GSI: + switch (opcode) { + case IB_WR_SEND: + return IB_OPCODE_UD_SEND_ONLY; + + case IB_WR_SEND_WITH_IMM: + return IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE; + } + break; + + default: + break; + } + + return -EINVAL; +} diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index e136abc802af..d2a9abfed596 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -11,9 +11,6 @@ #include "rxe_loc.h" #include "rxe_queue.h" -static int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, - u32 opcode); - static inline void retry_first_write_send(struct rxe_qp *qp, struct rxe_send_wqe *wqe, int npsn) { @@ -194,159 +191,6 @@ static int rxe_wqe_is_fenced(struct rxe_qp *qp, struct rxe_send_wqe *wqe) atomic_read(&qp->req.rd_atomic) != qp->attr.max_rd_atomic; } -static int next_opcode_rc(struct rxe_qp *qp, u32 opcode, int fits) -{ - switch (opcode) { - case IB_WR_RDMA_WRITE: - if (qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST || - qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE) - return fits ? - IB_OPCODE_RC_RDMA_WRITE_LAST : - IB_OPCODE_RC_RDMA_WRITE_MIDDLE; - else - return fits ? - IB_OPCODE_RC_RDMA_WRITE_ONLY : - IB_OPCODE_RC_RDMA_WRITE_FIRST; - - case IB_WR_RDMA_WRITE_WITH_IMM: - if (qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST || - qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE) - return fits ? - IB_OPCODE_RC_RDMA_WRITE_LAST_WITH_IMMEDIATE : - IB_OPCODE_RC_RDMA_WRITE_MIDDLE; - else - return fits ? - IB_OPCODE_RC_RDMA_WRITE_ONLY_WITH_IMMEDIATE : - IB_OPCODE_RC_RDMA_WRITE_FIRST; - - case IB_WR_SEND: - if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST || - qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE) - return fits ? - IB_OPCODE_RC_SEND_LAST : - IB_OPCODE_RC_SEND_MIDDLE; - else - return fits ? - IB_OPCODE_RC_SEND_ONLY : - IB_OPCODE_RC_SEND_FIRST; - - case IB_WR_SEND_WITH_IMM: - if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST || - qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE) - return fits ? - IB_OPCODE_RC_SEND_LAST_WITH_IMMEDIATE : - IB_OPCODE_RC_SEND_MIDDLE; - else - return fits ? - IB_OPCODE_RC_SEND_ONLY_WITH_IMMEDIATE : - IB_OPCODE_RC_SEND_FIRST; - - case IB_WR_RDMA_READ: - return IB_OPCODE_RC_RDMA_READ_REQUEST; - - case IB_WR_ATOMIC_CMP_AND_SWP: - return IB_OPCODE_RC_COMPARE_SWAP; - - case IB_WR_ATOMIC_FETCH_AND_ADD: - return IB_OPCODE_RC_FETCH_ADD; - - case IB_WR_SEND_WITH_INV: - if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST || - qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE) - return fits ? IB_OPCODE_RC_SEND_LAST_WITH_INVALIDATE : - IB_OPCODE_RC_SEND_MIDDLE; - else - return fits ? IB_OPCODE_RC_SEND_ONLY_WITH_INVALIDATE : - IB_OPCODE_RC_SEND_FIRST; - case IB_WR_REG_MR: - case IB_WR_LOCAL_INV: - return opcode; - } - - return -EINVAL; -} - -static int next_opcode_uc(struct rxe_qp *qp, u32 opcode, int fits) -{ - switch (opcode) { - case IB_WR_RDMA_WRITE: - if (qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST || - qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE) - return fits ? - IB_OPCODE_UC_RDMA_WRITE_LAST : - IB_OPCODE_UC_RDMA_WRITE_MIDDLE; - else - return fits ? 
-			IB_OPCODE_UC_RDMA_WRITE_ONLY :
-			IB_OPCODE_UC_RDMA_WRITE_FIRST;
-
-	case IB_WR_RDMA_WRITE_WITH_IMM:
-		if (qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST ||
-		    qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE)
-			return fits ?
-				IB_OPCODE_UC_RDMA_WRITE_LAST_WITH_IMMEDIATE :
-				IB_OPCODE_UC_RDMA_WRITE_MIDDLE;
-		else
-			return fits ?
-				IB_OPCODE_UC_RDMA_WRITE_ONLY_WITH_IMMEDIATE :
-				IB_OPCODE_UC_RDMA_WRITE_FIRST;
-
-	case IB_WR_SEND:
-		if (qp->req.opcode == IB_OPCODE_UC_SEND_FIRST ||
-		    qp->req.opcode == IB_OPCODE_UC_SEND_MIDDLE)
-			return fits ?
-				IB_OPCODE_UC_SEND_LAST :
-				IB_OPCODE_UC_SEND_MIDDLE;
-		else
-			return fits ?
-				IB_OPCODE_UC_SEND_ONLY :
-				IB_OPCODE_UC_SEND_FIRST;
-
-	case IB_WR_SEND_WITH_IMM:
-		if (qp->req.opcode == IB_OPCODE_UC_SEND_FIRST ||
-		    qp->req.opcode == IB_OPCODE_UC_SEND_MIDDLE)
-			return fits ?
-				IB_OPCODE_UC_SEND_LAST_WITH_IMMEDIATE :
-				IB_OPCODE_UC_SEND_MIDDLE;
-		else
-			return fits ?
-				IB_OPCODE_UC_SEND_ONLY_WITH_IMMEDIATE :
-				IB_OPCODE_UC_SEND_FIRST;
-	}
-
-	return -EINVAL;
-}
-
-static int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
-		       u32 opcode)
-{
-	int fits = (wqe->dma.resid <= qp->mtu);
-
-	switch (qp_type(qp)) {
-	case IB_QPT_RC:
-		return next_opcode_rc(qp, opcode, fits);
-
-	case IB_QPT_UC:
-		return next_opcode_uc(qp, opcode, fits);
-
-	case IB_QPT_UD:
-	case IB_QPT_GSI:
-		switch (opcode) {
-		case IB_WR_SEND:
-			return IB_OPCODE_UD_SEND_ONLY;
-
-		case IB_WR_SEND_WITH_IMM:
-			return IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE;
-		}
-		break;
-
-	default:
-		break;
-	}
-
-	return -EINVAL;
-}
-
 static inline int check_init_depth(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 {
 	int depth;
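The move itself is mechanical, but the rule next_opcode() encodes is worth
spelling out: each work request advances a per-QP opcode state machine, and
the "fits" flag (remaining payload fits in one MTU) selects between
LAST/ONLY and MIDDLE/FIRST. A stand-alone sketch of that rule for sends,
with the driver types replaced by plain strings for illustration
(next_send_opcode is a hypothetical stand-in, not a driver function):

/* Stand-alone sketch of the send opcode progression implemented by
 * next_opcode()/next_opcode_rc(); driver types are reduced to strings
 * purely for illustration.  "fits" means the remaining payload fits
 * in a single MTU.
 */
#include <stdio.h>
#include <string.h>

static const char *next_send_opcode(const char *prev, int fits)
{
	int in_progress = !strcmp(prev, "SEND_FIRST") ||
			  !strcmp(prev, "SEND_MIDDLE");

	if (in_progress)
		return fits ? "SEND_LAST" : "SEND_MIDDLE";

	return fits ? "SEND_ONLY" : "SEND_FIRST";
}

int main(void)
{
	/* a three-MTU message progresses FIRST -> MIDDLE -> LAST */
	printf("%s\n", next_send_opcode("NONE", 0));		/* SEND_FIRST */
	printf("%s\n", next_send_opcode("SEND_FIRST", 0));	/* SEND_MIDDLE */
	printf("%s\n", next_send_opcode("SEND_MIDDLE", 1));	/* SEND_LAST */

	/* a message that fits in one MTU is sent as a single ONLY packet */
	printf("%s\n", next_send_opcode("NONE", 1));		/* SEND_ONLY */
	return 0;
}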
From patchwork Sat Sep 17 03:10:22 2022
X-Patchwork-Submitter: Bob Pearson <rpearsonhpe@gmail.com>
X-Patchwork-Id: 12978988

From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next 03/13] RDMA: Add xrc opcodes to ib_pack.h
Date: Fri, 16 Sep 2022 22:10:22 -0500
Message-Id: <20220917031028.21187-3-rpearsonhpe@gmail.com>
In-Reply-To: <20220917031028.21187-1-rpearsonhpe@gmail.com>
References: <20220917031028.21187-1-rpearsonhpe@gmail.com>

Extend ib_pack.h to include xrc opcodes.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 include/rdma/ib_pack.h | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/include/rdma/ib_pack.h b/include/rdma/ib_pack.h
index a9162f25beaf..cc9aac05d38e 100644
--- a/include/rdma/ib_pack.h
+++ b/include/rdma/ib_pack.h
@@ -56,8 +56,11 @@ enum {
 	IB_OPCODE_UD                                = 0x60,
 	/* per IBTA 1.3 vol 1 Table 38, A10.3.2 */
 	IB_OPCODE_CNP                               = 0x80,
+	IB_OPCODE_XRC                               = 0xa0,
 	/* Manufacturer specific */
 	IB_OPCODE_MSP                               = 0xe0,
+	/* opcode type bits */
+	IB_OPCODE_TYPE                              = 0xe0,

 	/* operations -- just used to define real constants */
 	IB_OPCODE_SEND_FIRST                        = 0x00,
@@ -84,6 +87,8 @@ enum {
 	/* opcode 0x15 is reserved */
 	IB_OPCODE_SEND_LAST_WITH_INVALIDATE         = 0x16,
 	IB_OPCODE_SEND_ONLY_WITH_INVALIDATE         = 0x17,
+	/* opcode command bits */
+	IB_OPCODE_CMD                               = 0x1f,

 	/* real constants follow -- see comment about above IB_OPCODE()
 	   macro for more details */
@@ -152,7 +157,32 @@ enum {

 	/* UD */
 	IB_OPCODE(UD, SEND_ONLY),
-	IB_OPCODE(UD, SEND_ONLY_WITH_IMMEDIATE)
+	IB_OPCODE(UD, SEND_ONLY_WITH_IMMEDIATE),
+
+	/* XRC */
+	IB_OPCODE(XRC, SEND_FIRST),
+	IB_OPCODE(XRC, SEND_MIDDLE),
+	IB_OPCODE(XRC, SEND_LAST),
+	IB_OPCODE(XRC, SEND_LAST_WITH_IMMEDIATE),
+	IB_OPCODE(XRC, SEND_ONLY),
+	IB_OPCODE(XRC, SEND_ONLY_WITH_IMMEDIATE),
+	IB_OPCODE(XRC, RDMA_WRITE_FIRST),
+	IB_OPCODE(XRC, RDMA_WRITE_MIDDLE),
+	IB_OPCODE(XRC, RDMA_WRITE_LAST),
+	IB_OPCODE(XRC, RDMA_WRITE_LAST_WITH_IMMEDIATE),
+	IB_OPCODE(XRC, RDMA_WRITE_ONLY),
+	IB_OPCODE(XRC, RDMA_WRITE_ONLY_WITH_IMMEDIATE),
+	IB_OPCODE(XRC, RDMA_READ_REQUEST),
+	IB_OPCODE(XRC, RDMA_READ_RESPONSE_FIRST),
+	IB_OPCODE(XRC, RDMA_READ_RESPONSE_MIDDLE),
+	IB_OPCODE(XRC, RDMA_READ_RESPONSE_LAST),
+	IB_OPCODE(XRC, RDMA_READ_RESPONSE_ONLY),
+	IB_OPCODE(XRC, ACKNOWLEDGE),
+	IB_OPCODE(XRC, ATOMIC_ACKNOWLEDGE),
+	IB_OPCODE(XRC, COMPARE_SWAP),
+	IB_OPCODE(XRC, FETCH_ADD),
+	IB_OPCODE(XRC, SEND_LAST_WITH_INVALIDATE),
+	IB_OPCODE(XRC, SEND_ONLY_WITH_INVALIDATE),
 };

 enum {
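For readers who have not opened ib_pack.h: the IB_OPCODE() macro forms each
real constant by combining a transport group code (for example
IB_OPCODE_XRC = 0xa0) with a five-bit operation code, which is what makes
the new IB_OPCODE_TYPE and IB_OPCODE_CMD masks useful for splitting a BTH
opcode back into its two halves. A small sketch under that reading (the
constant values mirror the enum above):

/* Sketch of BTH opcode composition; the constants mirror the ib_pack.h
 * enum above, and IB_OPCODE() performs the equivalent combination at
 * compile time.
 */
#include <stdio.h>

#define IB_OPCODE_XRC		0xa0	/* transport group added above */
#define IB_OPCODE_TYPE		0xe0	/* top three bits: transport */
#define IB_OPCODE_CMD		0x1f	/* low five bits: operation */
#define IB_OPCODE_SEND_ONLY	0x04

int main(void)
{
	unsigned int opcode = IB_OPCODE_XRC + IB_OPCODE_SEND_ONLY;

	printf("opcode:    0x%02x\n", opcode);			/* 0xa4 */
	printf("transport: 0x%02x\n", opcode & IB_OPCODE_TYPE);	/* 0xa0 */
	printf("command:   0x%02x\n", opcode & IB_OPCODE_CMD);	/* 0x04 */
	return 0;
}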
From patchwork Sat Sep 17 03:10:23 2022
X-Patchwork-Submitter: Bob Pearson <rpearsonhpe@gmail.com>
X-Patchwork-Id: 12978989

From: Bob Pearson <rpearsonhpe@gmail.com>
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson <rpearsonhpe@gmail.com>
Subject: [PATCH for-next 04/13] RDMA/rxe: Extend opcodes and headers to support xrc
Date: Fri, 16 Sep 2022 22:10:23 -0500
Message-Id: <20220917031028.21187-4-rpearsonhpe@gmail.com>
In-Reply-To: <20220917031028.21187-1-rpearsonhpe@gmail.com>
References: <20220917031028.21187-1-rpearsonhpe@gmail.com>

Extend rxe_hdr.h to include the xrceth header, and extend the opcode
tables in rxe_opcode.c to support XRC operations and QPs.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_hdr.h    |  36 +++
 drivers/infiniband/sw/rxe/rxe_opcode.c | 379 +++++++++++++++++++++++--
 drivers/infiniband/sw/rxe/rxe_opcode.h |   4 +-
 3 files changed, 395 insertions(+), 24 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_hdr.h b/drivers/infiniband/sw/rxe/rxe_hdr.h
index e432f9e37795..e947bcf75209 100644
--- a/drivers/infiniband/sw/rxe/rxe_hdr.h
+++ b/drivers/infiniband/sw/rxe/rxe_hdr.h
@@ -900,6 +900,41 @@ static inline void ieth_set_rkey(struct rxe_pkt_info *pkt, u32 rkey)
 		rxe_opcode[pkt->opcode].offset[RXE_IETH], rkey);
 }

+/******************************************************************************
+ * XRC Extended Transport Header
+ ******************************************************************************/
+struct rxe_xrceth {
+	__be32			srqn;
+};
+
+#define XRCETH_SRQN_MASK	(0x00ffffff)
+
+static inline u32 __xrceth_srqn(void *arg)
+{
+	struct rxe_xrceth *xrceth = arg;
+
+	return be32_to_cpu(xrceth->srqn);
+}
+
+static inline void __xrceth_set_srqn(void *arg, u32 srqn)
+{
+	struct rxe_xrceth *xrceth = arg;
+
+	xrceth->srqn = cpu_to_be32(srqn & XRCETH_SRQN_MASK);
+}
+
+static inline u32 xrceth_srqn(struct rxe_pkt_info *pkt)
+{
+	return __xrceth_srqn(pkt->hdr +
+		rxe_opcode[pkt->opcode].offset[RXE_XRCETH]);
+}
+
+static inline void xrceth_set_srqn(struct rxe_pkt_info *pkt, u32 srqn)
+{
+	__xrceth_set_srqn(pkt->hdr +
+		rxe_opcode[pkt->opcode].offset[RXE_XRCETH], srqn);
+}
+
 enum rxe_hdr_length {
 	RXE_BTH_BYTES = sizeof(struct rxe_bth),
 	RXE_DETH_BYTES = sizeof(struct rxe_deth),
@@ -909,6 +944,7 @@ enum rxe_hdr_length {
 	RXE_ATMACK_BYTES = sizeof(struct rxe_atmack),
 	RXE_ATMETH_BYTES = sizeof(struct rxe_atmeth),
 	RXE_IETH_BYTES = sizeof(struct rxe_ieth),
+	RXE_XRCETH_BYTES = sizeof(struct rxe_xrceth),
 	RXE_RDETH_BYTES = sizeof(struct rxe_rdeth),
 };
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
index 6b1a1f197c4d..4ae926a37ef8 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
@@ -15,51 +15,58 @@ struct rxe_wr_opcode_info rxe_wr_opcode_info[] = {
 	[IB_WR_RDMA_WRITE] = {
 		.name = "IB_WR_RDMA_WRITE",
 		.mask = {
-			[IB_QPT_RC] = WR_INLINE_MASK | WR_WRITE_MASK,
-			[IB_QPT_UC] = WR_INLINE_MASK | WR_WRITE_MASK,
+			[IB_QPT_RC]      = WR_INLINE_MASK | WR_WRITE_MASK,
+			[IB_QPT_UC]      = WR_INLINE_MASK | WR_WRITE_MASK,
+			[IB_QPT_XRC_INI] = WR_INLINE_MASK | WR_WRITE_MASK,
 		},
 	},
 	[IB_WR_RDMA_WRITE_WITH_IMM] = {
 		.name = "IB_WR_RDMA_WRITE_WITH_IMM",
 		.mask = {
-			[IB_QPT_RC] = WR_INLINE_MASK | WR_WRITE_MASK,
-			[IB_QPT_UC] = WR_INLINE_MASK | WR_WRITE_MASK,
+
[IB_QPT_RC] = WR_INLINE_MASK | WR_WRITE_MASK, + [IB_QPT_UC] = WR_INLINE_MASK | WR_WRITE_MASK, + [IB_QPT_XRC_INI] = WR_INLINE_MASK | WR_WRITE_MASK, }, }, [IB_WR_SEND] = { .name = "IB_WR_SEND", .mask = { - [IB_QPT_GSI] = WR_INLINE_MASK | WR_SEND_MASK, - [IB_QPT_RC] = WR_INLINE_MASK | WR_SEND_MASK, - [IB_QPT_UC] = WR_INLINE_MASK | WR_SEND_MASK, - [IB_QPT_UD] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_GSI] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_RC] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_UC] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_UD] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_XRC_INI] = WR_INLINE_MASK | WR_SEND_MASK, }, }, [IB_WR_SEND_WITH_IMM] = { .name = "IB_WR_SEND_WITH_IMM", .mask = { - [IB_QPT_GSI] = WR_INLINE_MASK | WR_SEND_MASK, - [IB_QPT_RC] = WR_INLINE_MASK | WR_SEND_MASK, - [IB_QPT_UC] = WR_INLINE_MASK | WR_SEND_MASK, - [IB_QPT_UD] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_GSI] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_RC] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_UC] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_UD] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_XRC_INI] = WR_INLINE_MASK | WR_SEND_MASK, }, }, [IB_WR_RDMA_READ] = { .name = "IB_WR_RDMA_READ", .mask = { - [IB_QPT_RC] = WR_READ_MASK, + [IB_QPT_RC] = WR_READ_MASK, + [IB_QPT_XRC_INI] = WR_READ_MASK, }, }, [IB_WR_ATOMIC_CMP_AND_SWP] = { .name = "IB_WR_ATOMIC_CMP_AND_SWP", .mask = { - [IB_QPT_RC] = WR_ATOMIC_MASK, + [IB_QPT_RC] = WR_ATOMIC_MASK, + [IB_QPT_XRC_INI] = WR_ATOMIC_MASK, }, }, [IB_WR_ATOMIC_FETCH_AND_ADD] = { .name = "IB_WR_ATOMIC_FETCH_AND_ADD", .mask = { - [IB_QPT_RC] = WR_ATOMIC_MASK, + [IB_QPT_RC] = WR_ATOMIC_MASK, + [IB_QPT_XRC_INI] = WR_ATOMIC_MASK, }, }, [IB_WR_LSO] = { @@ -71,34 +78,39 @@ struct rxe_wr_opcode_info rxe_wr_opcode_info[] = { [IB_WR_SEND_WITH_INV] = { .name = "IB_WR_SEND_WITH_INV", .mask = { - [IB_QPT_RC] = WR_INLINE_MASK | WR_SEND_MASK, - [IB_QPT_UC] = WR_INLINE_MASK | WR_SEND_MASK, - [IB_QPT_UD] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_RC] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_UC] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_UD] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_XRC_INI] = WR_INLINE_MASK | WR_SEND_MASK, }, }, [IB_WR_RDMA_READ_WITH_INV] = { .name = "IB_WR_RDMA_READ_WITH_INV", .mask = { - [IB_QPT_RC] = WR_READ_MASK, + [IB_QPT_RC] = WR_READ_MASK, + [IB_QPT_XRC_INI] = WR_READ_MASK, }, }, [IB_WR_LOCAL_INV] = { .name = "IB_WR_LOCAL_INV", .mask = { - [IB_QPT_RC] = WR_LOCAL_OP_MASK, + [IB_QPT_RC] = WR_LOCAL_OP_MASK, + [IB_QPT_XRC_INI] = WR_LOCAL_OP_MASK, }, }, [IB_WR_REG_MR] = { .name = "IB_WR_REG_MR", .mask = { - [IB_QPT_RC] = WR_LOCAL_OP_MASK, + [IB_QPT_RC] = WR_LOCAL_OP_MASK, + [IB_QPT_XRC_INI] = WR_LOCAL_OP_MASK, }, }, [IB_WR_BIND_MW] = { .name = "IB_WR_BIND_MW", .mask = { - [IB_QPT_RC] = WR_LOCAL_OP_MASK, - [IB_QPT_UC] = WR_LOCAL_OP_MASK, + [IB_QPT_RC] = WR_LOCAL_OP_MASK, + [IB_QPT_UC] = WR_LOCAL_OP_MASK, + [IB_QPT_XRC_INI] = WR_LOCAL_OP_MASK, }, }, }; @@ -918,6 +930,327 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { } }, + /* XRC */ + [IB_OPCODE_XRC_SEND_FIRST] = { + .name = "IB_OPCODE_XRC_SEND_FIRST", + .mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | + RXE_RWR_MASK | RXE_SEND_MASK | RXE_FIRST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_MIDDLE] = { + .name = "IB_OPCODE_XRC_SEND_MIDDLE", + .mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | + RXE_SEND_MASK | RXE_MIDDLE_MASK, + .length = 
RXE_BTH_BYTES + RXE_XRCETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_LAST] = { + .name = "IB_OPCODE_XRC_SEND_LAST", + .mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | + RXE_COMP_MASK | RXE_SEND_MASK | RXE_LAST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_LAST_WITH_IMMEDIATE] = { + .name = "IB_OPCODE_XRC_SEND_LAST_WITH_IMMEDIATE", + .mask = RXE_XRCETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_COMP_MASK | RXE_SEND_MASK | + RXE_LAST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IMMDT_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_IMMDT] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_IMMDT_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_ONLY] = { + .name = "IB_OPCODE_XRC_SEND_ONLY", + .mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | + RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK | + RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_ONLY_WITH_IMMEDIATE] = { + .name = "IB_OPCODE_XRC_SEND_ONLY_WITH_IMMEDIATE", + .mask = RXE_XRCETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | + RXE_SEND_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IMMDT_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_IMMDT] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_IMMDT_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_WRITE_FIRST] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_FIRST", + .mask = RXE_XRCETH_MASK | RXE_RETH_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_FIRST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_RETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_RETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_RETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_WRITE_MIDDLE] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_MIDDLE", + .mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | + RXE_WRITE_MASK | RXE_MIDDLE_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_WRITE_LAST] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_LAST", + .mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | + RXE_WRITE_MASK | RXE_LAST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_WRITE_LAST_WITH_IMMEDIATE] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_LAST_WITH_IMMEDIATE", + .mask = RXE_XRCETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_COMP_MASK | + RXE_RWR_MASK | RXE_LAST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IMMDT_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_IMMDT] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_IMMDT_BYTES, + } + 
}, + [IB_OPCODE_XRC_RDMA_WRITE_ONLY] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_ONLY", + .mask = RXE_XRCETH_MASK | RXE_RETH_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_RETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_RETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_RETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_WRITE_ONLY_WITH_IMMEDIATE] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_ONLY_WITH_IMMEDIATE", + .mask = RXE_XRCETH_MASK | RXE_RETH_MASK | RXE_IMMDT_MASK | + RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | + RXE_COMP_MASK | RXE_RWR_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IMMDT_BYTES + + RXE_RETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_RETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_IMMDT] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_RETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_RETH_BYTES + + RXE_IMMDT_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_READ_REQUEST] = { + .name = "IB_OPCODE_XRC_RDMA_READ_REQUEST", + .mask = RXE_XRCETH_MASK | RXE_RETH_MASK | RXE_REQ_MASK | + RXE_READ_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_RETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_RETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_RETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_READ_RESPONSE_FIRST] = { + .name = "IB_OPCODE_XRC_RDMA_READ_RESPONSE_FIRST", + .mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK | + RXE_FIRST_MASK, + .length = RXE_BTH_BYTES + RXE_AETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_AETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_AETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_READ_RESPONSE_MIDDLE] = { + .name = "IB_OPCODE_XRC_RDMA_READ_RESPONSE_MIDDLE", + .mask = RXE_PAYLOAD_MASK | RXE_ACK_MASK | RXE_MIDDLE_MASK, + .length = RXE_BTH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_PAYLOAD] = RXE_BTH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_READ_RESPONSE_LAST] = { + .name = "IB_OPCODE_XRC_RDMA_READ_RESPONSE_LAST", + .mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK | + RXE_LAST_MASK, + .length = RXE_BTH_BYTES + RXE_AETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_AETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_AETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_READ_RESPONSE_ONLY] = { + .name = "IB_OPCODE_XRC_RDMA_READ_RESPONSE_ONLY", + .mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK | + RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_AETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_AETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_AETH_BYTES, + } + }, + [IB_OPCODE_XRC_ACKNOWLEDGE] = { + .name = "IB_OPCODE_XRC_ACKNOWLEDGE", + .mask = RXE_AETH_MASK | RXE_ACK_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_AETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_AETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_AETH_BYTES, + } + }, + [IB_OPCODE_XRC_ATOMIC_ACKNOWLEDGE] = { + .name = "IB_OPCODE_XRC_ATOMIC_ACKNOWLEDGE", + .mask = RXE_AETH_MASK | RXE_ATMACK_MASK | RXE_ACK_MASK | + RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_ATMACK_BYTES + RXE_AETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_AETH] = RXE_BTH_BYTES, + [RXE_ATMACK] = RXE_BTH_BYTES + + RXE_AETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_AETH_BYTES + + RXE_ATMACK_BYTES, + } + }, + 
[IB_OPCODE_XRC_COMPARE_SWAP] = { + .name = "IB_OPCODE_XRC_COMPARE_SWAP", + .mask = RXE_XRCETH_MASK | RXE_ATMETH_MASK | RXE_REQ_MASK | + RXE_ATOMIC_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_ATMETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_ATMETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_ATMETH_BYTES, + } + }, + [IB_OPCODE_XRC_FETCH_ADD] = { + .name = "IB_OPCODE_XRC_FETCH_ADD", + .mask = RXE_XRCETH_MASK | RXE_ATMETH_MASK | RXE_REQ_MASK | + RXE_ATOMIC_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_ATMETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_ATMETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_ATMETH_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_LAST_WITH_INVALIDATE] = { + .name = "IB_OPCODE_XRC_SEND_LAST_WITH_INVALIDATE", + .mask = RXE_XRCETH_MASK | RXE_IETH_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_COMP_MASK | RXE_SEND_MASK | + RXE_LAST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_IETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_IETH_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_ONLY_WITH_INVALIDATE] = { + .name = "IB_OPCODE_XRC_SEND_ONLY_WITH_INVALIDATE", + .mask = RXE_XRCETH_MASK | RXE_IETH_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | + RXE_SEND_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_IETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_IETH_BYTES, + } + }, }; static int next_opcode_rc(struct rxe_qp *qp, u32 opcode, int fits) diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.h b/drivers/infiniband/sw/rxe/rxe_opcode.h index d2b6a8232e92..5528a47f0266 100644 --- a/drivers/infiniband/sw/rxe/rxe_opcode.h +++ b/drivers/infiniband/sw/rxe/rxe_opcode.h @@ -30,7 +30,7 @@ enum rxe_wr_mask { struct rxe_wr_opcode_info { char *name; - enum rxe_wr_mask mask[WR_MAX_QPT]; + enum rxe_wr_mask mask[IB_QPT_MAX]; }; extern struct rxe_wr_opcode_info rxe_wr_opcode_info[]; @@ -44,6 +44,7 @@ enum rxe_hdr_type { RXE_ATMETH, RXE_ATMACK, RXE_IETH, + RXE_XRCETH, RXE_RDETH, RXE_DETH, RXE_IMMDT, @@ -61,6 +62,7 @@ enum rxe_hdr_mask { RXE_ATMETH_MASK = BIT(RXE_ATMETH), RXE_ATMACK_MASK = BIT(RXE_ATMACK), RXE_IETH_MASK = BIT(RXE_IETH), + RXE_XRCETH_MASK = BIT(RXE_XRCETH), RXE_RDETH_MASK = BIT(RXE_RDETH), RXE_DETH_MASK = BIT(RXE_DETH), RXE_PAYLOAD_MASK = BIT(RXE_PAYLOAD),

From patchwork Sat Sep 17 03:10:24 2022
From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next 05/13] RDMA/rxe: Add xrc opcodes to next_opcode() Date: Fri, 16 Sep 2022 22:10:24 -0500 Message-Id: <20220917031028.21187-5-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220917031028.21187-1-rpearsonhpe@gmail.com> References: <20220917031028.21187-1-rpearsonhpe@gmail.com>

Extend next_opcode() to support xrc operations.
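For reference, the selection logic added below follows the usual IBA fragmentation rule: a message whose remaining payload fits in one MTU is sent with an ONLY opcode, otherwise the series runs FIRST, MIDDLE, ..., LAST, with the LAST and ONLY variants carrying any immediate or invalidate data. A minimal sketch of that rule (names here are illustrative only, not part of the patch):

	/* illustrative sketch; the real logic is next_opcode_xrc() below */
	enum frag_pos { POS_FIRST, POS_MIDDLE, POS_LAST, POS_ONLY };

	static enum frag_pos pick_pos(int in_series, int fits)
	{
		/* in_series: a FIRST or MIDDLE packet was already sent */
		if (in_series)
			return fits ? POS_LAST : POS_MIDDLE;
		return fits ? POS_ONLY : POS_FIRST;
	}

Each case in next_opcode_xrc() is this rule specialized to one work request opcode, with qp->req.opcode supplying the in_series test and fits meaning wqe->dma.resid <= qp->mtu.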
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_opcode.c | 88 ++++++++++++++++++++++++++ 1 file changed, 88 insertions(+) diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c index 4ae926a37ef8..c2bac0ce444a 100644 --- a/drivers/infiniband/sw/rxe/rxe_opcode.c +++ b/drivers/infiniband/sw/rxe/rxe_opcode.c @@ -1376,6 +1376,91 @@ static int next_opcode_uc(struct rxe_qp *qp, u32 opcode, int fits) return -EINVAL; } +static int next_opcode_xrc(struct rxe_qp *qp, u32 wr_opcode, int fits) +{ + switch (wr_opcode) { + case IB_WR_RDMA_WRITE: + if (qp->req.opcode == IB_OPCODE_XRC_RDMA_WRITE_FIRST || + qp->req.opcode == IB_OPCODE_XRC_RDMA_WRITE_MIDDLE) + return fits ? + IB_OPCODE_XRC_RDMA_WRITE_LAST : + IB_OPCODE_XRC_RDMA_WRITE_MIDDLE; + else + return fits ? + IB_OPCODE_XRC_RDMA_WRITE_ONLY : + IB_OPCODE_XRC_RDMA_WRITE_FIRST; + + case IB_WR_RDMA_WRITE_WITH_IMM: + if (qp->req.opcode == IB_OPCODE_XRC_RDMA_WRITE_FIRST || + qp->req.opcode == IB_OPCODE_XRC_RDMA_WRITE_MIDDLE) + return fits ? + IB_OPCODE_XRC_RDMA_WRITE_LAST_WITH_IMMEDIATE : + IB_OPCODE_XRC_RDMA_WRITE_MIDDLE; + else + return fits ? + IB_OPCODE_XRC_RDMA_WRITE_ONLY_WITH_IMMEDIATE : + IB_OPCODE_XRC_RDMA_WRITE_FIRST; + + case IB_WR_SEND: + if (qp->req.opcode == IB_OPCODE_XRC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_XRC_SEND_MIDDLE) + return fits ? + IB_OPCODE_XRC_SEND_LAST : + IB_OPCODE_XRC_SEND_MIDDLE; + else + return fits ? + IB_OPCODE_XRC_SEND_ONLY : + IB_OPCODE_XRC_SEND_FIRST; + + case IB_WR_SEND_WITH_IMM: + if (qp->req.opcode == IB_OPCODE_XRC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_XRC_SEND_MIDDLE) + return fits ? + IB_OPCODE_XRC_SEND_LAST_WITH_IMMEDIATE : + IB_OPCODE_XRC_SEND_MIDDLE; + else + return fits ? + IB_OPCODE_XRC_SEND_ONLY_WITH_IMMEDIATE : + IB_OPCODE_XRC_SEND_FIRST; + + case IB_WR_RDMA_READ: + return IB_OPCODE_XRC_RDMA_READ_REQUEST; + + case IB_WR_RDMA_READ_WITH_INV: + return IB_OPCODE_XRC_RDMA_READ_REQUEST; + + case IB_WR_ATOMIC_CMP_AND_SWP: + return IB_OPCODE_XRC_COMPARE_SWAP; + + case IB_WR_MASKED_ATOMIC_CMP_AND_SWP: + return -EOPNOTSUPP; + + case IB_WR_ATOMIC_FETCH_AND_ADD: + return IB_OPCODE_XRC_FETCH_ADD; + + case IB_WR_MASKED_ATOMIC_FETCH_AND_ADD: + return -EOPNOTSUPP; + + case IB_WR_SEND_WITH_INV: + if (qp->req.opcode == IB_OPCODE_XRC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_XRC_SEND_MIDDLE) + return fits ? + IB_OPCODE_XRC_SEND_LAST_WITH_INVALIDATE : + IB_OPCODE_XRC_SEND_MIDDLE; + else + return fits ? 
+ IB_OPCODE_XRC_SEND_ONLY_WITH_INVALIDATE : + IB_OPCODE_XRC_SEND_FIRST; + + case IB_WR_LOCAL_INV: + case IB_WR_REG_MR: + case IB_WR_BIND_MW: + return wr_opcode; + } + + return -EINVAL; +} + int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, u32 opcode) { int fits = (wqe->dma.resid <= qp->mtu); @@ -1387,6 +1472,9 @@ int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, u32 opcode) case IB_QPT_UC: return next_opcode_uc(qp, opcode, fits); + case IB_QPT_XRC_INI: + return next_opcode_xrc(qp, opcode, fits); + case IB_QPT_UD: case IB_QPT_GSI: switch (opcode) {

From patchwork Sat Sep 17 03:10:25 2022
From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next 06/13] RDMA/rxe: Implement open_xrcd and close_xrcd Date: Fri, 16 Sep 2022 22:10:25 -0500 Message-Id: <20220917031028.21187-6-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220917031028.21187-1-rpearsonhpe@gmail.com> References: <20220917031028.21187-1-rpearsonhpe@gmail.com>

Add rxe_alloc_xrcd() and rxe_dealloc_xrcd() and add xrcd objects to the rxe object pools to implement the open_xrcd() and close_xrcd() verbs.

Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe.c | 2 ++ drivers/infiniband/sw/rxe/rxe_param.h | 3 +++ drivers/infiniband/sw/rxe/rxe_pool.c | 8 ++++++++ drivers/infiniband/sw/rxe/rxe_pool.h | 1 + drivers/infiniband/sw/rxe/rxe_verbs.c | 23 +++++++++++++++++++++++ drivers/infiniband/sw/rxe/rxe_verbs.h | 11 +++++++++++ 6 files changed, 48 insertions(+) diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index 51daac5c4feb..acd22980836e 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -23,6 +23,7 @@ void rxe_dealloc(struct ib_device *ib_dev) rxe_pool_cleanup(&rxe->uc_pool); rxe_pool_cleanup(&rxe->pd_pool); rxe_pool_cleanup(&rxe->ah_pool); + rxe_pool_cleanup(&rxe->xrcd_pool); rxe_pool_cleanup(&rxe->srq_pool); rxe_pool_cleanup(&rxe->qp_pool); rxe_pool_cleanup(&rxe->cq_pool); @@ -120,6 +121,7 @@ static void rxe_init_pools(struct rxe_dev *rxe) rxe_pool_init(rxe, &rxe->uc_pool, RXE_TYPE_UC); rxe_pool_init(rxe, &rxe->pd_pool, RXE_TYPE_PD); rxe_pool_init(rxe, &rxe->ah_pool, RXE_TYPE_AH); + rxe_pool_init(rxe, &rxe->xrcd_pool, RXE_TYPE_XRCD); rxe_pool_init(rxe, &rxe->srq_pool, RXE_TYPE_SRQ); rxe_pool_init(rxe, &rxe->qp_pool, RXE_TYPE_QP); rxe_pool_init(rxe, &rxe->cq_pool, RXE_TYPE_CQ); diff --git a/drivers/infiniband/sw/rxe/rxe_param.h b/drivers/infiniband/sw/rxe/rxe_param.h index 86c7a8bf3cbb..fa4bf177e123 100644 --- a/drivers/infiniband/sw/rxe/rxe_param.h +++ b/drivers/infiniband/sw/rxe/rxe_param.h @@ -86,6 +86,9 @@ enum rxe_device_param { RXE_MAX_QP_INDEX = DEFAULT_MAX_VALUE, RXE_MAX_QP = DEFAULT_MAX_VALUE - RXE_MIN_QP_INDEX, + RXE_MIN_XRCD_INDEX = 1, + RXE_MAX_XRCD_INDEX = 128, + RXE_MAX_XRCD = 128, RXE_MIN_SRQ_INDEX = 0x00020001, RXE_MAX_SRQ_INDEX = DEFAULT_MAX_VALUE, RXE_MAX_SRQ = DEFAULT_MAX_VALUE - RXE_MIN_SRQ_INDEX, diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index f50620f5a0a1..b54453b68169 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -42,6 +42,14 @@ static const struct rxe_type_info { .max_index = RXE_MAX_AH_INDEX, .max_elem = RXE_MAX_AH_INDEX - RXE_MIN_AH_INDEX + 1, }, + [RXE_TYPE_XRCD] = { + .name = "xrcd", + .size = sizeof(struct rxe_xrcd), + .elem_offset = offsetof(struct rxe_xrcd, elem), + .min_index = RXE_MIN_XRCD_INDEX, + .max_index = RXE_MAX_XRCD_INDEX, + .max_elem = RXE_MAX_XRCD_INDEX - RXE_MIN_XRCD_INDEX + 1, + }, [RXE_TYPE_SRQ] = { .name = "srq", .size = sizeof(struct rxe_srq), diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index 9d83cb32092f..35ac0746a4b8 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -11,6 +11,7 @@ enum rxe_elem_type { RXE_TYPE_UC, RXE_TYPE_PD, RXE_TYPE_AH, + RXE_TYPE_XRCD, RXE_TYPE_SRQ, RXE_TYPE_QP, RXE_TYPE_CQ, diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 9ebe9decad34..4a5da079bf11 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -281,6 +281,26 @@ static int post_one_recv(struct rxe_rq *rq, const struct ib_recv_wr *ibwr) return err; } +static int rxe_alloc_xrcd(struct ib_xrcd *ibxrcd, struct ib_udata *udata) +{ + struct rxe_dev *rxe = to_rdev(ibxrcd->device); + struct rxe_xrcd *xrcd = to_rxrcd(ibxrcd); + int err; + + err = rxe_add_to_pool(&rxe->xrcd_pool, xrcd); + + return err; +} + +static int rxe_dealloc_xrcd(struct ib_xrcd *ibxrcd, struct ib_udata *udata) +{ + struct rxe_xrcd *xrcd = to_rxrcd(ibxrcd); + + rxe_cleanup(xrcd); + + return 0; +} + static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init, struct ib_udata *udata) { @@ -1055,6 +1075,7 @@ static const struct ib_device_ops rxe_dev_ops = { .alloc_mw = rxe_alloc_mw, .alloc_pd = rxe_alloc_pd, .alloc_ucontext = rxe_alloc_ucontext, + .alloc_xrcd = rxe_alloc_xrcd, .attach_mcast = rxe_attach_mcast, .create_ah = rxe_create_ah, .create_cq = rxe_create_cq, @@ -1065,6 +1086,7 @@ static const struct ib_device_ops rxe_dev_ops = { .dealloc_mw = rxe_dealloc_mw, .dealloc_pd = rxe_dealloc_pd, .dealloc_ucontext = rxe_dealloc_ucontext, + .dealloc_xrcd = rxe_dealloc_xrcd, .dereg_mr = rxe_dereg_mr, .destroy_ah = rxe_destroy_ah, .destroy_cq = rxe_destroy_cq, @@ -1103,6 +1125,7 @@ static const struct ib_device_ops rxe_dev_ops = { INIT_RDMA_OBJ_SIZE(ib_cq, rxe_cq, ibcq), INIT_RDMA_OBJ_SIZE(ib_pd, rxe_pd, ibpd), INIT_RDMA_OBJ_SIZE(ib_qp, rxe_qp, ibqp), + INIT_RDMA_OBJ_SIZE(ib_xrcd, rxe_xrcd, ibxrcd), INIT_RDMA_OBJ_SIZE(ib_srq, rxe_srq, ibsrq), INIT_RDMA_OBJ_SIZE(ib_ucontext, rxe_ucontext, ibuc), INIT_RDMA_OBJ_SIZE(ib_mw, rxe_mw, ibmw), diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index a51819d0c345..6c4cfb802dd4 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -93,6 +93,11 @@ struct rxe_rq { struct rxe_queue *queue; }; +struct rxe_xrcd { + struct ib_xrcd ibxrcd; + struct rxe_pool_elem elem; +}; + struct rxe_srq { struct ib_srq ibsrq; struct rxe_pool_elem elem; @@ -383,6 +388,7 @@ struct rxe_dev { struct rxe_pool uc_pool; struct rxe_pool pd_pool; struct rxe_pool ah_pool; + struct rxe_pool xrcd_pool; struct rxe_pool srq_pool; struct rxe_pool qp_pool; struct rxe_pool cq_pool; @@ -432,6 +438,11 @@ static inline struct rxe_ah *to_rah(struct ib_ah *ah) return ah ? container_of(ah, struct rxe_ah, ibah) : NULL; } +static inline struct rxe_xrcd *to_rxrcd(struct ib_xrcd *ibxrcd) +{ + return ibxrcd ? container_of(ibxrcd, struct rxe_xrcd, ibxrcd) : NULL; +} + static inline struct rxe_srq *to_rsrq(struct ib_srq *srq) { return srq ? 
container_of(srq, struct rxe_srq, ibsrq) : NULL;

From patchwork Sat Sep 17 03:10:26 2022
From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next 07/13] RDMA/rxe: Extend srq verbs to support xrcd Date: Fri, 16 Sep 2022 22:10:26 -0500 Message-Id: <20220917031028.21187-7-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220917031028.21187-1-rpearsonhpe@gmail.com> References: <20220917031028.21187-1-rpearsonhpe@gmail.com>

Extend the srq create verb to support xrcd.

Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_srq.c | 131 ++++++++++++++------------ drivers/infiniband/sw/rxe/rxe_verbs.c | 13 +-- drivers/infiniband/sw/rxe/rxe_verbs.h | 8 +- include/uapi/rdma/rdma_user_rxe.h | 4 +- 4 files changed, 83 insertions(+), 73 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c index 02b39498c370..fcd1a58c3900 100644 --- a/drivers/infiniband/sw/rxe/rxe_srq.c +++ b/drivers/infiniband/sw/rxe/rxe_srq.c @@ -11,61 +11,85 @@ int rxe_srq_chk_init(struct rxe_dev *rxe, struct ib_srq_init_attr *init) { struct ib_srq_attr *attr = &init->attr; + int err = -EINVAL; - if (attr->max_wr > rxe->attr.max_srq_wr) { - pr_warn("max_wr(%d) > max_srq_wr(%d)\n", - attr->max_wr, rxe->attr.max_srq_wr); - goto err1; + if (init->srq_type == IB_SRQT_TM) { + err = -EOPNOTSUPP; + goto err_out; } - if (attr->max_wr <= 0) { - pr_warn("max_wr(%d) <= 0\n", attr->max_wr); - goto err1; + if (init->srq_type == IB_SRQT_XRC) { + if (!init->ext.cq || !init->ext.xrc.xrcd) + goto err_out; } + if (attr->max_wr > rxe->attr.max_srq_wr) + goto err_out; + + if (attr->max_wr <= 0) + goto err_out; + if (attr->max_wr < RXE_MIN_SRQ_WR) attr->max_wr = RXE_MIN_SRQ_WR; - if (attr->max_sge > rxe->attr.max_srq_sge) { - pr_warn("max_sge(%d) > max_srq_sge(%d)\n", - attr->max_sge, rxe->attr.max_srq_sge); - goto err1; - } + if (attr->max_sge > rxe->attr.max_srq_sge) + goto err_out; if (attr->max_sge < RXE_MIN_SRQ_SGE) attr->max_sge = RXE_MIN_SRQ_SGE; return 0; -err1: - return -EINVAL; +err_out: + pr_debug("%s: failed err = %d\n", __func__, err); + return err; } int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq, struct ib_srq_init_attr *init, struct ib_udata *udata, struct rxe_create_srq_resp __user *uresp) { - int err; - int srq_wqe_size; + struct rxe_pd *pd = to_rpd(srq->ibsrq.pd); + struct rxe_cq *cq; + struct rxe_xrcd *xrcd; struct rxe_queue *q; - enum queue_type type; + int srq_wqe_size; + int err; + + rxe_get(pd); + srq->pd = pd; srq->ibsrq.event_handler = init->event_handler; srq->ibsrq.srq_context = init->srq_context; srq->limit = init->attr.srq_limit; - srq->srq_num = srq->elem.index; srq->rq.max_wr = init->attr.max_wr; srq->rq.max_sge = init->attr.max_sge; - srq_wqe_size = rcv_wqe_size(srq->rq.max_sge); + if (init->srq_type == IB_SRQT_XRC) { + cq = to_rcq(init->ext.cq); + if (cq) { + rxe_get(cq); + srq->cq = to_rcq(init->ext.cq); + } else { + return -EINVAL; + } + xrcd = to_rxrcd(init->ext.xrc.xrcd); + if (xrcd) { + rxe_get(xrcd); + srq->xrcd = to_rxrcd(init->ext.xrc.xrcd); + } + srq->ibsrq.ext.xrc.srq_num = srq->elem.index; + } spin_lock_init(&srq->rq.producer_lock); spin_lock_init(&srq->rq.consumer_lock); - type =
QUEUE_TYPE_FROM_CLIENT; - q = rxe_queue_init(rxe, &srq->rq.max_wr, srq_wqe_size, type); + srq_wqe_size = rcv_wqe_size(srq->rq.max_sge); + q = rxe_queue_init(rxe, &srq->rq.max_wr, srq_wqe_size, + QUEUE_TYPE_FROM_CLIENT); if (!q) { - pr_warn("unable to allocate queue for srq\n"); + pr_debug("%s: srq#%d: unable to allocate queue\n", + __func__, srq->elem.index); return -ENOMEM; } @@ -79,66 +103,45 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq, return err; } - if (uresp) { - if (copy_to_user(&uresp->srq_num, &srq->srq_num, - sizeof(uresp->srq_num))) { - rxe_queue_cleanup(q); - return -EFAULT; - } - } - return 0; } int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq, struct ib_srq_attr *attr, enum ib_srq_attr_mask mask) { - if (srq->error) { - pr_warn("srq in error state\n"); - goto err1; - } + int err = -EINVAL; + + if (srq->error) + goto err_out; if (mask & IB_SRQ_MAX_WR) { - if (attr->max_wr > rxe->attr.max_srq_wr) { - pr_warn("max_wr(%d) > max_srq_wr(%d)\n", - attr->max_wr, rxe->attr.max_srq_wr); - goto err1; - } + if (attr->max_wr > rxe->attr.max_srq_wr) + goto err_out; - if (attr->max_wr <= 0) { - pr_warn("max_wr(%d) <= 0\n", attr->max_wr); - goto err1; - } + if (attr->max_wr <= 0) + goto err_out; - if (srq->limit && (attr->max_wr < srq->limit)) { - pr_warn("max_wr (%d) < srq->limit (%d)\n", - attr->max_wr, srq->limit); - goto err1; - } + if (srq->limit && (attr->max_wr < srq->limit)) + goto err_out; if (attr->max_wr < RXE_MIN_SRQ_WR) attr->max_wr = RXE_MIN_SRQ_WR; } if (mask & IB_SRQ_LIMIT) { - if (attr->srq_limit > rxe->attr.max_srq_wr) { - pr_warn("srq_limit(%d) > max_srq_wr(%d)\n", - attr->srq_limit, rxe->attr.max_srq_wr); - goto err1; - } + if (attr->srq_limit > rxe->attr.max_srq_wr) + goto err_out; - if (attr->srq_limit > srq->rq.queue->buf->index_mask) { - pr_warn("srq_limit (%d) > cur limit(%d)\n", - attr->srq_limit, - srq->rq.queue->buf->index_mask); - goto err1; - } + if (attr->srq_limit > srq->rq.queue->buf->index_mask) + goto err_out; } return 0; -err1: - return -EINVAL; +err_out: + pr_debug("%s: srq#%d: failed err = %d\n", __func__, + srq->elem.index, err); + return err; } int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq, @@ -182,6 +185,12 @@ void rxe_srq_cleanup(struct rxe_pool_elem *elem) if (srq->pd) rxe_put(srq->pd); + if (srq->cq) + rxe_put(srq->cq); + + if (srq->xrcd) + rxe_put(srq->xrcd); + if (srq->rq.queue) rxe_queue_cleanup(srq->rq.queue); } diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 4a5da079bf11..ef86f0c5890e 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -306,7 +306,6 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init, { int err; struct rxe_dev *rxe = to_rdev(ibsrq->device); - struct rxe_pd *pd = to_rpd(ibsrq->pd); struct rxe_srq *srq = to_rsrq(ibsrq); struct rxe_create_srq_resp __user *uresp = NULL; @@ -316,9 +315,6 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init, uresp = udata->outbuf; } - if (init->srq_type != IB_SRQT_BASIC) - return -EOPNOTSUPP; - err = rxe_srq_chk_init(rxe, init); if (err) return err; @@ -327,13 +323,11 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init, if (err) return err; - rxe_get(pd); - srq->pd = pd; - err = rxe_srq_from_init(rxe, srq, init, udata, uresp); if (err) goto err_cleanup; + rxe_finalize(srq); return 0; err_cleanup: @@ -367,6 +361,7 @@ static int rxe_modify_srq(struct ib_srq *ibsrq, struct 
ib_srq_attr *attr, err = rxe_srq_from_attr(rxe, srq, attr, mask, &ucmd, udata); if (err) return err; + return 0; } @@ -380,6 +375,7 @@ static int rxe_query_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr) attr->max_wr = srq->rq.queue->buf->index_mask; attr->max_sge = srq->rq.max_sge; attr->srq_limit = srq->limit; + return 0; } @@ -546,7 +542,6 @@ static void init_send_wr(struct rxe_qp *qp, struct rxe_send_wr *wr, const struct ib_send_wr *ibwr) { wr->wr_id = ibwr->wr_id; - wr->num_sge = ibwr->num_sge; wr->opcode = ibwr->opcode; wr->send_flags = ibwr->send_flags; @@ -628,6 +623,8 @@ static void init_send_wqe(struct rxe_qp *qp, const struct ib_send_wr *ibwr, return; } + wqe->dma.num_sge = ibwr->num_sge; + if (unlikely(ibwr->send_flags & IB_SEND_INLINE)) copy_inline_data_to_wqe(wqe, ibwr); else diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 6c4cfb802dd4..7dab7fa3ba6c 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -102,13 +102,19 @@ struct rxe_srq { struct ib_srq ibsrq; struct rxe_pool_elem elem; struct rxe_pd *pd; + struct rxe_xrcd *xrcd; /* xrc only */ + struct rxe_cq *cq; /* xrc only */ struct rxe_rq rq; - u32 srq_num; int limit; int error; }; +static inline u32 srq_num(struct rxe_srq *srq) +{ + return srq->ibsrq.ext.xrc.srq_num; +} + enum rxe_qp_state { QP_STATE_RESET, QP_STATE_INIT, diff --git a/include/uapi/rdma/rdma_user_rxe.h b/include/uapi/rdma/rdma_user_rxe.h index f09c5c9e3dd5..514a1b6976fe 100644 --- a/include/uapi/rdma/rdma_user_rxe.h +++ b/include/uapi/rdma/rdma_user_rxe.h @@ -74,7 +74,7 @@ struct rxe_av { struct rxe_send_wr { __aligned_u64 wr_id; - __u32 num_sge; + __u32 srq_num; /* xrc only */ __u32 opcode; __u32 send_flags; union { @@ -191,8 +191,6 @@ struct rxe_create_qp_resp { struct rxe_create_srq_resp { struct mminfo mi; - __u32 srq_num; - __u32 reserved; }; struct rxe_modify_srq_cmd {

From patchwork Sat Sep 17 03:10:27 2022
From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next 08/13] RDMA/rxe: Extend rxe_qp.c to support xrc qps Date: Fri, 16 Sep 2022 22:10:27 -0500 Message-Id: <20220917031028.21187-8-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220917031028.21187-1-rpearsonhpe@gmail.com> References: <20220917031028.21187-1-rpearsonhpe@gmail.com>

Extend code in rxe_qp.c to support xrc qp types.
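The two xrc qp types split a connected qp in half, which is why the init paths below diverge: an XRC_INI (initiator) qp has only a requester side, while an XRC_TGT (target) qp has only a responder side and delivers received messages into xrc srqs. As a rough userspace sketch of what these qps look like through libibverbs (ctx, pd, scq and xrcd are assumed to already exist; capacities and error handling are illustrative only):

	#include <infiniband/verbs.h>

	/* initiator: send queue and send cq only, no receive resources */
	struct ibv_qp_init_attr_ex ini_attr = {
		.qp_type = IBV_QPT_XRC_SEND,
		.send_cq = scq,
		.pd = pd,
		.cap = { .max_send_wr = 16, .max_send_sge = 1 },
		.comp_mask = IBV_QP_INIT_ATTR_PD,
	};
	struct ibv_qp *ini_qp = ibv_create_qp_ex(ctx, &ini_attr);

	/* target: no send or receive queue, just the xrc domain */
	struct ibv_qp_init_attr_ex tgt_attr = {
		.qp_type = IBV_QPT_XRC_RECV,
		.xrcd = xrcd,		/* from ibv_open_xrcd() */
		.comp_mask = IBV_QP_INIT_ATTR_XRCD,
	};
	struct ibv_qp *tgt_qp = ibv_create_qp_ex(ctx, &tgt_attr);

This matches the new checks in rxe_qp_chk_init() below, which require only a send_cq for XRC_INI and only an xrcd for XRC_TGT.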
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_av.c | 3 +- drivers/infiniband/sw/rxe/rxe_loc.h | 7 +- drivers/infiniband/sw/rxe/rxe_qp.c | 307 +++++++++++++++----------- drivers/infiniband/sw/rxe/rxe_verbs.c | 22 +- drivers/infiniband/sw/rxe/rxe_verbs.h | 1 + 5 files changed, 200 insertions(+), 140 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_av.c b/drivers/infiniband/sw/rxe/rxe_av.c index 3b05314ca739..c8f3ec53aa79 100644 --- a/drivers/infiniband/sw/rxe/rxe_av.c +++ b/drivers/infiniband/sw/rxe/rxe_av.c @@ -110,7 +110,8 @@ struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt, struct rxe_ah **ahp) if (!pkt || !pkt->qp) return NULL; - if (qp_type(pkt->qp) == IB_QPT_RC || qp_type(pkt->qp) == IB_QPT_UC) + if (qp_type(pkt->qp) == IB_QPT_RC || qp_type(pkt->qp) == IB_QPT_UC || + qp_type(pkt->qp) == IB_QPT_XRC_INI) return &pkt->qp->pri_av; if (!pkt->wqe) diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index 5526d83697c7..c6fb93a749ad 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -103,11 +103,12 @@ const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num); int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, u32 opcode); /* rxe_qp.c */ -int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init); -int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd, +int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp *ibqp, + struct ib_qp_init_attr *init); +int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_init_attr *init, struct rxe_create_qp_resp __user *uresp, - struct ib_pd *ibpd, struct ib_udata *udata); + struct ib_udata *udata); int rxe_qp_to_init(struct rxe_qp *qp, struct ib_qp_init_attr *init); int rxe_qp_chk_attr(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_attr *attr, int mask); diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c index 1dcbeacb3122..6cbc842b8cbb 100644 --- a/drivers/infiniband/sw/rxe/rxe_qp.c +++ b/drivers/infiniband/sw/rxe/rxe_qp.c @@ -56,30 +56,42 @@ static int rxe_qp_chk_cap(struct rxe_dev *rxe, struct ib_qp_cap *cap, return -EINVAL; } -int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init) +int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp *ibqp, + struct ib_qp_init_attr *init) { + struct ib_pd *ibpd = ibqp->pd; struct ib_qp_cap *cap = &init->cap; struct rxe_port *port; int port_num = init->port_num; + if (init->create_flags) + return -EOPNOTSUPP; + switch (init->qp_type) { case IB_QPT_GSI: case IB_QPT_RC: case IB_QPT_UC: case IB_QPT_UD: + if (!ibpd || !init->recv_cq || !init->send_cq) + return -EINVAL; + break; + case IB_QPT_XRC_INI: + if (!init->send_cq) + return -EINVAL; + break; + case IB_QPT_XRC_TGT: + if (!init->xrcd) + return -EINVAL; break; default: return -EOPNOTSUPP; } - if (!init->recv_cq || !init->send_cq) { - pr_warn("missing cq\n"); - goto err1; + if (init->qp_type != IB_QPT_XRC_TGT) { + if (rxe_qp_chk_cap(rxe, cap, !!(init->srq || init->xrcd))) + goto err1; } - if (rxe_qp_chk_cap(rxe, cap, !!init->srq)) - goto err1; - if (init->qp_type == IB_QPT_GSI) { if (!rdma_is_port_valid(&rxe->ib_dev, port_num)) { pr_warn("invalid port = %d\n", port_num); @@ -148,49 +160,83 @@ static void cleanup_rd_atomic_resources(struct rxe_qp *qp) static void rxe_qp_init_misc(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_init_attr *init) { - struct rxe_port *port; - u32 qpn; - + qp->ibqp.qp_type = init->qp_type; qp->sq_sig_type = 
init->sq_sig_type; qp->attr.path_mtu = 1; qp->mtu = ib_mtu_enum_to_int(qp->attr.path_mtu); - qpn = qp->elem.index; - port = &rxe->port; - switch (init->qp_type) { case IB_QPT_GSI: qp->ibqp.qp_num = 1; - port->qp_gsi_index = qpn; + rxe->port.qp_gsi_index = qp->elem.index; qp->attr.port_num = init->port_num; break; default: - qp->ibqp.qp_num = qpn; + qp->ibqp.qp_num = qp->elem.index; break; } spin_lock_init(&qp->state_lock); - spin_lock_init(&qp->req.task.state_lock); - spin_lock_init(&qp->resp.task.state_lock); - spin_lock_init(&qp->comp.task.state_lock); - - spin_lock_init(&qp->sq.sq_lock); - spin_lock_init(&qp->rq.producer_lock); - spin_lock_init(&qp->rq.consumer_lock); - atomic_set(&qp->ssn, 0); atomic_set(&qp->skb_out, 0); } +static int rxe_prepare_send_queue(struct rxe_dev *rxe, struct rxe_qp *qp, + struct ib_qp_init_attr *init, struct ib_udata *udata, + struct rxe_create_qp_resp __user *uresp) +{ + struct rxe_queue *q; + int wqe_size; + int err; + + qp->sq.max_wr = init->cap.max_send_wr; + + wqe_size = init->cap.max_send_sge*sizeof(struct ib_sge); + wqe_size = max_t(int, wqe_size, init->cap.max_inline_data); + + qp->sq.max_sge = wqe_size/sizeof(struct ib_sge); + qp->sq.max_inline = wqe_size; + wqe_size += sizeof(struct rxe_send_wqe); + + q = rxe_queue_init(rxe, &qp->sq.max_wr, wqe_size, + QUEUE_TYPE_FROM_CLIENT); + if (!q) + return -ENOMEM; + + err = do_mmap_info(rxe, uresp ? &uresp->sq_mi : NULL, udata, + q->buf, q->buf_size, &q->ip); + + if (err) { + vfree(q->buf); + kfree(q); + return err; + } + + init->cap.max_send_sge = qp->sq.max_sge; + init->cap.max_inline_data = qp->sq.max_inline; + + qp->sq.queue = q; + + return 0; +} + static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_init_attr *init, struct ib_udata *udata, struct rxe_create_qp_resp __user *uresp) { int err; - int wqe_size; - enum queue_type type; + + err = rxe_prepare_send_queue(rxe, qp, init, udata, uresp); + if (err) + return err; + + spin_lock_init(&qp->sq.sq_lock); + spin_lock_init(&qp->req.task.state_lock); + spin_lock_init(&qp->comp.task.state_lock); + + skb_queue_head_init(&qp->resp_pkts); err = sock_create_kern(&init_net, AF_INET, SOCK_DGRAM, 0, &qp->sk); if (err < 0) @@ -205,32 +251,6 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp, * (0xc000 - 0xffff). */ qp->src_port = RXE_ROCE_V2_SPORT + (hash_32(qp_num(qp), 14) & 0x3fff); - qp->sq.max_wr = init->cap.max_send_wr; - - /* These caps are limited by rxe_qp_chk_cap() done by the caller */ - wqe_size = max_t(int, init->cap.max_send_sge * sizeof(struct ib_sge), - init->cap.max_inline_data); - qp->sq.max_sge = init->cap.max_send_sge = - wqe_size / sizeof(struct ib_sge); - qp->sq.max_inline = init->cap.max_inline_data = wqe_size; - wqe_size += sizeof(struct rxe_send_wqe); - - type = QUEUE_TYPE_FROM_CLIENT; - qp->sq.queue = rxe_queue_init(rxe, &qp->sq.max_wr, - wqe_size, type); - if (!qp->sq.queue) - return -ENOMEM; - - err = do_mmap_info(rxe, uresp ? 
&uresp->sq_mi : NULL, udata, - qp->sq.queue->buf, qp->sq.queue->buf_size, - &qp->sq.queue->ip); - - if (err) { - vfree(qp->sq.queue->buf); - kfree(qp->sq.queue); - qp->sq.queue = NULL; - return err; - } qp->req.wqe_index = queue_get_producer(qp->sq.queue, QUEUE_TYPE_FROM_CLIENT); @@ -240,57 +260,71 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp, qp->req.opcode = -1; qp->comp.opcode = -1; - skb_queue_head_init(&qp->req_pkts); - rxe_init_task(&qp->req.task, qp, rxe_requester, "req"); rxe_init_task(&qp->comp.task, qp, rxe_completer, "comp"); qp->qp_timeout_jiffies = 0; /* Can't be set for UD/UC in modify_qp */ - if (init->qp_type == IB_QPT_RC) { + if (init->qp_type == IB_QPT_RC || init->qp_type == IB_QPT_XRC_INI) { timer_setup(&qp->rnr_nak_timer, rnr_nak_timer, 0); timer_setup(&qp->retrans_timer, retransmit_timer, 0); } return 0; } +static int rxe_prepare_recv_queue(struct rxe_dev *rxe, struct rxe_qp *qp, + struct ib_qp_init_attr *init, struct ib_udata *udata, + struct rxe_create_qp_resp __user *uresp) +{ + struct rxe_queue *q; + int wqe_size; + int err; + + qp->rq.max_wr = init->cap.max_recv_wr; + qp->rq.max_sge = init->cap.max_recv_sge; + + wqe_size = sizeof(struct rxe_recv_wqe) + + qp->rq.max_sge*sizeof(struct ib_sge); + + q = rxe_queue_init(rxe, &qp->rq.max_wr, wqe_size, + QUEUE_TYPE_FROM_CLIENT); + if (!q) + return -ENOMEM; + + err = do_mmap_info(rxe, uresp ? &uresp->rq_mi : NULL, udata, + q->buf, q->buf_size, &q->ip); + + if (err) { + vfree(q->buf); + kfree(q); + return err; + } + + qp->rq.queue = q; + + return 0; +} + static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_init_attr *init, struct ib_udata *udata, struct rxe_create_qp_resp __user *uresp) { int err; - int wqe_size; - enum queue_type type; - if (!qp->srq) { - qp->rq.max_wr = init->cap.max_recv_wr; - qp->rq.max_sge = init->cap.max_recv_sge; - - wqe_size = rcv_wqe_size(qp->rq.max_sge); - - pr_debug("qp#%d max_wr = %d, max_sge = %d, wqe_size = %d\n", - qp_num(qp), qp->rq.max_wr, qp->rq.max_sge, wqe_size); - - type = QUEUE_TYPE_FROM_CLIENT; - qp->rq.queue = rxe_queue_init(rxe, &qp->rq.max_wr, - wqe_size, type); - if (!qp->rq.queue) - return -ENOMEM; - - err = do_mmap_info(rxe, uresp ? &uresp->rq_mi : NULL, udata, - qp->rq.queue->buf, qp->rq.queue->buf_size, - &qp->rq.queue->ip); - if (err) { - vfree(qp->rq.queue->buf); - kfree(qp->rq.queue); - qp->rq.queue = NULL; + if (!qp->srq && qp_type(qp) != IB_QPT_XRC_TGT) { + err = rxe_prepare_recv_queue(rxe, qp, init, udata, uresp); + if (err) return err; - } + + spin_lock_init(&qp->rq.producer_lock); + spin_lock_init(&qp->rq.consumer_lock); } - skb_queue_head_init(&qp->resp_pkts); + spin_lock_init(&qp->resp.task.state_lock); + + skb_queue_head_init(&qp->req_pkts); rxe_init_task(&qp->resp.task, qp, rxe_responder, "resp"); @@ -303,64 +337,82 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp, } /* called by the create qp verb */ -int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd, +int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_init_attr *init, struct rxe_create_qp_resp __user *uresp, - struct ib_pd *ibpd, struct ib_udata *udata) { int err; + struct rxe_pd *pd = to_rpd(qp->ibqp.pd); struct rxe_cq *rcq = to_rcq(init->recv_cq); struct rxe_cq *scq = to_rcq(init->send_cq); - struct rxe_srq *srq = init->srq ? 
to_rsrq(init->srq) : NULL; + struct rxe_srq *srq = to_rsrq(init->srq); + struct rxe_xrcd *xrcd = to_rxrcd(init->xrcd); - rxe_get(pd); - rxe_get(rcq); - rxe_get(scq); - if (srq) + if (pd) { + rxe_get(pd); + qp->pd = pd; + } + if (rcq) { + rxe_get(rcq); + qp->rcq = rcq; + atomic_inc(&rcq->num_wq); + } + if (scq) { + rxe_get(scq); + qp->scq = scq; + atomic_inc(&scq->num_wq); + } + if (srq) { rxe_get(srq); - - qp->pd = pd; - qp->rcq = rcq; - qp->scq = scq; - qp->srq = srq; - - atomic_inc(&rcq->num_wq); - atomic_inc(&scq->num_wq); + qp->srq = srq; + } + if (xrcd) { + rxe_get(xrcd); + qp->xrcd = xrcd; + } rxe_qp_init_misc(rxe, qp, init); - err = rxe_qp_init_req(rxe, qp, init, udata, uresp); - if (err) - goto err1; + switch (init->qp_type) { + case IB_QPT_RC: + case IB_QPT_UC: + case IB_QPT_GSI: + case IB_QPT_UD: + err = rxe_qp_init_req(rxe, qp, init, udata, uresp); + if (err) + goto err_out; - err = rxe_qp_init_resp(rxe, qp, init, udata, uresp); - if (err) - goto err2; + err = rxe_qp_init_resp(rxe, qp, init, udata, uresp); + if (err) + goto err_unwind; + break; + case IB_QPT_XRC_INI: + err = rxe_qp_init_req(rxe, qp, init, udata, uresp); + if (err) + goto err_out; + break; + case IB_QPT_XRC_TGT: + err = rxe_qp_init_resp(rxe, qp, init, udata, uresp); + if (err) + goto err_out; + break; + default: + /* not reached */ + err = -EOPNOTSUPP; + goto err_out; + }; qp->attr.qp_state = IB_QPS_RESET; qp->valid = 1; return 0; -err2: +err_unwind: rxe_queue_cleanup(qp->sq.queue); qp->sq.queue = NULL; -err1: - atomic_dec(&rcq->num_wq); - atomic_dec(&scq->num_wq); - - qp->pd = NULL; - qp->rcq = NULL; - qp->scq = NULL; - qp->srq = NULL; - - if (srq) - rxe_put(srq); - rxe_put(scq); - rxe_put(rcq); - rxe_put(pd); - +err_out: + /* rxe_qp_cleanup handles the rest */ return err; } @@ -486,7 +538,8 @@ static void rxe_qp_reset(struct rxe_qp *qp) /* stop request/comp */ if (qp->sq.queue) { - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || + qp_type(qp) == IB_QPT_XRC_INI) rxe_disable_task(&qp->comp.task); rxe_disable_task(&qp->req.task); } @@ -530,7 +583,8 @@ static void rxe_qp_reset(struct rxe_qp *qp) rxe_enable_task(&qp->resp.task); if (qp->sq.queue) { - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || + qp_type(qp) == IB_QPT_XRC_INI) rxe_enable_task(&qp->comp.task); rxe_enable_task(&qp->req.task); @@ -543,7 +597,8 @@ static void rxe_qp_drain(struct rxe_qp *qp) if (qp->sq.queue) { if (qp->req.state != QP_STATE_DRAINED) { qp->req.state = QP_STATE_DRAIN; - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || + qp_type(qp) == IB_QPT_XRC_INI) rxe_run_task(&qp->comp.task, 1); else __rxe_do_task(&qp->comp.task); @@ -563,7 +618,7 @@ void rxe_qp_error(struct rxe_qp *qp) /* drain work and packet queues */ rxe_run_task(&qp->resp.task, 1); - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_INI) rxe_run_task(&qp->comp.task, 1); else __rxe_do_task(&qp->comp.task); @@ -673,7 +728,8 @@ int rxe_qp_from_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask, qp->attr.sq_psn = (attr->sq_psn & BTH_PSN_MASK); qp->req.psn = qp->attr.sq_psn; qp->comp.psn = qp->attr.sq_psn; - pr_debug("qp#%d set req psn = 0x%x\n", qp_num(qp), qp->req.psn); + pr_debug("qp#%d set req psn = %d comp psn = %d\n", qp_num(qp), + qp->req.psn, qp->comp.psn); } if (mask & IB_QP_PATH_MIG_STATE) @@ -788,7 +844,7 @@ static void rxe_qp_do_cleanup(struct work_struct *work) qp->qp_timeout_jiffies = 0; rxe_cleanup_task(&qp->resp.task); - if (qp_type(qp) == IB_QPT_RC) { + if (qp_type(qp) == 
IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_INI) { del_timer_sync(&qp->retrans_timer); del_timer_sync(&qp->rnr_nak_timer); } @@ -808,6 +864,9 @@ static void rxe_qp_do_cleanup(struct work_struct *work) if (qp->sq.queue) rxe_queue_cleanup(qp->sq.queue); + if (qp->xrcd) + rxe_put(qp->xrcd); + if (qp->srq) rxe_put(qp->srq); @@ -830,7 +889,7 @@ static void rxe_qp_do_cleanup(struct work_struct *work) if (qp->resp.mr) rxe_put(qp->resp.mr); - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_INI) sk_dst_reset(qp->sk->sk); free_rd_atomic_resources(qp); diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index ef86f0c5890e..59ba11e52bac 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -416,7 +416,6 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, { int err; struct rxe_dev *rxe = to_rdev(ibqp->device); - struct rxe_pd *pd = to_rpd(ibqp->pd); struct rxe_qp *qp = to_rqp(ibqp); struct rxe_create_qp_resp __user *uresp = NULL; @@ -424,16 +423,7 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, if (udata->outlen < sizeof(*uresp)) return -EINVAL; uresp = udata->outbuf; - } - - if (init->create_flags) - return -EOPNOTSUPP; - err = rxe_qp_chk_init(rxe, init); - if (err) - return err; - - if (udata) { if (udata->inlen) return -EINVAL; @@ -442,11 +432,15 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, qp->is_user = false; } + err = rxe_qp_chk_init(rxe, ibqp, init); + if (err) + return err; + err = rxe_add_to_pool(&rxe->qp_pool, qp); if (err) return err; - err = rxe_qp_from_init(rxe, qp, pd, init, uresp, ibqp->pd, udata); + err = rxe_qp_from_init(rxe, qp, init, uresp, udata); if (err) goto qp_init; @@ -517,6 +511,9 @@ static int validate_send_wr(struct rxe_qp *qp, const struct ib_send_wr *ibwr, int num_sge = ibwr->num_sge; struct rxe_sq *sq = &qp->sq; + if (unlikely(qp_type(qp) == IB_QPT_XRC_TGT)) + return -EOPNOTSUPP; + if (unlikely(num_sge > sq->max_sge)) goto err1; @@ -740,8 +737,9 @@ static int rxe_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, /* Utilize process context to do protocol processing */ rxe_run_task(&qp->req.task, 0); return 0; - } else + } else { return rxe_post_send_kernel(qp, wr, bad_wr); + } } static int rxe_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr, diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 7dab7fa3ba6c..ee482a0569b8 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -230,6 +230,7 @@ struct rxe_qp { struct rxe_srq *srq; struct rxe_cq *scq; struct rxe_cq *rcq; + struct rxe_xrcd *xrcd; enum ib_sig_type sq_sig_type;

From patchwork Sat Sep 17 03:10:28 2022
From patchwork Sat Sep 17 03:10:28 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12978994
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 09/13] RDMA/rxe: Extend rxe_recv.c to support xrc
Date: Fri, 16 Sep 2022 22:10:28 -0500
Message-Id: <20220917031028.21187-9-rpearsonhpe@gmail.com>
In-Reply-To: <20220917031028.21187-1-rpearsonhpe@gmail.com>
References: <20220917031028.21187-1-rpearsonhpe@gmail.com>

Extend rxe_recv.c to support xrc packets. Add checks for qp type and check that qp->xrcd matches srq->xrcd.
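One way to summarize the new type checks before reading the diff: each QP type accepts exactly one BTH opcode class, and both XRC QP types accept only XRC opcodes. The helper below is a condensed restatement for illustration, not part of the patch; it assumes the IB_OPCODE_XRC class value added earlier in this series.

/* Condensed restatement (illustration only) of the opcode-class
 * check enforced by check_type_state() after this patch.
 */
static int required_opcode_class(enum ib_qp_type type)
{
	switch (type) {
	case IB_QPT_RC:
		return IB_OPCODE_RC;
	case IB_QPT_UC:
		return IB_OPCODE_UC;
	case IB_QPT_UD:
	case IB_QPT_GSI:
		return IB_OPCODE_UD;
	case IB_QPT_XRC_INI:
	case IB_QPT_XRC_TGT:
		return IB_OPCODE_XRC;
	default:
		return -EINVAL;	/* unsupported type */
	}
}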
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_hdr.h | 5 +- drivers/infiniband/sw/rxe/rxe_recv.c | 79 +++++++++++++++++++++------- 2 files changed, 63 insertions(+), 21 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_hdr.h b/drivers/infiniband/sw/rxe/rxe_hdr.h index e947bcf75209..fb9959d91b8d 100644 --- a/drivers/infiniband/sw/rxe/rxe_hdr.h +++ b/drivers/infiniband/sw/rxe/rxe_hdr.h @@ -14,7 +14,10 @@ struct rxe_pkt_info { struct rxe_dev *rxe; /* device that owns packet */ struct rxe_qp *qp; /* qp that owns packet */ - struct rxe_send_wqe *wqe; /* send wqe */ + union { + struct rxe_send_wqe *wqe; /* send wqe */ + struct rxe_srq *srq; /* srq for recvd xrc packets */ + }; u8 *hdr; /* points to bth */ u32 mask; /* useful info about pkt */ u32 psn; /* bth psn of packet */ diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c index f3ad7b6dbd97..4f35757d3c52 100644 --- a/drivers/infiniband/sw/rxe/rxe_recv.c +++ b/drivers/infiniband/sw/rxe/rxe_recv.c @@ -13,49 +13,51 @@ static int check_type_state(struct rxe_dev *rxe, struct rxe_pkt_info *pkt, struct rxe_qp *qp) { - unsigned int pkt_type; + unsigned int pkt_type = pkt->opcode & IB_OPCODE_TYPE; if (unlikely(!qp->valid)) - goto err1; + goto err_out; - pkt_type = pkt->opcode & 0xe0; switch (qp_type(qp)) { case IB_QPT_RC: - if (unlikely(pkt_type != IB_OPCODE_RC)) { - pr_warn_ratelimited("bad qp type\n"); - goto err1; - } + if (unlikely(pkt_type != IB_OPCODE_RC)) + goto err_out; break; case IB_QPT_UC: - if (unlikely(pkt_type != IB_OPCODE_UC)) { - pr_warn_ratelimited("bad qp type\n"); - goto err1; - } + if (unlikely(pkt_type != IB_OPCODE_UC)) + goto err_out; break; case IB_QPT_UD: case IB_QPT_GSI: - if (unlikely(pkt_type != IB_OPCODE_UD)) { - pr_warn_ratelimited("bad qp type\n"); - goto err1; - } + if (unlikely(pkt_type != IB_OPCODE_UD)) + goto err_out; + break; + case IB_QPT_XRC_INI: + if (unlikely(pkt_type != IB_OPCODE_XRC)) + goto err_out; + break; + case IB_QPT_XRC_TGT: + if (unlikely(pkt_type != IB_OPCODE_XRC)) + goto err_out; break; default: - pr_warn_ratelimited("unsupported qp type\n"); - goto err1; + goto err_out; } if (pkt->mask & RXE_REQ_MASK) { if (unlikely(qp->resp.state != QP_STATE_READY)) - goto err1; + goto err_out; } else if (unlikely(qp->req.state < QP_STATE_READY || qp->req.state > QP_STATE_DRAINED)) { - goto err1; + goto err_out; } return 0; -err1: +err_out: + pr_debug("%s: failed qp#%d: opcode = 0x%02x\n", __func__, + qp->elem.index, pkt->opcode); return -EINVAL; } @@ -166,6 +168,37 @@ static int check_addr(struct rxe_dev *rxe, struct rxe_pkt_info *pkt, return -EINVAL; } +static int check_xrcd(struct rxe_dev *rxe, struct rxe_pkt_info *pkt, + struct rxe_qp *qp) +{ + int err; + + struct rxe_xrcd *xrcd = qp->xrcd; + u32 srqn = xrceth_srqn(pkt); + struct rxe_srq *srq; + + srq = rxe_pool_get_index(&rxe->srq_pool, srqn); + if (unlikely(!srq)) { + err = -EINVAL; + goto err_out; + } + + if (unlikely(srq->xrcd != xrcd)) { + rxe_put(srq); + err = -EINVAL; + goto err_out; + } + + pkt->srq = srq; + + return 0; + +err_out: + pr_debug("%s: qp#%d: failed err = %d\n", __func__, + qp->elem.index, err); + return err; +} + static int hdr_check(struct rxe_pkt_info *pkt) { struct rxe_dev *rxe = pkt->rxe; @@ -205,6 +238,12 @@ static int hdr_check(struct rxe_pkt_info *pkt) err = check_keys(rxe, pkt, qpn, qp); if (unlikely(err)) goto err2; + + if (qp_type(qp) == IB_QPT_XRC_TGT) { + err = check_xrcd(rxe, pkt, qp); + if (unlikely(err)) + goto err2; + } } else { if (unlikely((pkt->mask & RXE_GRH_MASK) 
== 0)) { pr_warn_ratelimited("no grh for mcast qpn\n");

From patchwork Sat Sep 17 03:12:11 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12979011
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 11/13] RDMA/rxe: Extend rxe_req.c to support xrc qps
Date: Fri, 16 Sep 2022 22:12:11 -0500
Message-Id: <20220917031211.21272-1-rpearsonhpe@gmail.com>

Extend code in rxe_req.c to support xrc qp types.
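The behavioral changes are small: an XRC requester QP fills in the XRC extended transport header with the work request's target SRQ number, and it is subject to the same unacked-PSN window as RC. The helper below restates the window test for illustration only; it is not part of the patch and assumes the existing psn_compare() and RXE_MAX_UNACKED_PSNS definitions in rxe.

/* Illustration only, not part of the patch: the requester
 * back-pressure test that now also covers XRC_INI QPs. The
 * requester stalls once it runs more than RXE_MAX_UNACKED_PSNS
 * ahead of the completer.
 */
static bool req_psn_window_full(struct rxe_qp *qp)
{
	if (qp_type(qp) != IB_QPT_RC && qp_type(qp) != IB_QPT_XRC_INI)
		return false;

	return psn_compare(qp->req.psn,
			   qp->comp.psn + RXE_MAX_UNACKED_PSNS) > 0;
}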
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_req.c | 38 +++++++++++++++++------------ 1 file changed, 22 insertions(+), 16 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index d2a9abfed596..e7bb969f97f3 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -229,7 +229,7 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp, { struct rxe_dev *rxe = to_rdev(qp->ibqp.device); struct sk_buff *skb; - struct rxe_send_wr *ibwr = &wqe->wr; + struct rxe_send_wr *wr = &wqe->wr; int pad = (-payload) & 0x3; int paylen; int solicited;
@@ -246,13 +246,13 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp, return NULL; /* init bth */ - solicited = (ibwr->send_flags & IB_SEND_SOLICITED) && + solicited = (wr->send_flags & IB_SEND_SOLICITED) && (pkt->mask & RXE_LAST_MASK) && ((pkt->mask & (RXE_SEND_MASK)) || (pkt->mask & (RXE_WRITE_MASK | RXE_IMMDT_MASK)) == (RXE_WRITE_MASK | RXE_IMMDT_MASK)); - qp_num = (pkt->mask & RXE_DETH_MASK) ? ibwr->wr.ud.remote_qpn : + qp_num = (pkt->mask & RXE_DETH_MASK) ? wr->wr.ud.remote_qpn : qp->attr.dest_qp_num; ack_req = ((pkt->mask & RXE_LAST_MASK) ||
@@ -264,34 +264,37 @@ ack_req, pkt->psn); /* init optional headers */ + if (pkt->mask & RXE_XRCETH_MASK) + xrceth_set_srqn(pkt, wr->srq_num); + if (pkt->mask & RXE_RETH_MASK) { - reth_set_rkey(pkt, ibwr->wr.rdma.rkey); + reth_set_rkey(pkt, wr->wr.rdma.rkey); reth_set_va(pkt, wqe->iova); reth_set_len(pkt, wqe->dma.resid); } if (pkt->mask & RXE_IMMDT_MASK) - immdt_set_imm(pkt, ibwr->ex.imm_data); + immdt_set_imm(pkt, wr->ex.imm_data); if (pkt->mask & RXE_IETH_MASK) - ieth_set_rkey(pkt, ibwr->ex.invalidate_rkey); + ieth_set_rkey(pkt, wr->ex.invalidate_rkey); if (pkt->mask & RXE_ATMETH_MASK) { atmeth_set_va(pkt, wqe->iova); - if (opcode == IB_OPCODE_RC_COMPARE_SWAP) { - atmeth_set_swap_add(pkt, ibwr->wr.atomic.swap); - atmeth_set_comp(pkt, ibwr->wr.atomic.compare_add); + if ((opcode & IB_OPCODE_CMD) == IB_OPCODE_COMPARE_SWAP) { + atmeth_set_swap_add(pkt, wr->wr.atomic.swap); + atmeth_set_comp(pkt, wr->wr.atomic.compare_add); } else { - atmeth_set_swap_add(pkt, ibwr->wr.atomic.compare_add); + atmeth_set_swap_add(pkt, wr->wr.atomic.compare_add); } - atmeth_set_rkey(pkt, ibwr->wr.atomic.rkey); + atmeth_set_rkey(pkt, wr->wr.atomic.rkey); } if (pkt->mask & RXE_DETH_MASK) { if (qp->ibqp.qp_num == 1) deth_set_qkey(pkt, GSI_QKEY); else - deth_set_qkey(pkt, ibwr->wr.ud.remote_qkey); + deth_set_qkey(pkt, wr->wr.ud.remote_qkey); deth_set_sqp(pkt, qp->ibqp.qp_num); }
@@ -338,8 +341,10 @@ static void update_wqe_state(struct rxe_qp *qp, struct rxe_pkt_info *pkt) { if (pkt->mask & RXE_LAST_MASK) { - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || + qp_type(qp) == IB_QPT_XRC_INI) wqe->state = wqe_state_pending; + /* other qp types handled in rxe_xmit_packet() */ } else { wqe->state = wqe_state_processing; }
@@ -532,9 +537,10 @@ int rxe_requester(void *arg) goto done; } - if (unlikely(qp_type(qp) == IB_QPT_RC && - psn_compare(qp->req.psn, (qp->comp.psn + - RXE_MAX_UNACKED_PSNS)) > 0)) { + if (unlikely((qp_type(qp) == IB_QPT_RC || + qp_type(qp) == IB_QPT_XRC_INI) && + psn_compare(qp->req.psn, (qp->comp.psn + + RXE_MAX_UNACKED_PSNS)) > 0)) { qp->req.wait_psn = 1; goto exit; }

From patchwork Sat Sep 17 03:12:21 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12979012
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 12/13] RDMA/rxe: Extend rxe_net.c to support xrc qps
Date: Fri, 16 Sep 2022 22:12:21 -0500
Message-Id: <20220917031221.21293-1-rpearsonhpe@gmail.com>

Extend code in rxe_net.c to support xrc qp types.
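Most of the hunks below apply one predicate: wherever rxe_net.c treats RC as a connected, reliable requester (caching the route in the QP's socket and resetting it on cleanup), XRC_INI now behaves the same way. A one-line restatement for illustration, not part of the patch:

/* Illustration only, not part of the patch: the QP types that keep
 * a cached dst entry in qp->sk and use the RC-style send path.
 */
static bool qp_caches_dst(struct rxe_qp *qp)
{
	return qp_type(qp) == IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_INI;
}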
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_net.c | 23 ++++++++++++++++------- 1 file changed, 16 insertions(+), 7 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c index d46190ad082f..d9bedd6fc497 100644 --- a/drivers/infiniband/sw/rxe/rxe_net.c +++ b/drivers/infiniband/sw/rxe/rxe_net.c @@ -92,7 +92,7 @@ static struct dst_entry *rxe_find_route(struct net_device *ndev, { struct dst_entry *dst = NULL; - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_INI) dst = sk_dst_get(qp->sk->sk); if (!dst || !dst_check(dst, qp->dst_cookie)) { @@ -120,7 +120,8 @@ static struct dst_entry *rxe_find_route(struct net_device *ndev, #endif } - if (dst && (qp_type(qp) == IB_QPT_RC)) { + if (dst && (qp_type(qp) == IB_QPT_RC || + qp_type(qp) == IB_QPT_XRC_INI)) { dst_hold(dst); sk_dst_set(qp->sk->sk, dst); } @@ -386,14 +387,23 @@ static int rxe_send(struct sk_buff *skb, struct rxe_pkt_info *pkt) */ static int rxe_loopback(struct sk_buff *skb, struct rxe_pkt_info *pkt) { - memcpy(SKB_TO_PKT(skb), pkt, sizeof(*pkt)); + struct rxe_pkt_info *new_pkt = SKB_TO_PKT(skb); + + memset(new_pkt, 0, sizeof(*new_pkt)); + + /* match rxe_udp_encap_recv */ + new_pkt->rxe = pkt->rxe; + new_pkt->port_num = 1; + new_pkt->hdr = pkt->hdr; + new_pkt->mask = RXE_GRH_MASK; + new_pkt->paylen = pkt->paylen; if (skb->protocol == htons(ETH_P_IP)) skb_pull(skb, sizeof(struct iphdr)); else skb_pull(skb, sizeof(struct ipv6hdr)); - if (WARN_ON(!ib_device_try_get(&pkt->rxe->ib_dev))) { + if (WARN_ON(!ib_device_try_get(&new_pkt->rxe->ib_dev))) { kfree_skb(skb); return -EIO; } @@ -412,7 +422,6 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, if ((is_request && (qp->req.state != QP_STATE_READY)) || (!is_request && (qp->resp.state != QP_STATE_READY))) { - pr_info("Packet dropped. 
QP is not in ready state\n"); goto drop; }
@@ -427,8 +436,8 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, return err; } - if ((qp_type(qp) != IB_QPT_RC) && - (pkt->mask & RXE_LAST_MASK)) { + if ((pkt->mask & RXE_REQ_MASK) && (pkt->mask & RXE_LAST_MASK) && + (qp_type(qp) != IB_QPT_RC && qp_type(qp) != IB_QPT_XRC_INI)) { pkt->wqe->state = wqe_state_done; rxe_run_task(&qp->comp.task, 1); }

From patchwork Sat Sep 17 03:12:31 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12979013
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, lizhijian@fujitsu.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next 13/13] RDMA/rxe: Extend rxe_resp.c to support xrc qps
Date: Fri, 16 Sep 2022 22:12:31 -0500
Message-Id: <20220917031231.21314-1-rpearsonhpe@gmail.com>

Extend code in rxe_resp.c to support xrc qp types.
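The recurring pattern in this patch: an XRC target QP does not own a PD, SRQ or receive CQ itself; it borrows them from the SRQ named in the packet's XRCETH. The two helpers below condense the lookups that rxe_resp.c now performs inline; they are illustration only, not part of the patch, and mirror the conditionals visible in the diff.

/* Illustration only, not part of the patch: where the responder
 * finds the pd and cq for a packet. XRC_TGT takes both from the
 * SRQ resolved from the packet's XRCETH; other types use the QP's.
 */
static struct rxe_pd *resp_pd(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
{
	return (qp_type(qp) == IB_QPT_XRC_TGT) ? pkt->srq->pd : qp->pd;
}

static struct rxe_cq *resp_cq(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
{
	return (qp_type(qp) == IB_QPT_XRC_TGT) ? pkt->srq->cq : qp->rcq;
}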
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_loc.h | 3 +- drivers/infiniband/sw/rxe/rxe_mw.c | 14 +-- drivers/infiniband/sw/rxe/rxe_resp.c | 161 +++++++++++++++++++++------ 3 files changed, 138 insertions(+), 40 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index c6fb93a749ad..9381c76bff87 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -87,7 +87,8 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata); int rxe_dealloc_mw(struct ib_mw *ibmw); int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe); int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey); -struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey); +struct rxe_mw *rxe_lookup_mw(struct rxe_pd *pd, struct rxe_qp *qp, + int access, u32 rkey); void rxe_mw_cleanup(struct rxe_pool_elem *elem); /* rxe_net.c */
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c index 104993801a80..2a7493526ec2 100644 --- a/drivers/infiniband/sw/rxe/rxe_mw.c +++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -280,10 +280,10 @@ int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey) return ret; } -struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey) +struct rxe_mw *rxe_lookup_mw(struct rxe_pd *pd, struct rxe_qp *qp, + int access, u32 rkey) { struct rxe_dev *rxe = to_rdev(qp->ibqp.device); - struct rxe_pd *pd = to_rpd(qp->ibqp.pd); struct rxe_mw *mw; int index = rkey >> 8;
@@ -291,11 +291,11 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey) if (!mw) return NULL; - if (unlikely((mw->rkey != rkey) || rxe_mw_pd(mw) != pd || - (mw->ibmw.type == IB_MW_TYPE_2 && mw->qp != qp) || - (mw->length == 0) || - (access && !(access & mw->access)) || - mw->state != RXE_MW_STATE_VALID)) { + if ((mw->rkey != rkey) || rxe_mw_pd(mw) != pd || + (mw->ibmw.type == IB_MW_TYPE_2 && + (mw->qp != qp || qp_type(qp) == IB_QPT_XRC_TGT)) || + (mw->length == 0) || (access && !(access & mw->access)) || + mw->state != RXE_MW_STATE_VALID) { rxe_put(mw); return NULL; }
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index cb560cbe418d..b0a97074bc5a 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -88,7 +88,8 @@ void rxe_resp_queue_pkt(struct rxe_qp *qp, struct sk_buff *skb) skb_queue_tail(&qp->req_pkts, skb); - must_sched = (pkt->opcode == IB_OPCODE_RC_RDMA_READ_REQUEST) || + /* mask off opcode type bits */ + must_sched = ((pkt->opcode & 0x1f) == IB_OPCODE_RDMA_READ_REQUEST) || (skb_queue_len(&qp->req_pkts) > 1); rxe_run_task(&qp->resp.task, must_sched);
@@ -127,6 +128,7 @@ static enum resp_states check_psn(struct rxe_qp *qp, switch (qp_type(qp)) { case IB_QPT_RC: + case IB_QPT_XRC_TGT: if (diff > 0) { if (qp->resp.sent_psn_nak) return RESPST_CLEANUP;
@@ -156,6 +158,7 @@ return RESPST_CLEANUP; } break; + default: break; }
@@ -248,6 +251,47 @@ static enum resp_states check_op_seq(struct rxe_qp *qp, } break; + case IB_QPT_XRC_TGT: + switch (qp->resp.opcode) { + case IB_OPCODE_XRC_SEND_FIRST: + case IB_OPCODE_XRC_SEND_MIDDLE: + switch (pkt->opcode) { + case IB_OPCODE_XRC_SEND_MIDDLE: + case IB_OPCODE_XRC_SEND_LAST: + case IB_OPCODE_XRC_SEND_LAST_WITH_IMMEDIATE: + case IB_OPCODE_XRC_SEND_LAST_WITH_INVALIDATE: + return RESPST_CHK_OP_VALID; + default: + return RESPST_ERR_MISSING_OPCODE_LAST_C; + } + + case IB_OPCODE_XRC_RDMA_WRITE_FIRST: + case IB_OPCODE_XRC_RDMA_WRITE_MIDDLE: + switch (pkt->opcode) { + case IB_OPCODE_XRC_RDMA_WRITE_MIDDLE: + case IB_OPCODE_XRC_RDMA_WRITE_LAST: + case IB_OPCODE_XRC_RDMA_WRITE_LAST_WITH_IMMEDIATE: + return RESPST_CHK_OP_VALID; + default: + return RESPST_ERR_MISSING_OPCODE_LAST_C; + } + + default: + switch (pkt->opcode) { + case IB_OPCODE_XRC_SEND_MIDDLE: + case IB_OPCODE_XRC_SEND_LAST: + case IB_OPCODE_XRC_SEND_LAST_WITH_IMMEDIATE: + case IB_OPCODE_XRC_SEND_LAST_WITH_INVALIDATE: + case IB_OPCODE_XRC_RDMA_WRITE_MIDDLE: + case IB_OPCODE_XRC_RDMA_WRITE_LAST: + case IB_OPCODE_XRC_RDMA_WRITE_LAST_WITH_IMMEDIATE: + return RESPST_ERR_MISSING_OPCODE_FIRST; + default: + return RESPST_CHK_OP_VALID; + } + } + break; + default: return RESPST_CHK_OP_VALID; }
@@ -258,6 +302,7 @@ static enum resp_states check_op_valid(struct rxe_qp *qp, { switch (qp_type(qp)) { case IB_QPT_RC: + case IB_QPT_XRC_TGT: if (((pkt->mask & RXE_READ_MASK) && !(qp->attr.qp_access_flags & IB_ACCESS_REMOTE_READ)) || ((pkt->mask & RXE_WRITE_MASK) &&
@@ -290,9 +335,22 @@ static enum resp_states check_op_valid(struct rxe_qp *qp, return RESPST_CHK_RESOURCE; } -static enum resp_states get_srq_wqe(struct rxe_qp *qp) +static struct rxe_srq *get_srq(struct rxe_qp *qp, struct rxe_pkt_info *pkt) +{ + struct rxe_srq *srq; + + if (qp_type(qp) == IB_QPT_XRC_TGT) + srq = pkt->srq; + else if (qp->srq) + srq = qp->srq; + else + srq = NULL; + + return srq; +} + +static enum resp_states get_srq_wqe(struct rxe_qp *qp, struct rxe_srq *srq) { - struct rxe_srq *srq = qp->srq; struct rxe_queue *q = srq->rq.queue; struct rxe_recv_wqe *wqe; struct ib_event ev;
@@ -344,7 +402,7 @@ static enum resp_states get_srq_wqe(struct rxe_qp *qp) static enum resp_states check_resource(struct rxe_qp *qp, struct rxe_pkt_info *pkt) { - struct rxe_srq *srq = qp->srq; + struct rxe_srq *srq = get_srq(qp, pkt); if (qp->resp.state == QP_STATE_ERROR) { if (qp->resp.wqe) {
@@ -377,7 +435,7 @@ static enum resp_states check_resource(struct rxe_qp *qp, if (pkt->mask & RXE_RWR_MASK) { if (srq) - return get_srq_wqe(qp); + return get_srq_wqe(qp, srq); qp->resp.wqe = queue_head(qp->rq.queue, QUEUE_TYPE_FROM_CLIENT);
@@ -387,6 +445,7 @@ return RESPST_CHK_LENGTH; } +/* TODO this should actually do what it says per IBA spec */ static enum resp_states check_length(struct rxe_qp *qp, struct rxe_pkt_info *pkt) {
@@ -397,6 +456,9 @@ static enum resp_states check_length(struct rxe_qp *qp, case IB_QPT_UC: return RESPST_CHK_RKEY; + case IB_QPT_XRC_TGT: + return RESPST_CHK_RKEY; + default: return RESPST_CHK_RKEY; }
@@ -407,6 +469,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp, { struct rxe_mr *mr = NULL; struct rxe_mw *mw = NULL; + struct rxe_pd *pd; u64 va; u32 rkey; u32 resid;
@@ -447,8 +510,11 @@ static enum resp_states check_rkey(struct
rxe_qp *qp, resid = qp->resp.resid; pktlen = payload_size(pkt); + /* we have ref counts on qp and pkt->srq so this is just a temp */ + pd = (qp_type(qp) == IB_QPT_XRC_TGT) ? pkt->srq->pd : qp->pd; + if (rkey_is_mw(rkey)) { - mw = rxe_lookup_mw(qp, access, rkey); + mw = rxe_lookup_mw(pd, qp, access, rkey); if (!mw) { pr_debug("%s: no MW matches rkey %#x\n", __func__, rkey); @@ -469,7 +535,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp, rxe_put(mw); rxe_get(mr); } else { - mr = lookup_mr(qp->pd, access, rkey, RXE_LOOKUP_REMOTE); + mr = lookup_mr(pd, access, rkey, RXE_LOOKUP_REMOTE); if (!mr) { pr_debug("%s: no MR matches rkey %#x\n", __func__, rkey); @@ -518,12 +584,12 @@ static enum resp_states check_rkey(struct rxe_qp *qp, return state; } -static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr, - int data_len) +static enum resp_states send_data_in(struct rxe_pd *pd, struct rxe_qp *qp, + void *data_addr, int data_len) { int err; - err = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma, + err = copy_data(pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma, data_addr, data_len, RXE_TO_MR_OBJ); if (unlikely(err)) return (err == -ENOSPC) ? RESPST_ERR_LENGTH @@ -627,7 +693,8 @@ static enum resp_states atomic_reply(struct rxe_qp *qp, spin_lock_bh(&atomic_ops_lock); res->atomic.orig_val = value = *vaddr; - if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP) { + if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP || + pkt->opcode == IB_OPCODE_XRC_COMPARE_SWAP) { if (value == atmeth_comp(pkt)) value = atmeth_swap_add(pkt); } else { @@ -786,24 +853,30 @@ static enum resp_states read_reply(struct rxe_qp *qp, } if (res->read.resid <= mtu) - opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_ONLY; + opcode = IB_OPCODE_RDMA_READ_RESPONSE_ONLY; else - opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST; + opcode = IB_OPCODE_RDMA_READ_RESPONSE_FIRST; } else { mr = rxe_recheck_mr(qp, res->read.rkey); if (!mr) return RESPST_ERR_RKEY_VIOLATION; if (res->read.resid > mtu) - opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE; + opcode = IB_OPCODE_RDMA_READ_RESPONSE_MIDDLE; else - opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_LAST; + opcode = IB_OPCODE_RDMA_READ_RESPONSE_LAST; } res->state = rdatm_res_state_next; payload = min_t(int, res->read.resid, mtu); + /* fixup opcode type */ + if (qp_type(qp) == IB_QPT_XRC_TGT) + opcode |= IB_OPCODE_XRC; + else + opcode |= IB_OPCODE_RC; + skb = prepare_ack_packet(qp, &ack_pkt, opcode, payload, res->cur_psn, AETH_ACK_UNLIMITED); if (!skb) @@ -858,6 +931,8 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt) enum resp_states err; struct sk_buff *skb = PKT_TO_SKB(pkt); union rdma_network_hdr hdr; + struct rxe_pd *pd = (qp_type(qp) == IB_QPT_XRC_TGT) ? 
+ pkt->srq->pd : qp->pd; if (pkt->mask & RXE_SEND_MASK) { if (qp_type(qp) == IB_QPT_UD || @@ -867,15 +942,15 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt) sizeof(hdr.reserved)); memcpy(&hdr.roce4grh, ip_hdr(skb), sizeof(hdr.roce4grh)); - err = send_data_in(qp, &hdr, sizeof(hdr)); + err = send_data_in(pd, qp, &hdr, sizeof(hdr)); } else { - err = send_data_in(qp, ipv6_hdr(skb), + err = send_data_in(pd, qp, ipv6_hdr(skb), sizeof(hdr)); } if (err) return err; } - err = send_data_in(qp, payload_addr(pkt), payload_size(pkt)); + err = send_data_in(pd, qp, payload_addr(pkt), payload_size(pkt)); if (err) return err; } else if (pkt->mask & RXE_WRITE_MASK) { @@ -914,7 +989,7 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt) if (pkt->mask & RXE_COMP_MASK) return RESPST_COMPLETE; - else if (qp_type(qp) == IB_QPT_RC) + else if (qp_type(qp) == IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_TGT) return RESPST_ACKNOWLEDGE; else return RESPST_CLEANUP; @@ -928,13 +1003,21 @@ static enum resp_states do_complete(struct rxe_qp *qp, struct ib_uverbs_wc *uwc = &cqe.uibwc; struct rxe_recv_wqe *wqe = qp->resp.wqe; struct rxe_dev *rxe = to_rdev(qp->ibqp.device); + struct rxe_cq *cq; + struct rxe_srq *srq; if (!wqe) goto finish; memset(&cqe, 0, sizeof(cqe)); - if (qp->rcq->is_user) { + /* srq and cq if != 0 are protected by references held by qp or pkt */ + srq = (qp_type(qp) == IB_QPT_XRC_TGT) ? pkt->srq : qp->srq; + cq = (qp_type(qp) == IB_QPT_XRC_TGT) ? pkt->srq->cq : qp->rcq; + + WARN_ON(!cq); + + if (cq->is_user) { uwc->status = qp->resp.status; uwc->qp_num = qp->ibqp.qp_num; uwc->wr_id = wqe->wr_id; @@ -956,7 +1039,7 @@ static enum resp_states do_complete(struct rxe_qp *qp, /* fields after byte_len are different between kernel and user * space */ - if (qp->rcq->is_user) { + if (cq->is_user) { uwc->wc_flags = IB_WC_GRH; if (pkt->mask & RXE_IMMDT_MASK) { @@ -1005,12 +1088,13 @@ static enum resp_states do_complete(struct rxe_qp *qp, } /* have copy for srq and reference for !srq */ - if (!qp->srq) + if (!srq) queue_advance_consumer(qp->rq.queue, QUEUE_TYPE_FROM_CLIENT); qp->resp.wqe = NULL; - if (rxe_cq_post(qp->rcq, &cqe, pkt ? bth_se(pkt) : 1)) + /* either qp or srq is holding a reference to cq */ + if (rxe_cq_post(cq, &cqe, pkt ? bth_se(pkt) : 1)) return RESPST_ERR_CQ_OVERFLOW; finish: @@ -1018,7 +1102,7 @@ static enum resp_states do_complete(struct rxe_qp *qp, return RESPST_CHK_RESOURCE; if (unlikely(!pkt)) return RESPST_DONE; - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_TGT) return RESPST_ACKNOWLEDGE; else return RESPST_CLEANUP; @@ -1029,9 +1113,13 @@ static int send_ack(struct rxe_qp *qp, u8 syndrome, u32 psn) int err = 0; struct rxe_pkt_info ack_pkt; struct sk_buff *skb; + int opcode; + + opcode = (qp_type(qp) == IB_QPT_XRC_TGT) ? + IB_OPCODE_XRC_ACKNOWLEDGE : + IB_OPCODE_RC_ACKNOWLEDGE; - skb = prepare_ack_packet(qp, &ack_pkt, IB_OPCODE_RC_ACKNOWLEDGE, - 0, psn, syndrome); + skb = prepare_ack_packet(qp, &ack_pkt, opcode, 0, psn, syndrome); if (!skb) { err = -ENOMEM; goto err1; @@ -1050,9 +1138,13 @@ static int send_atomic_ack(struct rxe_qp *qp, u8 syndrome, u32 psn) int err = 0; struct rxe_pkt_info ack_pkt; struct sk_buff *skb; + int opcode; + + opcode = (qp_type(qp) == IB_QPT_XRC_TGT) ? 
+ IB_OPCODE_XRC_ATOMIC_ACKNOWLEDGE : + IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE; - skb = prepare_ack_packet(qp, &ack_pkt, IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE, - 0, psn, syndrome); + skb = prepare_ack_packet(qp, &ack_pkt, opcode, 0, psn, syndrome); if (!skb) { err = -ENOMEM; goto out; @@ -1073,7 +1165,7 @@ static int send_atomic_ack(struct rxe_qp *qp, u8 syndrome, u32 psn) static enum resp_states acknowledge(struct rxe_qp *qp, struct rxe_pkt_info *pkt) { - if (qp_type(qp) != IB_QPT_RC) + if (qp_type(qp) != IB_QPT_RC && qp_type(qp) != IB_QPT_XRC_TGT) return RESPST_CLEANUP; if (qp->resp.aeth_syndrome != AETH_ACK_UNLIMITED) @@ -1094,6 +1186,8 @@ static enum resp_states cleanup(struct rxe_qp *qp, if (pkt) { skb = skb_dequeue(&qp->req_pkts); rxe_put(qp); + if (pkt->srq) + rxe_put(pkt->srq); kfree_skb(skb); ib_device_put(qp->ibqp.device); } @@ -1359,7 +1453,8 @@ int rxe_responder(void *arg) state = do_class_d1e_error(qp); break; case RESPST_ERR_RNR: - if (qp_type(qp) == IB_QPT_RC) { + if (qp_type(qp) == IB_QPT_RC || + qp_type(qp) == IB_QPT_XRC_TGT) { rxe_counter_inc(rxe, RXE_CNT_SND_RNR); /* RC - class B */ send_ack(qp, AETH_RNR_NAK | @@ -1374,7 +1469,8 @@ int rxe_responder(void *arg) break; case RESPST_ERR_RKEY_VIOLATION: - if (qp_type(qp) == IB_QPT_RC) { + if (qp_type(qp) == IB_QPT_RC || + qp_type(qp) == IB_QPT_XRC_TGT) { /* Class C */ do_class_ac_error(qp, AETH_NAK_REM_ACC_ERR, IB_WC_REM_ACCESS_ERR); @@ -1400,7 +1496,8 @@ int rxe_responder(void *arg) break; case RESPST_ERR_LENGTH: - if (qp_type(qp) == IB_QPT_RC) { + if (qp_type(qp) == IB_QPT_RC || + qp_type(qp) == IB_QPT_XRC_TGT) { /* Class C */ do_class_ac_error(qp, AETH_NAK_INVALID_REQ, IB_WC_REM_INV_REQ_ERR);