From patchwork Thu Sep 29 17:08:25 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12994465
X-Patchwork-Delegate: jgg@ziepe.ca
Received: from ubuntu-22.tx.rr.com (2603-8081-140c-1a00-c4e7-bfae-90ed-ac81.res6.spectrum.com.
[2603:8081:140c:1a00:c4e7:bfae:90ed:ac81]) by smtp.googlemail.com with ESMTPSA id v17-20020a056808005100b00349a06c581fsm2798557oic.3.2022.09.29.10.09.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 29 Sep 2022 10:09:07 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v2 01/13] RDMA/rxe: Replace START->FIRST, END->LAST Date: Thu, 29 Sep 2022 12:08:25 -0500 Message-Id: <20220929170836.17838-2-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220929170836.17838-1-rpearsonhpe@gmail.com> References: <20220929170836.17838-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Replace RXE_START_MASK by RXE_FIRST_MASK, RXE_END_MASK by RXE_LAST_MASK and add RXE_ONLY_MASK = FIRST | LAST to match normal IBA usage. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_comp.c | 6 +- drivers/infiniband/sw/rxe/rxe_net.c | 2 +- drivers/infiniband/sw/rxe/rxe_opcode.c | 143 +++++++++++-------------- drivers/infiniband/sw/rxe/rxe_opcode.h | 5 +- drivers/infiniband/sw/rxe/rxe_req.c | 10 +- drivers/infiniband/sw/rxe/rxe_resp.c | 4 +- 6 files changed, 76 insertions(+), 94 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c index fb0c008af78c..1f10ae4a35d5 100644 --- a/drivers/infiniband/sw/rxe/rxe_comp.c +++ b/drivers/infiniband/sw/rxe/rxe_comp.c @@ -221,7 +221,7 @@ static inline enum comp_state check_ack(struct rxe_qp *qp, switch (qp->comp.opcode) { case -1: /* Will catch all *_ONLY cases. */ - if (!(mask & RXE_START_MASK)) + if (!(mask & RXE_FIRST_MASK)) return COMPST_ERROR; break; @@ -354,7 +354,7 @@ static inline enum comp_state do_read(struct rxe_qp *qp, return COMPST_ERROR; } - if (wqe->dma.resid == 0 && (pkt->mask & RXE_END_MASK)) + if (wqe->dma.resid == 0 && (pkt->mask & RXE_LAST_MASK)) return COMPST_COMP_ACK; return COMPST_UPDATE_COMP; @@ -636,7 +636,7 @@ int rxe_completer(void *arg) break; case COMPST_UPDATE_COMP: - if (pkt->mask & RXE_END_MASK) + if (pkt->mask & RXE_LAST_MASK) qp->comp.opcode = -1; else qp->comp.opcode = pkt->opcode; diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c index c53f4529f098..d46190ad082f 100644 --- a/drivers/infiniband/sw/rxe/rxe_net.c +++ b/drivers/infiniband/sw/rxe/rxe_net.c @@ -428,7 +428,7 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, } if ((qp_type(qp) != IB_QPT_RC) && - (pkt->mask & RXE_END_MASK)) { + (pkt->mask & RXE_LAST_MASK)) { pkt->wqe->state = wqe_state_done; rxe_run_task(&qp->comp.task, 1); } diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c index d4ba4d506f17..0ea587c15931 100644 --- a/drivers/infiniband/sw/rxe/rxe_opcode.c +++ b/drivers/infiniband/sw/rxe/rxe_opcode.c @@ -107,7 +107,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_SEND_FIRST] = { .name = "IB_OPCODE_RC_SEND_FIRST", .mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_RWR_MASK | - RXE_SEND_MASK | RXE_START_MASK, + RXE_SEND_MASK | RXE_FIRST_MASK, .length = RXE_BTH_BYTES, .offset = { [RXE_BTH] = 0, @@ -127,7 +127,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_SEND_LAST] = { .name = "IB_OPCODE_RC_SEND_LAST", .mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | - RXE_SEND_MASK | RXE_END_MASK, + RXE_SEND_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES, .offset = { [RXE_BTH] = 0, @@ -137,7 +137,7 @@ struct 
rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_SEND_LAST_WITH_IMMEDIATE] = { .name = "IB_OPCODE_RC_SEND_LAST_WITH_IMMEDIATE", .mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_COMP_MASK | RXE_SEND_MASK | RXE_END_MASK, + RXE_COMP_MASK | RXE_SEND_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES, .offset = { [RXE_BTH] = 0, @@ -149,8 +149,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_SEND_ONLY] = { .name = "IB_OPCODE_RC_SEND_ONLY", .mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | - RXE_RWR_MASK | RXE_SEND_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_RWR_MASK | RXE_SEND_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES, .offset = { [RXE_BTH] = 0, @@ -161,7 +160,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_RC_SEND_ONLY_WITH_IMMEDIATE", .mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES, .offset = { [RXE_BTH] = 0, @@ -173,7 +172,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_RDMA_WRITE_FIRST] = { .name = "IB_OPCODE_RC_RDMA_WRITE_FIRST", .mask = RXE_RETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_WRITE_MASK | RXE_START_MASK, + RXE_WRITE_MASK | RXE_FIRST_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -195,7 +194,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_RDMA_WRITE_LAST] = { .name = "IB_OPCODE_RC_RDMA_WRITE_LAST", .mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | - RXE_END_MASK, + RXE_LAST_MASK, .length = RXE_BTH_BYTES, .offset = { [RXE_BTH] = 0, @@ -206,7 +205,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_RC_RDMA_WRITE_LAST_WITH_IMMEDIATE", .mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | RXE_COMP_MASK | RXE_RWR_MASK | - RXE_END_MASK, + RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES, .offset = { [RXE_BTH] = 0, @@ -218,8 +217,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_RDMA_WRITE_ONLY] = { .name = "IB_OPCODE_RC_RDMA_WRITE_ONLY", .mask = RXE_RETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_WRITE_MASK | RXE_START_MASK | - RXE_END_MASK, + RXE_WRITE_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -231,9 +229,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_RDMA_WRITE_ONLY_WITH_IMMEDIATE] = { .name = "IB_OPCODE_RC_RDMA_WRITE_ONLY_WITH_IMMEDIATE", .mask = RXE_RETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | - RXE_REQ_MASK | RXE_WRITE_MASK | - RXE_COMP_MASK | RXE_RWR_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_COMP_MASK | + RXE_RWR_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_RETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -248,7 +245,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_RDMA_READ_REQUEST] = { .name = "IB_OPCODE_RC_RDMA_READ_REQUEST", .mask = RXE_RETH_MASK | RXE_REQ_MASK | RXE_READ_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -260,7 +257,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST] = { .name = "IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST", .mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK | - RXE_START_MASK, + RXE_FIRST_MASK, .length = RXE_BTH_BYTES + 
RXE_AETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -281,7 +278,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_RDMA_READ_RESPONSE_LAST] = { .name = "IB_OPCODE_RC_RDMA_READ_RESPONSE_LAST", .mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK | - RXE_END_MASK, + RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_AETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -293,7 +290,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_RDMA_READ_RESPONSE_ONLY] = { .name = "IB_OPCODE_RC_RDMA_READ_RESPONSE_ONLY", .mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_AETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -304,8 +301,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { }, [IB_OPCODE_RC_ACKNOWLEDGE] = { .name = "IB_OPCODE_RC_ACKNOWLEDGE", - .mask = RXE_AETH_MASK | RXE_ACK_MASK | RXE_START_MASK | - RXE_END_MASK, + .mask = RXE_AETH_MASK | RXE_ACK_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_AETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -317,7 +313,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE] = { .name = "IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE", .mask = RXE_AETH_MASK | RXE_ATMACK_MASK | RXE_ACK_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_ATMACK_BYTES + RXE_AETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -332,7 +328,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_COMPARE_SWAP] = { .name = "IB_OPCODE_RC_COMPARE_SWAP", .mask = RXE_ATMETH_MASK | RXE_REQ_MASK | RXE_ATOMIC_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_ATMETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -344,7 +340,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_FETCH_ADD] = { .name = "IB_OPCODE_RC_FETCH_ADD", .mask = RXE_ATMETH_MASK | RXE_REQ_MASK | RXE_ATOMIC_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_ATMETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -356,7 +352,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RC_SEND_LAST_WITH_INVALIDATE] = { .name = "IB_OPCODE_RC_SEND_LAST_WITH_INVALIDATE", .mask = RXE_IETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_COMP_MASK | RXE_SEND_MASK | RXE_END_MASK, + RXE_COMP_MASK | RXE_SEND_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_IETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -369,7 +365,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_RC_SEND_ONLY_INV", .mask = RXE_IETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK | - RXE_END_MASK | RXE_START_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -383,7 +379,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_SEND_FIRST] = { .name = "IB_OPCODE_UC_SEND_FIRST", .mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_RWR_MASK | - RXE_SEND_MASK | RXE_START_MASK, + RXE_SEND_MASK | RXE_FIRST_MASK, .length = RXE_BTH_BYTES, .offset = { [RXE_BTH] = 0, @@ -403,7 +399,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_SEND_LAST] = { .name = "IB_OPCODE_UC_SEND_LAST", .mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | - RXE_SEND_MASK | RXE_END_MASK, + RXE_SEND_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES, .offset = { [RXE_BTH] = 0, @@ -413,7 +409,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_SEND_LAST_WITH_IMMEDIATE] = { .name = 
"IB_OPCODE_UC_SEND_LAST_WITH_IMMEDIATE", .mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_COMP_MASK | RXE_SEND_MASK | RXE_END_MASK, + RXE_COMP_MASK | RXE_SEND_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES, .offset = { [RXE_BTH] = 0, @@ -425,8 +421,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_SEND_ONLY] = { .name = "IB_OPCODE_UC_SEND_ONLY", .mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | - RXE_RWR_MASK | RXE_SEND_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_RWR_MASK | RXE_SEND_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES, .offset = { [RXE_BTH] = 0, @@ -437,7 +432,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_UC_SEND_ONLY_WITH_IMMEDIATE", .mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES, .offset = { [RXE_BTH] = 0, @@ -449,7 +444,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_RDMA_WRITE_FIRST] = { .name = "IB_OPCODE_UC_RDMA_WRITE_FIRST", .mask = RXE_RETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_WRITE_MASK | RXE_START_MASK, + RXE_WRITE_MASK | RXE_FIRST_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -471,7 +466,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_RDMA_WRITE_LAST] = { .name = "IB_OPCODE_UC_RDMA_WRITE_LAST", .mask = RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | - RXE_END_MASK, + RXE_LAST_MASK, .length = RXE_BTH_BYTES, .offset = { [RXE_BTH] = 0, @@ -482,7 +477,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_UC_RDMA_WRITE_LAST_WITH_IMMEDIATE", .mask = RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | RXE_COMP_MASK | RXE_RWR_MASK | - RXE_END_MASK, + RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES, .offset = { [RXE_BTH] = 0, @@ -494,8 +489,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_RDMA_WRITE_ONLY] = { .name = "IB_OPCODE_UC_RDMA_WRITE_ONLY", .mask = RXE_RETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_WRITE_MASK | RXE_START_MASK | - RXE_END_MASK, + RXE_WRITE_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -507,9 +501,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_UC_RDMA_WRITE_ONLY_WITH_IMMEDIATE] = { .name = "IB_OPCODE_UC_RDMA_WRITE_ONLY_WITH_IMMEDIATE", .mask = RXE_RETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | - RXE_REQ_MASK | RXE_WRITE_MASK | - RXE_COMP_MASK | RXE_RWR_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_COMP_MASK | + RXE_RWR_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_RETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -527,7 +520,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_RD_SEND_FIRST", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_RWR_MASK | RXE_SEND_MASK | - RXE_START_MASK, + RXE_FIRST_MASK, .length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -542,8 +535,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_SEND_MIDDLE] = { .name = "IB_OPCODE_RD_SEND_MIDDLE", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_PAYLOAD_MASK | - RXE_REQ_MASK | RXE_SEND_MASK | - RXE_MIDDLE_MASK, + RXE_REQ_MASK | RXE_SEND_MASK | RXE_MIDDLE_MASK, .length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, 
@@ -559,7 +551,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_RD_SEND_LAST", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | RXE_SEND_MASK | - RXE_END_MASK, + RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -574,9 +566,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_SEND_LAST_WITH_IMMEDIATE] = { .name = "IB_OPCODE_RD_SEND_LAST_WITH_IMMEDIATE", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_IMMDT_MASK | - RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_COMP_MASK | RXE_SEND_MASK | - RXE_END_MASK, + RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | + RXE_SEND_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -597,7 +588,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_RD_SEND_ONLY", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | - RXE_SEND_MASK | RXE_START_MASK | RXE_END_MASK, + RXE_SEND_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -612,9 +603,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_SEND_ONLY_WITH_IMMEDIATE] = { .name = "IB_OPCODE_RD_SEND_ONLY_WITH_IMMEDIATE", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_IMMDT_MASK | - RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | + RXE_RWR_MASK | RXE_SEND_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -634,8 +624,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_WRITE_FIRST] = { .name = "IB_OPCODE_RD_RDMA_WRITE_FIRST", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_RETH_MASK | - RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_WRITE_MASK | RXE_START_MASK, + RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | + RXE_FIRST_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -655,8 +645,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_WRITE_MIDDLE] = { .name = "IB_OPCODE_RD_RDMA_WRITE_MIDDLE", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_PAYLOAD_MASK | - RXE_REQ_MASK | RXE_WRITE_MASK | - RXE_MIDDLE_MASK, + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_MIDDLE_MASK, .length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -671,8 +660,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_WRITE_LAST] = { .name = "IB_OPCODE_RD_RDMA_WRITE_LAST", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_PAYLOAD_MASK | - RXE_REQ_MASK | RXE_WRITE_MASK | - RXE_END_MASK, + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -687,9 +675,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_WRITE_LAST_WITH_IMMEDIATE] = { .name = "IB_OPCODE_RD_RDMA_WRITE_LAST_WITH_IMMEDIATE", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_IMMDT_MASK | - RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_WRITE_MASK | RXE_COMP_MASK | RXE_RWR_MASK | - RXE_END_MASK, + RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | + RXE_COMP_MASK | RXE_RWR_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -709,9 +696,8 @@ struct rxe_opcode_info 
rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_WRITE_ONLY] = { .name = "IB_OPCODE_RD_RDMA_WRITE_ONLY", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_RETH_MASK | - RXE_PAYLOAD_MASK | RXE_REQ_MASK | - RXE_WRITE_MASK | RXE_START_MASK | - RXE_END_MASK, + RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -731,10 +717,9 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_WRITE_ONLY_WITH_IMMEDIATE] = { .name = "IB_OPCODE_RD_RDMA_WRITE_ONLY_WITH_IMMEDIATE", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_RETH_MASK | - RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | - RXE_REQ_MASK | RXE_WRITE_MASK | - RXE_COMP_MASK | RXE_RWR_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | + RXE_WRITE_MASK | RXE_COMP_MASK | RXE_RWR_MASK | + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_RETH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -759,8 +744,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_READ_REQUEST] = { .name = "IB_OPCODE_RD_RDMA_READ_REQUEST", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_RETH_MASK | - RXE_REQ_MASK | RXE_READ_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_REQ_MASK | RXE_READ_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_RETH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -779,9 +763,8 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { }, [IB_OPCODE_RD_RDMA_READ_RESPONSE_FIRST] = { .name = "IB_OPCODE_RD_RDMA_READ_RESPONSE_FIRST", - .mask = RXE_RDETH_MASK | RXE_AETH_MASK | - RXE_PAYLOAD_MASK | RXE_ACK_MASK | - RXE_START_MASK, + .mask = RXE_RDETH_MASK | RXE_AETH_MASK | RXE_PAYLOAD_MASK | + RXE_ACK_MASK | RXE_FIRST_MASK, .length = RXE_BTH_BYTES + RXE_AETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -808,7 +791,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_READ_RESPONSE_LAST] = { .name = "IB_OPCODE_RD_RDMA_READ_RESPONSE_LAST", .mask = RXE_RDETH_MASK | RXE_AETH_MASK | RXE_PAYLOAD_MASK | - RXE_ACK_MASK | RXE_END_MASK, + RXE_ACK_MASK | RXE_LAST_MASK, .length = RXE_BTH_BYTES + RXE_AETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -823,7 +806,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_RDMA_READ_RESPONSE_ONLY] = { .name = "IB_OPCODE_RD_RDMA_READ_RESPONSE_ONLY", .mask = RXE_RDETH_MASK | RXE_AETH_MASK | RXE_PAYLOAD_MASK | - RXE_ACK_MASK | RXE_START_MASK | RXE_END_MASK, + RXE_ACK_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_AETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -838,7 +821,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_ACKNOWLEDGE] = { .name = "IB_OPCODE_RD_ACKNOWLEDGE", .mask = RXE_RDETH_MASK | RXE_AETH_MASK | RXE_ACK_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_AETH_BYTES + RXE_RDETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -850,7 +833,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_ATOMIC_ACKNOWLEDGE] = { .name = "IB_OPCODE_RD_ATOMIC_ACKNOWLEDGE", .mask = RXE_RDETH_MASK | RXE_AETH_MASK | RXE_ATMACK_MASK | - RXE_ACK_MASK | RXE_START_MASK | RXE_END_MASK, + RXE_ACK_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_ATMACK_BYTES + RXE_AETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -866,8 +849,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_COMPARE_SWAP] = { .name = "RD_COMPARE_SWAP", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_ATMETH_MASK | - 
RXE_REQ_MASK | RXE_ATOMIC_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_REQ_MASK | RXE_ATOMIC_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_ATMETH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -887,8 +869,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { [IB_OPCODE_RD_FETCH_ADD] = { .name = "IB_OPCODE_RD_FETCH_ADD", .mask = RXE_RDETH_MASK | RXE_DETH_MASK | RXE_ATMETH_MASK | - RXE_REQ_MASK | RXE_ATOMIC_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_REQ_MASK | RXE_ATOMIC_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_ATMETH_BYTES + RXE_DETH_BYTES + RXE_RDETH_BYTES, .offset = { @@ -911,7 +892,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_UD_SEND_ONLY", .mask = RXE_DETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK | - RXE_START_MASK | RXE_END_MASK, + RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_DETH_BYTES, .offset = { [RXE_BTH] = 0, @@ -924,7 +905,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { .name = "IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE", .mask = RXE_DETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | - RXE_SEND_MASK | RXE_START_MASK | RXE_END_MASK, + RXE_SEND_MASK | RXE_ONLY_MASK, .length = RXE_BTH_BYTES + RXE_IMMDT_BYTES + RXE_DETH_BYTES, .offset = { [RXE_BTH] = 0, diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.h b/drivers/infiniband/sw/rxe/rxe_opcode.h index 8f9aaaf260f2..d2b6a8232e92 100644 --- a/drivers/infiniband/sw/rxe/rxe_opcode.h +++ b/drivers/infiniband/sw/rxe/rxe_opcode.h @@ -75,9 +75,10 @@ enum rxe_hdr_mask { RXE_RWR_MASK = BIT(NUM_HDR_TYPES + 6), RXE_COMP_MASK = BIT(NUM_HDR_TYPES + 7), - RXE_START_MASK = BIT(NUM_HDR_TYPES + 8), + RXE_FIRST_MASK = BIT(NUM_HDR_TYPES + 8), RXE_MIDDLE_MASK = BIT(NUM_HDR_TYPES + 9), - RXE_END_MASK = BIT(NUM_HDR_TYPES + 10), + RXE_LAST_MASK = BIT(NUM_HDR_TYPES + 10), + RXE_ONLY_MASK = RXE_FIRST_MASK | RXE_LAST_MASK, RXE_LOOPBACK_MASK = BIT(NUM_HDR_TYPES + 12), diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index f63771207970..e136abc802af 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -403,7 +403,7 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp, /* init bth */ solicited = (ibwr->send_flags & IB_SEND_SOLICITED) && - (pkt->mask & RXE_END_MASK) && + (pkt->mask & RXE_LAST_MASK) && ((pkt->mask & (RXE_SEND_MASK)) || (pkt->mask & (RXE_WRITE_MASK | RXE_IMMDT_MASK)) == (RXE_WRITE_MASK | RXE_IMMDT_MASK)); @@ -411,7 +411,7 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp, qp_num = (pkt->mask & RXE_DETH_MASK) ? 
ibwr->wr.ud.remote_qpn : qp->attr.dest_qp_num; - ack_req = ((pkt->mask & RXE_END_MASK) || + ack_req = ((pkt->mask & RXE_LAST_MASK) || (qp->req.noack_pkts++ > RXE_MAX_PKT_PER_ACK)); if (ack_req) qp->req.noack_pkts = 0; @@ -493,7 +493,7 @@ static void update_wqe_state(struct rxe_qp *qp, struct rxe_send_wqe *wqe, struct rxe_pkt_info *pkt) { - if (pkt->mask & RXE_END_MASK) { + if (pkt->mask & RXE_LAST_MASK) { if (qp_type(qp) == IB_QPT_RC) wqe->state = wqe_state_pending; } else { @@ -513,7 +513,7 @@ static void update_wqe_psn(struct rxe_qp *qp, if (num_pkt == 0) num_pkt = 1; - if (pkt->mask & RXE_START_MASK) { + if (pkt->mask & RXE_FIRST_MASK) { wqe->first_psn = qp->req.psn; wqe->last_psn = (qp->req.psn + num_pkt - 1) & BTH_PSN_MASK; } @@ -550,7 +550,7 @@ static void update_state(struct rxe_qp *qp, struct rxe_pkt_info *pkt) { qp->req.opcode = pkt->opcode; - if (pkt->mask & RXE_END_MASK) + if (pkt->mask & RXE_LAST_MASK) qp->req.wqe_index = queue_next_index(qp->sq.queue, qp->req.wqe_index);
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c index ed5a09e86417..e62a7f31779f 100644 --- a/drivers/infiniband/sw/rxe/rxe_resp.c +++ b/drivers/infiniband/sw/rxe/rxe_resp.c @@ -147,7 +147,7 @@ static enum resp_states check_psn(struct rxe_qp *qp, case IB_QPT_UC: if (qp->resp.drop_msg || diff != 0) { - if (pkt->mask & RXE_START_MASK) { + if (pkt->mask & RXE_FIRST_MASK) { qp->resp.drop_msg = 0; return RESPST_CHK_OP_SEQ; } @@ -901,7 +901,7 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt) return RESPST_ERR_INVALIDATE_RKEY; } - if (pkt->mask & RXE_END_MASK) + if (pkt->mask & RXE_LAST_MASK) /* We successfully processed this new request. */ qp->resp.msn++;
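
For readers following the rename: after this patch the segmentation bits in rxe_opcode.h line up with the IBA FIRST/MIDDLE/LAST/ONLY naming, and RXE_ONLY_MASK is the OR of the first and last bits rather than a new bit. A minimal sketch of the resulting layout, taken from the rxe_opcode.h hunk above (surrounding enum members elided):

	enum rxe_hdr_mask {
		/* ... header-type and op-class bits ... */
		RXE_FIRST_MASK	= BIT(NUM_HDR_TYPES + 8),
		RXE_MIDDLE_MASK	= BIT(NUM_HDR_TYPES + 9),
		RXE_LAST_MASK	= BIT(NUM_HDR_TYPES + 10),
		/* an *_ONLY packet is both the first and the last of a message */
		RXE_ONLY_MASK	= RXE_FIRST_MASK | RXE_LAST_MASK,
		/* ... */
	};

	/* consequently a SEND_ONLY packet, whose table entry carries
	 * RXE_ONLY_MASK, passes both (mask & RXE_FIRST_MASK) and
	 * (mask & RXE_LAST_MASK) tests, e.g. in check_ack() above.
	 */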
From patchwork Thu Sep 29 17:08:26 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12994462
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 02/13] RDMA/rxe: Move next_opcode() to rxe_opcode.c
Date: Thu, 29 Sep 2022 12:08:26 -0500
Message-Id: <20220929170836.17838-3-rpearsonhpe@gmail.com>
In-Reply-To: <20220929170836.17838-1-rpearsonhpe@gmail.com>
References: <20220929170836.17838-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Move next_opcode() from rxe_req.c to rxe_opcode.c.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_loc.h    |   3 +
 drivers/infiniband/sw/rxe/rxe_opcode.c | 156 ++++++++++++++++++++++++-
 drivers/infiniband/sw/rxe/rxe_req.c    | 156 -------------------------
 3 files changed, 157 insertions(+), 158 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index c2a5c8814a48..a806737168d0 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -99,6 +99,9 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, struct sk_buff *skb); const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num); +/* opcode.c */ +int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, u32 opcode); + /* rxe_qp.c */ int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init); int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c index 0ea587c15931..6b1a1f197c4d 100644 --- a/drivers/infiniband/sw/rxe/rxe_opcode.c +++ b/drivers/infiniband/sw/rxe/rxe_opcode.c @@ -5,8 +5,8 @@ */ #include -#include "rxe_opcode.h" -#include "rxe_hdr.h" + +#include "rxe.h" /* useful information about work request opcodes and pkt opcodes in * table form */ @@ -919,3 +919,155 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { }, }; + +static int next_opcode_rc(struct rxe_qp *qp, u32 opcode, int fits) +{ + switch (opcode) { + case IB_WR_RDMA_WRITE: + if (qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST || + qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE) + return fits ?
+ IB_OPCODE_RC_RDMA_WRITE_LAST : + IB_OPCODE_RC_RDMA_WRITE_MIDDLE; + else + return fits ? + IB_OPCODE_RC_RDMA_WRITE_ONLY : + IB_OPCODE_RC_RDMA_WRITE_FIRST; + + case IB_WR_RDMA_WRITE_WITH_IMM: + if (qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST || + qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE) + return fits ? + IB_OPCODE_RC_RDMA_WRITE_LAST_WITH_IMMEDIATE : + IB_OPCODE_RC_RDMA_WRITE_MIDDLE; + else + return fits ? + IB_OPCODE_RC_RDMA_WRITE_ONLY_WITH_IMMEDIATE : + IB_OPCODE_RC_RDMA_WRITE_FIRST; + + case IB_WR_SEND: + if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE) + return fits ? + IB_OPCODE_RC_SEND_LAST : + IB_OPCODE_RC_SEND_MIDDLE; + else + return fits ? + IB_OPCODE_RC_SEND_ONLY : + IB_OPCODE_RC_SEND_FIRST; + + case IB_WR_SEND_WITH_IMM: + if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE) + return fits ? + IB_OPCODE_RC_SEND_LAST_WITH_IMMEDIATE : + IB_OPCODE_RC_SEND_MIDDLE; + else + return fits ? + IB_OPCODE_RC_SEND_ONLY_WITH_IMMEDIATE : + IB_OPCODE_RC_SEND_FIRST; + + case IB_WR_RDMA_READ: + return IB_OPCODE_RC_RDMA_READ_REQUEST; + + case IB_WR_ATOMIC_CMP_AND_SWP: + return IB_OPCODE_RC_COMPARE_SWAP; + + case IB_WR_ATOMIC_FETCH_AND_ADD: + return IB_OPCODE_RC_FETCH_ADD; + + case IB_WR_SEND_WITH_INV: + if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE) + return fits ? IB_OPCODE_RC_SEND_LAST_WITH_INVALIDATE : + IB_OPCODE_RC_SEND_MIDDLE; + else + return fits ? IB_OPCODE_RC_SEND_ONLY_WITH_INVALIDATE : + IB_OPCODE_RC_SEND_FIRST; + case IB_WR_REG_MR: + case IB_WR_LOCAL_INV: + return opcode; + } + + return -EINVAL; +} + +static int next_opcode_uc(struct rxe_qp *qp, u32 opcode, int fits) +{ + switch (opcode) { + case IB_WR_RDMA_WRITE: + if (qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST || + qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE) + return fits ? + IB_OPCODE_UC_RDMA_WRITE_LAST : + IB_OPCODE_UC_RDMA_WRITE_MIDDLE; + else + return fits ? + IB_OPCODE_UC_RDMA_WRITE_ONLY : + IB_OPCODE_UC_RDMA_WRITE_FIRST; + + case IB_WR_RDMA_WRITE_WITH_IMM: + if (qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST || + qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE) + return fits ? + IB_OPCODE_UC_RDMA_WRITE_LAST_WITH_IMMEDIATE : + IB_OPCODE_UC_RDMA_WRITE_MIDDLE; + else + return fits ? + IB_OPCODE_UC_RDMA_WRITE_ONLY_WITH_IMMEDIATE : + IB_OPCODE_UC_RDMA_WRITE_FIRST; + + case IB_WR_SEND: + if (qp->req.opcode == IB_OPCODE_UC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_UC_SEND_MIDDLE) + return fits ? + IB_OPCODE_UC_SEND_LAST : + IB_OPCODE_UC_SEND_MIDDLE; + else + return fits ? + IB_OPCODE_UC_SEND_ONLY : + IB_OPCODE_UC_SEND_FIRST; + + case IB_WR_SEND_WITH_IMM: + if (qp->req.opcode == IB_OPCODE_UC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_UC_SEND_MIDDLE) + return fits ? + IB_OPCODE_UC_SEND_LAST_WITH_IMMEDIATE : + IB_OPCODE_UC_SEND_MIDDLE; + else + return fits ? 
+ IB_OPCODE_UC_SEND_ONLY_WITH_IMMEDIATE : + IB_OPCODE_UC_SEND_FIRST; + } + + return -EINVAL; +} + +int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, u32 opcode) +{ + int fits = (wqe->dma.resid <= qp->mtu); + + switch (qp_type(qp)) { + case IB_QPT_RC: + return next_opcode_rc(qp, opcode, fits); + + case IB_QPT_UC: + return next_opcode_uc(qp, opcode, fits); + + case IB_QPT_UD: + case IB_QPT_GSI: + switch (opcode) { + case IB_WR_SEND: + return IB_OPCODE_UD_SEND_ONLY; + + case IB_WR_SEND_WITH_IMM: + return IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE; + } + break; + + default: + break; + } + + return -EINVAL; +} diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index e136abc802af..d2a9abfed596 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -11,9 +11,6 @@ #include "rxe_loc.h" #include "rxe_queue.h" -static int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, - u32 opcode); - static inline void retry_first_write_send(struct rxe_qp *qp, struct rxe_send_wqe *wqe, int npsn) { @@ -194,159 +191,6 @@ static int rxe_wqe_is_fenced(struct rxe_qp *qp, struct rxe_send_wqe *wqe) atomic_read(&qp->req.rd_atomic) != qp->attr.max_rd_atomic; } -static int next_opcode_rc(struct rxe_qp *qp, u32 opcode, int fits) -{ - switch (opcode) { - case IB_WR_RDMA_WRITE: - if (qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST || - qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE) - return fits ? - IB_OPCODE_RC_RDMA_WRITE_LAST : - IB_OPCODE_RC_RDMA_WRITE_MIDDLE; - else - return fits ? - IB_OPCODE_RC_RDMA_WRITE_ONLY : - IB_OPCODE_RC_RDMA_WRITE_FIRST; - - case IB_WR_RDMA_WRITE_WITH_IMM: - if (qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_FIRST || - qp->req.opcode == IB_OPCODE_RC_RDMA_WRITE_MIDDLE) - return fits ? - IB_OPCODE_RC_RDMA_WRITE_LAST_WITH_IMMEDIATE : - IB_OPCODE_RC_RDMA_WRITE_MIDDLE; - else - return fits ? - IB_OPCODE_RC_RDMA_WRITE_ONLY_WITH_IMMEDIATE : - IB_OPCODE_RC_RDMA_WRITE_FIRST; - - case IB_WR_SEND: - if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST || - qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE) - return fits ? - IB_OPCODE_RC_SEND_LAST : - IB_OPCODE_RC_SEND_MIDDLE; - else - return fits ? - IB_OPCODE_RC_SEND_ONLY : - IB_OPCODE_RC_SEND_FIRST; - - case IB_WR_SEND_WITH_IMM: - if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST || - qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE) - return fits ? - IB_OPCODE_RC_SEND_LAST_WITH_IMMEDIATE : - IB_OPCODE_RC_SEND_MIDDLE; - else - return fits ? - IB_OPCODE_RC_SEND_ONLY_WITH_IMMEDIATE : - IB_OPCODE_RC_SEND_FIRST; - - case IB_WR_RDMA_READ: - return IB_OPCODE_RC_RDMA_READ_REQUEST; - - case IB_WR_ATOMIC_CMP_AND_SWP: - return IB_OPCODE_RC_COMPARE_SWAP; - - case IB_WR_ATOMIC_FETCH_AND_ADD: - return IB_OPCODE_RC_FETCH_ADD; - - case IB_WR_SEND_WITH_INV: - if (qp->req.opcode == IB_OPCODE_RC_SEND_FIRST || - qp->req.opcode == IB_OPCODE_RC_SEND_MIDDLE) - return fits ? IB_OPCODE_RC_SEND_LAST_WITH_INVALIDATE : - IB_OPCODE_RC_SEND_MIDDLE; - else - return fits ? IB_OPCODE_RC_SEND_ONLY_WITH_INVALIDATE : - IB_OPCODE_RC_SEND_FIRST; - case IB_WR_REG_MR: - case IB_WR_LOCAL_INV: - return opcode; - } - - return -EINVAL; -} - -static int next_opcode_uc(struct rxe_qp *qp, u32 opcode, int fits) -{ - switch (opcode) { - case IB_WR_RDMA_WRITE: - if (qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST || - qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE) - return fits ? - IB_OPCODE_UC_RDMA_WRITE_LAST : - IB_OPCODE_UC_RDMA_WRITE_MIDDLE; - else - return fits ? 
- IB_OPCODE_UC_RDMA_WRITE_ONLY : - IB_OPCODE_UC_RDMA_WRITE_FIRST; - - case IB_WR_RDMA_WRITE_WITH_IMM: - if (qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_FIRST || - qp->req.opcode == IB_OPCODE_UC_RDMA_WRITE_MIDDLE) - return fits ? - IB_OPCODE_UC_RDMA_WRITE_LAST_WITH_IMMEDIATE : - IB_OPCODE_UC_RDMA_WRITE_MIDDLE; - else - return fits ? - IB_OPCODE_UC_RDMA_WRITE_ONLY_WITH_IMMEDIATE : - IB_OPCODE_UC_RDMA_WRITE_FIRST; - - case IB_WR_SEND: - if (qp->req.opcode == IB_OPCODE_UC_SEND_FIRST || - qp->req.opcode == IB_OPCODE_UC_SEND_MIDDLE) - return fits ? - IB_OPCODE_UC_SEND_LAST : - IB_OPCODE_UC_SEND_MIDDLE; - else - return fits ? - IB_OPCODE_UC_SEND_ONLY : - IB_OPCODE_UC_SEND_FIRST; - - case IB_WR_SEND_WITH_IMM: - if (qp->req.opcode == IB_OPCODE_UC_SEND_FIRST || - qp->req.opcode == IB_OPCODE_UC_SEND_MIDDLE) - return fits ? - IB_OPCODE_UC_SEND_LAST_WITH_IMMEDIATE : - IB_OPCODE_UC_SEND_MIDDLE; - else - return fits ? - IB_OPCODE_UC_SEND_ONLY_WITH_IMMEDIATE : - IB_OPCODE_UC_SEND_FIRST; - } - - return -EINVAL; -} - -static int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, - u32 opcode) -{ - int fits = (wqe->dma.resid <= qp->mtu); - - switch (qp_type(qp)) { - case IB_QPT_RC: - return next_opcode_rc(qp, opcode, fits); - - case IB_QPT_UC: - return next_opcode_uc(qp, opcode, fits); - - case IB_QPT_UD: - case IB_QPT_GSI: - switch (opcode) { - case IB_WR_SEND: - return IB_OPCODE_UD_SEND_ONLY; - - case IB_WR_SEND_WITH_IMM: - return IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE; - } - break; - - default: - break; - } - - return -EINVAL; -} - static inline int check_init_depth(struct rxe_qp *qp, struct rxe_send_wqe *wqe) { int depth;
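
The function is moved verbatim, and its behavior is easiest to see with a worked example: next_opcode() compares the remaining DMA length against the path MTU (fits = wqe->dma.resid <= qp->mtu) and consults the opcode of the previous packet of the same message. A sketch of the RC send case, following the code above:

	/* RC SEND spanning three MTU-sized packets:
	 *   pkt 1: no prior fragment, fits == 0 -> IB_OPCODE_RC_SEND_FIRST
	 *   pkt 2: prior == SEND_FIRST, fits == 0 -> IB_OPCODE_RC_SEND_MIDDLE
	 *   pkt 3: prior == SEND_MIDDLE, fits == 1 -> IB_OPCODE_RC_SEND_LAST
	 * RC SEND fitting in a single packet:
	 *   pkt 1: no prior fragment, fits == 1 -> IB_OPCODE_RC_SEND_ONLY
	 */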
From patchwork Thu Sep 29 17:08:27 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12994463
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 03/13] RDMA: Add xrc opcodes to ib_pack.h
Date: Thu, 29 Sep 2022 12:08:27 -0500
Message-Id: <20220929170836.17838-4-rpearsonhpe@gmail.com>
In-Reply-To: <20220929170836.17838-1-rpearsonhpe@gmail.com>
References: <20220929170836.17838-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Extend ib_pack.h to include xrc opcodes.
Signed-off-by: Bob Pearson
---
 include/rdma/ib_pack.h | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/include/rdma/ib_pack.h b/include/rdma/ib_pack.h index a9162f25beaf..cc9aac05d38e 100644 --- a/include/rdma/ib_pack.h +++ b/include/rdma/ib_pack.h @@ -56,8 +56,11 @@ enum { IB_OPCODE_UD = 0x60, /* per IBTA 1.3 vol 1 Table 38, A10.3.2 */ IB_OPCODE_CNP = 0x80, + IB_OPCODE_XRC = 0xa0, /* Manufacturer specific */ IB_OPCODE_MSP = 0xe0, + /* opcode type bits */ + IB_OPCODE_TYPE = 0xe0, /* operations -- just used to define real constants */ IB_OPCODE_SEND_FIRST = 0x00, @@ -84,6 +87,8 @@ enum { /* opcode 0x15 is reserved */ IB_OPCODE_SEND_LAST_WITH_INVALIDATE = 0x16, IB_OPCODE_SEND_ONLY_WITH_INVALIDATE = 0x17, + /* opcode command bits */ + IB_OPCODE_CMD = 0x1f, /* real constants follow -- see comment about above IB_OPCODE() macro for more details */ @@ -152,7 +157,32 @@ enum { /* UD */ IB_OPCODE(UD, SEND_ONLY), - IB_OPCODE(UD, SEND_ONLY_WITH_IMMEDIATE) + IB_OPCODE(UD, SEND_ONLY_WITH_IMMEDIATE), + + /* XRC */ + IB_OPCODE(XRC, SEND_FIRST), + IB_OPCODE(XRC, SEND_MIDDLE), + IB_OPCODE(XRC, SEND_LAST), + IB_OPCODE(XRC, SEND_LAST_WITH_IMMEDIATE), + IB_OPCODE(XRC, SEND_ONLY), + IB_OPCODE(XRC, SEND_ONLY_WITH_IMMEDIATE), + IB_OPCODE(XRC, RDMA_WRITE_FIRST), + IB_OPCODE(XRC, RDMA_WRITE_MIDDLE), + IB_OPCODE(XRC, RDMA_WRITE_LAST), + IB_OPCODE(XRC, RDMA_WRITE_LAST_WITH_IMMEDIATE), + IB_OPCODE(XRC, RDMA_WRITE_ONLY), + IB_OPCODE(XRC, RDMA_WRITE_ONLY_WITH_IMMEDIATE), + IB_OPCODE(XRC, RDMA_READ_REQUEST), + IB_OPCODE(XRC, RDMA_READ_RESPONSE_FIRST), + IB_OPCODE(XRC, RDMA_READ_RESPONSE_MIDDLE), + IB_OPCODE(XRC, RDMA_READ_RESPONSE_LAST), + IB_OPCODE(XRC, RDMA_READ_RESPONSE_ONLY), + IB_OPCODE(XRC, ACKNOWLEDGE), + IB_OPCODE(XRC, ATOMIC_ACKNOWLEDGE), + IB_OPCODE(XRC, COMPARE_SWAP), + IB_OPCODE(XRC, FETCH_ADD), + IB_OPCODE(XRC, SEND_LAST_WITH_INVALIDATE), + IB_OPCODE(XRC, SEND_ONLY_WITH_INVALIDATE), }; enum {
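
The new IB_OPCODE_TYPE and IB_OPCODE_CMD masks make the transport/operation split of a BTH opcode explicit, and the XRC constants compose exactly like the other transports (transport base plus operation, per the existing IB_OPCODE() macro). A short illustration using the values from the hunk above:

	/* IB_OPCODE_XRC_SEND_ONLY = IB_OPCODE_XRC + IB_OPCODE_SEND_ONLY
	 *                         = 0xa0 + 0x04 = 0xa4
	 * so for any opcode:
	 *   (opcode & IB_OPCODE_TYPE) == IB_OPCODE_XRC       -> an XRC packet
	 *   (opcode & IB_OPCODE_CMD)  == IB_OPCODE_SEND_ONLY -> a SEND_ONLY op
	 */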
From patchwork Thu Sep 29 17:08:28 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12994466
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 04/13] RDMA/rxe: Extend opcodes and headers to support xrc
Date: Thu, 29 Sep 2022 12:08:28 -0500
Message-Id: <20220929170836.17838-5-rpearsonhpe@gmail.com>
In-Reply-To: <20220929170836.17838-1-rpearsonhpe@gmail.com>
References: <20220929170836.17838-1-rpearsonhpe@gmail.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Extend rxe_hdr.h to include the xrceth header and extend opcode tables in rxe_opcode.c to support xrc operations and qps.
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_hdr.h | 36 +++ drivers/infiniband/sw/rxe/rxe_opcode.c | 379 +++++++++++++++++++++++-- drivers/infiniband/sw/rxe/rxe_opcode.h | 4 +- 3 files changed, 395 insertions(+), 24 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_hdr.h b/drivers/infiniband/sw/rxe/rxe_hdr.h index e432f9e37795..e947bcf75209 100644 --- a/drivers/infiniband/sw/rxe/rxe_hdr.h +++ b/drivers/infiniband/sw/rxe/rxe_hdr.h @@ -900,6 +900,41 @@ static inline void ieth_set_rkey(struct rxe_pkt_info *pkt, u32 rkey) rxe_opcode[pkt->opcode].offset[RXE_IETH], rkey); } +/****************************************************************************** + * XRC Extended Transport Header + ******************************************************************************/ +struct rxe_xrceth { + __be32 srqn; +}; + +#define XRCETH_SRQN_MASK (0x00ffffff) + +static inline u32 __xrceth_srqn(void *arg) +{ + struct rxe_xrceth *xrceth = arg; + + return be32_to_cpu(xrceth->srqn); +} + +static inline void __xrceth_set_srqn(void *arg, u32 srqn) +{ + struct rxe_xrceth *xrceth = arg; + + xrceth->srqn = cpu_to_be32(srqn & XRCETH_SRQN_MASK); +} + +static inline u32 xrceth_srqn(struct rxe_pkt_info *pkt) +{ + return __xrceth_srqn(pkt->hdr + + rxe_opcode[pkt->opcode].offset[RXE_XRCETH]); +} + +static inline void xrceth_set_srqn(struct rxe_pkt_info *pkt, u32 srqn) +{ + __xrceth_set_srqn(pkt->hdr + + rxe_opcode[pkt->opcode].offset[RXE_XRCETH], srqn); +} + enum rxe_hdr_length { RXE_BTH_BYTES = sizeof(struct rxe_bth), RXE_DETH_BYTES = sizeof(struct rxe_deth), @@ -909,6 +944,7 @@ enum rxe_hdr_length { RXE_ATMACK_BYTES = sizeof(struct rxe_atmack), RXE_ATMETH_BYTES = sizeof(struct rxe_atmeth), RXE_IETH_BYTES = sizeof(struct rxe_ieth), + RXE_XRCETH_BYTES = sizeof(struct rxe_xrceth), RXE_RDETH_BYTES = sizeof(struct rxe_rdeth), }; diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c index 6b1a1f197c4d..4ae926a37ef8 100644 --- a/drivers/infiniband/sw/rxe/rxe_opcode.c +++ b/drivers/infiniband/sw/rxe/rxe_opcode.c @@ -15,51 +15,58 @@ struct rxe_wr_opcode_info rxe_wr_opcode_info[] = { [IB_WR_RDMA_WRITE] = { .name = "IB_WR_RDMA_WRITE", .mask = { - [IB_QPT_RC] = WR_INLINE_MASK | WR_WRITE_MASK, - [IB_QPT_UC] = WR_INLINE_MASK | WR_WRITE_MASK, + [IB_QPT_RC] = WR_INLINE_MASK | WR_WRITE_MASK, + [IB_QPT_UC] = WR_INLINE_MASK | WR_WRITE_MASK, + [IB_QPT_XRC_INI] = WR_INLINE_MASK | WR_WRITE_MASK, }, }, [IB_WR_RDMA_WRITE_WITH_IMM] = { .name = "IB_WR_RDMA_WRITE_WITH_IMM", .mask = { - [IB_QPT_RC] = WR_INLINE_MASK | WR_WRITE_MASK, - [IB_QPT_UC] = WR_INLINE_MASK | WR_WRITE_MASK, + [IB_QPT_RC] = WR_INLINE_MASK | WR_WRITE_MASK, + [IB_QPT_UC] = WR_INLINE_MASK | WR_WRITE_MASK, + [IB_QPT_XRC_INI] = WR_INLINE_MASK | WR_WRITE_MASK, }, }, [IB_WR_SEND] = { .name = "IB_WR_SEND", .mask = { - [IB_QPT_GSI] = WR_INLINE_MASK | WR_SEND_MASK, - [IB_QPT_RC] = WR_INLINE_MASK | WR_SEND_MASK, - [IB_QPT_UC] = WR_INLINE_MASK | WR_SEND_MASK, - [IB_QPT_UD] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_GSI] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_RC] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_UC] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_UD] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_XRC_INI] = WR_INLINE_MASK | WR_SEND_MASK, }, }, [IB_WR_SEND_WITH_IMM] = { .name = "IB_WR_SEND_WITH_IMM", .mask = { - [IB_QPT_GSI] = WR_INLINE_MASK | WR_SEND_MASK, - [IB_QPT_RC] = WR_INLINE_MASK | WR_SEND_MASK, - [IB_QPT_UC] = WR_INLINE_MASK | WR_SEND_MASK, - [IB_QPT_UD] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_GSI] = WR_INLINE_MASK 
| WR_SEND_MASK, + [IB_QPT_RC] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_UC] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_UD] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_XRC_INI] = WR_INLINE_MASK | WR_SEND_MASK, }, }, [IB_WR_RDMA_READ] = { .name = "IB_WR_RDMA_READ", .mask = { - [IB_QPT_RC] = WR_READ_MASK, + [IB_QPT_RC] = WR_READ_MASK, + [IB_QPT_XRC_INI] = WR_READ_MASK, }, }, [IB_WR_ATOMIC_CMP_AND_SWP] = { .name = "IB_WR_ATOMIC_CMP_AND_SWP", .mask = { - [IB_QPT_RC] = WR_ATOMIC_MASK, + [IB_QPT_RC] = WR_ATOMIC_MASK, + [IB_QPT_XRC_INI] = WR_ATOMIC_MASK, }, }, [IB_WR_ATOMIC_FETCH_AND_ADD] = { .name = "IB_WR_ATOMIC_FETCH_AND_ADD", .mask = { - [IB_QPT_RC] = WR_ATOMIC_MASK, + [IB_QPT_RC] = WR_ATOMIC_MASK, + [IB_QPT_XRC_INI] = WR_ATOMIC_MASK, }, }, [IB_WR_LSO] = { @@ -71,34 +78,39 @@ struct rxe_wr_opcode_info rxe_wr_opcode_info[] = { [IB_WR_SEND_WITH_INV] = { .name = "IB_WR_SEND_WITH_INV", .mask = { - [IB_QPT_RC] = WR_INLINE_MASK | WR_SEND_MASK, - [IB_QPT_UC] = WR_INLINE_MASK | WR_SEND_MASK, - [IB_QPT_UD] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_RC] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_UC] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_UD] = WR_INLINE_MASK | WR_SEND_MASK, + [IB_QPT_XRC_INI] = WR_INLINE_MASK | WR_SEND_MASK, }, }, [IB_WR_RDMA_READ_WITH_INV] = { .name = "IB_WR_RDMA_READ_WITH_INV", .mask = { - [IB_QPT_RC] = WR_READ_MASK, + [IB_QPT_RC] = WR_READ_MASK, + [IB_QPT_XRC_INI] = WR_READ_MASK, }, }, [IB_WR_LOCAL_INV] = { .name = "IB_WR_LOCAL_INV", .mask = { - [IB_QPT_RC] = WR_LOCAL_OP_MASK, + [IB_QPT_RC] = WR_LOCAL_OP_MASK, + [IB_QPT_XRC_INI] = WR_LOCAL_OP_MASK, }, }, [IB_WR_REG_MR] = { .name = "IB_WR_REG_MR", .mask = { - [IB_QPT_RC] = WR_LOCAL_OP_MASK, + [IB_QPT_RC] = WR_LOCAL_OP_MASK, + [IB_QPT_XRC_INI] = WR_LOCAL_OP_MASK, }, }, [IB_WR_BIND_MW] = { .name = "IB_WR_BIND_MW", .mask = { - [IB_QPT_RC] = WR_LOCAL_OP_MASK, - [IB_QPT_UC] = WR_LOCAL_OP_MASK, + [IB_QPT_RC] = WR_LOCAL_OP_MASK, + [IB_QPT_UC] = WR_LOCAL_OP_MASK, + [IB_QPT_XRC_INI] = WR_LOCAL_OP_MASK, }, }, }; @@ -918,6 +930,327 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = { } }, + /* XRC */ + [IB_OPCODE_XRC_SEND_FIRST] = { + .name = "IB_OPCODE_XRC_SEND_FIRST", + .mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | + RXE_RWR_MASK | RXE_SEND_MASK | RXE_FIRST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_MIDDLE] = { + .name = "IB_OPCODE_XRC_SEND_MIDDLE", + .mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | + RXE_SEND_MASK | RXE_MIDDLE_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_LAST] = { + .name = "IB_OPCODE_XRC_SEND_LAST", + .mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | + RXE_COMP_MASK | RXE_SEND_MASK | RXE_LAST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_LAST_WITH_IMMEDIATE] = { + .name = "IB_OPCODE_XRC_SEND_LAST_WITH_IMMEDIATE", + .mask = RXE_XRCETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_COMP_MASK | RXE_SEND_MASK | + RXE_LAST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IMMDT_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_IMMDT] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + 
[RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_IMMDT_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_ONLY] = { + .name = "IB_OPCODE_XRC_SEND_ONLY", + .mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | + RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK | + RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_ONLY_WITH_IMMEDIATE] = { + .name = "IB_OPCODE_XRC_SEND_ONLY_WITH_IMMEDIATE", + .mask = RXE_XRCETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | + RXE_SEND_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IMMDT_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_IMMDT] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_IMMDT_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_WRITE_FIRST] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_FIRST", + .mask = RXE_XRCETH_MASK | RXE_RETH_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_FIRST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_RETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_RETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_RETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_WRITE_MIDDLE] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_MIDDLE", + .mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | + RXE_WRITE_MASK | RXE_MIDDLE_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_WRITE_LAST] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_LAST", + .mask = RXE_XRCETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK | + RXE_WRITE_MASK | RXE_LAST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_WRITE_LAST_WITH_IMMEDIATE] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_LAST_WITH_IMMEDIATE", + .mask = RXE_XRCETH_MASK | RXE_IMMDT_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_COMP_MASK | + RXE_RWR_MASK | RXE_LAST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IMMDT_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_IMMDT] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_IMMDT_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_WRITE_ONLY] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_ONLY", + .mask = RXE_XRCETH_MASK | RXE_RETH_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_WRITE_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_RETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_RETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_RETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_WRITE_ONLY_WITH_IMMEDIATE] = { + .name = "IB_OPCODE_XRC_RDMA_WRITE_ONLY_WITH_IMMEDIATE", + .mask = RXE_XRCETH_MASK | RXE_RETH_MASK | RXE_IMMDT_MASK | + RXE_PAYLOAD_MASK | RXE_REQ_MASK | RXE_WRITE_MASK | + RXE_COMP_MASK | RXE_RWR_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IMMDT_BYTES + + RXE_RETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_RETH] = RXE_BTH_BYTES + + 
RXE_XRCETH_BYTES, + [RXE_IMMDT] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_RETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_RETH_BYTES + + RXE_IMMDT_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_READ_REQUEST] = { + .name = "IB_OPCODE_XRC_RDMA_READ_REQUEST", + .mask = RXE_XRCETH_MASK | RXE_RETH_MASK | RXE_REQ_MASK | + RXE_READ_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_RETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_RETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_RETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_READ_RESPONSE_FIRST] = { + .name = "IB_OPCODE_XRC_RDMA_READ_RESPONSE_FIRST", + .mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK | + RXE_FIRST_MASK, + .length = RXE_BTH_BYTES + RXE_AETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_AETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_AETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_READ_RESPONSE_MIDDLE] = { + .name = "IB_OPCODE_XRC_RDMA_READ_RESPONSE_MIDDLE", + .mask = RXE_PAYLOAD_MASK | RXE_ACK_MASK | RXE_MIDDLE_MASK, + .length = RXE_BTH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_PAYLOAD] = RXE_BTH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_READ_RESPONSE_LAST] = { + .name = "IB_OPCODE_XRC_RDMA_READ_RESPONSE_LAST", + .mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK | + RXE_LAST_MASK, + .length = RXE_BTH_BYTES + RXE_AETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_AETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_AETH_BYTES, + } + }, + [IB_OPCODE_XRC_RDMA_READ_RESPONSE_ONLY] = { + .name = "IB_OPCODE_XRC_RDMA_READ_RESPONSE_ONLY", + .mask = RXE_AETH_MASK | RXE_PAYLOAD_MASK | RXE_ACK_MASK | + RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_AETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_AETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_AETH_BYTES, + } + }, + [IB_OPCODE_XRC_ACKNOWLEDGE] = { + .name = "IB_OPCODE_XRC_ACKNOWLEDGE", + .mask = RXE_AETH_MASK | RXE_ACK_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_AETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_AETH] = RXE_BTH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_AETH_BYTES, + } + }, + [IB_OPCODE_XRC_ATOMIC_ACKNOWLEDGE] = { + .name = "IB_OPCODE_XRC_ATOMIC_ACKNOWLEDGE", + .mask = RXE_AETH_MASK | RXE_ATMACK_MASK | RXE_ACK_MASK | + RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_ATMACK_BYTES + RXE_AETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_AETH] = RXE_BTH_BYTES, + [RXE_ATMACK] = RXE_BTH_BYTES + + RXE_AETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_AETH_BYTES + + RXE_ATMACK_BYTES, + } + }, + [IB_OPCODE_XRC_COMPARE_SWAP] = { + .name = "IB_OPCODE_XRC_COMPARE_SWAP", + .mask = RXE_XRCETH_MASK | RXE_ATMETH_MASK | RXE_REQ_MASK | + RXE_ATOMIC_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_ATMETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_ATMETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_ATMETH_BYTES, + } + }, + [IB_OPCODE_XRC_FETCH_ADD] = { + .name = "IB_OPCODE_XRC_FETCH_ADD", + .mask = RXE_XRCETH_MASK | RXE_ATMETH_MASK | RXE_REQ_MASK | + RXE_ATOMIC_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_ATMETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_ATMETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_ATMETH_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_LAST_WITH_INVALIDATE] = { + .name = 
"IB_OPCODE_XRC_SEND_LAST_WITH_INVALIDATE", + .mask = RXE_XRCETH_MASK | RXE_IETH_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_COMP_MASK | RXE_SEND_MASK | + RXE_LAST_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_IETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_IETH_BYTES, + } + }, + [IB_OPCODE_XRC_SEND_ONLY_WITH_INVALIDATE] = { + .name = "IB_OPCODE_XRC_SEND_ONLY_INV", + .mask = RXE_XRCETH_MASK | RXE_IETH_MASK | RXE_PAYLOAD_MASK | + RXE_REQ_MASK | RXE_COMP_MASK | RXE_RWR_MASK | + RXE_SEND_MASK | RXE_ONLY_MASK, + .length = RXE_BTH_BYTES + RXE_XRCETH_BYTES + RXE_IETH_BYTES, + .offset = { + [RXE_BTH] = 0, + [RXE_XRCETH] = RXE_BTH_BYTES, + [RXE_IETH] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES, + [RXE_PAYLOAD] = RXE_BTH_BYTES + + RXE_XRCETH_BYTES + + RXE_IETH_BYTES, + } + }, }; static int next_opcode_rc(struct rxe_qp *qp, u32 opcode, int fits) diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.h b/drivers/infiniband/sw/rxe/rxe_opcode.h index d2b6a8232e92..5528a47f0266 100644 --- a/drivers/infiniband/sw/rxe/rxe_opcode.h +++ b/drivers/infiniband/sw/rxe/rxe_opcode.h @@ -30,7 +30,7 @@ enum rxe_wr_mask { struct rxe_wr_opcode_info { char *name; - enum rxe_wr_mask mask[WR_MAX_QPT]; + enum rxe_wr_mask mask[IB_QPT_MAX]; }; extern struct rxe_wr_opcode_info rxe_wr_opcode_info[]; @@ -44,6 +44,7 @@ enum rxe_hdr_type { RXE_ATMETH, RXE_ATMACK, RXE_IETH, + RXE_XRCETH, RXE_RDETH, RXE_DETH, RXE_IMMDT, @@ -61,6 +62,7 @@ enum rxe_hdr_mask { RXE_ATMETH_MASK = BIT(RXE_ATMETH), RXE_ATMACK_MASK = BIT(RXE_ATMACK), RXE_IETH_MASK = BIT(RXE_IETH), + RXE_XRCETH_MASK = BIT(RXE_XRCETH), RXE_RDETH_MASK = BIT(RXE_RDETH), RXE_DETH_MASK = BIT(RXE_DETH), RXE_PAYLOAD_MASK = BIT(RXE_PAYLOAD), From patchwork Thu Sep 29 17:08:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12994464 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DC7D1C433FE for ; Thu, 29 Sep 2022 17:09:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234959AbiI2RJQ (ORCPT ); Thu, 29 Sep 2022 13:09:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59892 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236006AbiI2RJN (ORCPT ); Thu, 29 Sep 2022 13:09:13 -0400 Received: from mail-oo1-xc2f.google.com (mail-oo1-xc2f.google.com [IPv6:2607:f8b0:4864:20::c2f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D43611CE91B for ; Thu, 29 Sep 2022 10:09:11 -0700 (PDT) Received: by mail-oo1-xc2f.google.com with SMTP id u3-20020a4ab5c3000000b0044b125e5d9eso530329ooo.12 for ; Thu, 29 Sep 2022 10:09:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date; bh=ozTLBWu+OgVpUPFoK0XaDtZgeXfSbSLcsDTataCzAd4=; b=jRt+TNiRkUfaNbKnPLGRKwzHIcDEiRvqit1UYZn/pAzo+YdsnwFgoLt+holU5pdZJh dD5xZoGChjNwIDYd3FG8xUFWEgDyFS7W89RP2Xql3QuwxlTNy+2KSPMGqkr7DXuRBaHA i2OtwP8bOmQpR897REyv2BaBO+6FEBJ7ljefRbZnzWcaX62oKmPmYrcewmen4C9BbBEE 
Vh4SS7njXhOdJ3eOGcJwgUFKTFdTk6S9HUkgvGDMS8wm1k3EBWnPeSXgmvVbpZvXnfeQ XG1QxMTD7DxOpCPKEd2FrUpypWD5WAtFrKM6WS+yRAU2WcnXDUOUUXSTASJtCUWB/U7P fzHw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date; bh=ozTLBWu+OgVpUPFoK0XaDtZgeXfSbSLcsDTataCzAd4=; b=qs1AGWoah72qwG8rv8fWtz5bHW1/5CylE/1Bl4XL1Sr2cEHHEEkjAUDRlE754Nuj4b E3w1gOeqBvy+XYWHW3mSULq04UP36acGRESx95/+XjjFkLlBvTJg7fNkrZG4zDx1Siwn YhxjHF591U9XS+42/SGwglp13eQhTWxIXGZ/vzZlfXnQUFIYZaAdE2zGMx08vTi8ikRv a1+WdeHUvr4Q1U2pVKwWqLJ+oycHb1t7ekQZ+rrJCj5f7oRqUQo7W7mMZXy3OSsb7YRF x4xpaVkSR4pAzGSHLD79/Bn9BLb+lVlEFJN0nIBlqG8kXdnReJdP9kTrVsuj4DMSbB4C IDgw== X-Gm-Message-State: ACrzQf0XsnrVOljbGVL3cWzU0ubiovc7eOpW4iEmWReNDzcIiBKPKwAu ivqQfjyVABjKp7KLm4r60lT58Efkv8wV6A== X-Google-Smtp-Source: AMsMyM6PXlEJwNh+U0HsUJ5x2myO/kXOiZeDawOjB23FMui8JlgVTJFxfa6W4zYtyC6yq8B77PduWQ== X-Received: by 2002:a4a:d6ce:0:b0:476:7f74:26ff with SMTP id j14-20020a4ad6ce000000b004767f7426ffmr1772769oot.32.1664471351138; Thu, 29 Sep 2022 10:09:11 -0700 (PDT) Received: from ubuntu-22.tx.rr.com (2603-8081-140c-1a00-c4e7-bfae-90ed-ac81.res6.spectrum.com. [2603:8081:140c:1a00:c4e7:bfae:90ed:ac81]) by smtp.googlemail.com with ESMTPSA id v17-20020a056808005100b00349a06c581fsm2798557oic.3.2022.09.29.10.09.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 29 Sep 2022 10:09:10 -0700 (PDT) From: Bob Pearson To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org Cc: Bob Pearson Subject: [PATCH for-next v2 05/13] RDMA/rxe: Add xrc opcodes to next_opcode() Date: Thu, 29 Sep 2022 12:08:29 -0500 Message-Id: <20220929170836.17838-6-rpearsonhpe@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220929170836.17838-1-rpearsonhpe@gmail.com> References: <20220929170836.17838-1-rpearsonhpe@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Extend next_opcode() to support xrc operations. Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe_opcode.c | 88 ++++++++++++++++++++++++++ 1 file changed, 88 insertions(+) diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c index 4ae926a37ef8..c2bac0ce444a 100644 --- a/drivers/infiniband/sw/rxe/rxe_opcode.c +++ b/drivers/infiniband/sw/rxe/rxe_opcode.c @@ -1376,6 +1376,91 @@ static int next_opcode_uc(struct rxe_qp *qp, u32 opcode, int fits) return -EINVAL; } +static int next_opcode_xrc(struct rxe_qp *qp, u32 wr_opcode, int fits) +{ + switch (wr_opcode) { + case IB_WR_RDMA_WRITE: + if (qp->req.opcode == IB_OPCODE_XRC_RDMA_WRITE_FIRST || + qp->req.opcode == IB_OPCODE_XRC_RDMA_WRITE_MIDDLE) + return fits ? + IB_OPCODE_XRC_RDMA_WRITE_LAST : + IB_OPCODE_XRC_RDMA_WRITE_MIDDLE; + else + return fits ? + IB_OPCODE_XRC_RDMA_WRITE_ONLY : + IB_OPCODE_XRC_RDMA_WRITE_FIRST; + + case IB_WR_RDMA_WRITE_WITH_IMM: + if (qp->req.opcode == IB_OPCODE_XRC_RDMA_WRITE_FIRST || + qp->req.opcode == IB_OPCODE_XRC_RDMA_WRITE_MIDDLE) + return fits ? + IB_OPCODE_XRC_RDMA_WRITE_LAST_WITH_IMMEDIATE : + IB_OPCODE_XRC_RDMA_WRITE_MIDDLE; + else + return fits ? + IB_OPCODE_XRC_RDMA_WRITE_ONLY_WITH_IMMEDIATE : + IB_OPCODE_XRC_RDMA_WRITE_FIRST; + + case IB_WR_SEND: + if (qp->req.opcode == IB_OPCODE_XRC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_XRC_SEND_MIDDLE) + return fits ? + IB_OPCODE_XRC_SEND_LAST : + IB_OPCODE_XRC_SEND_MIDDLE; + else + return fits ? 
+ IB_OPCODE_XRC_SEND_ONLY : + IB_OPCODE_XRC_SEND_FIRST; + + case IB_WR_SEND_WITH_IMM: + if (qp->req.opcode == IB_OPCODE_XRC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_XRC_SEND_MIDDLE) + return fits ? + IB_OPCODE_XRC_SEND_LAST_WITH_IMMEDIATE : + IB_OPCODE_XRC_SEND_MIDDLE; + else + return fits ? + IB_OPCODE_XRC_SEND_ONLY_WITH_IMMEDIATE : + IB_OPCODE_XRC_SEND_FIRST; + + case IB_WR_RDMA_READ: + return IB_OPCODE_XRC_RDMA_READ_REQUEST; + + case IB_WR_RDMA_READ_WITH_INV: + return IB_OPCODE_XRC_RDMA_READ_REQUEST; + + case IB_WR_ATOMIC_CMP_AND_SWP: + return IB_OPCODE_XRC_COMPARE_SWAP; + + case IB_WR_MASKED_ATOMIC_CMP_AND_SWP: + return -EOPNOTSUPP; + + case IB_WR_ATOMIC_FETCH_AND_ADD: + return IB_OPCODE_XRC_FETCH_ADD; + + case IB_WR_MASKED_ATOMIC_FETCH_AND_ADD: + return -EOPNOTSUPP; + + case IB_WR_SEND_WITH_INV: + if (qp->req.opcode == IB_OPCODE_XRC_SEND_FIRST || + qp->req.opcode == IB_OPCODE_XRC_SEND_MIDDLE) + return fits ? + IB_OPCODE_XRC_SEND_LAST_WITH_INVALIDATE : + IB_OPCODE_XRC_SEND_MIDDLE; + else + return fits ? + IB_OPCODE_XRC_SEND_ONLY_WITH_INVALIDATE : + IB_OPCODE_XRC_SEND_FIRST; + + case IB_WR_LOCAL_INV: + case IB_WR_REG_MR: + case IB_WR_BIND_MW: + return wr_opcode; + } + + return -EINVAL; +} + int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, u32 opcode) { int fits = (wqe->dma.resid <= qp->mtu); @@ -1387,6 +1472,9 @@ int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, u32 opcode) case IB_QPT_UC: return next_opcode_uc(qp, opcode, fits); + case IB_QPT_XRC_INI: + return next_opcode_xrc(qp, opcode, fits); + case IB_QPT_UD: case IB_QPT_GSI: switch (opcode) {
From patchwork Thu Sep 29 17:08:30 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12994467
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 06/13] RDMA/rxe: Implement open_xrcd and close_xrcd
Date: Thu, 29 Sep 2022 12:08:30 -0500
Message-Id: <20220929170836.17838-7-rpearsonhpe@gmail.com>
In-Reply-To: <20220929170836.17838-1-rpearsonhpe@gmail.com>
References: <20220929170836.17838-1-rpearsonhpe@gmail.com>
Add rxe_alloc_xrcd() and rxe_dealloc_xrcd() and add xrcd objects to the rxe object pools to support the ib_open_xrcd() and ib_close_xrcd() user verbs.
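For context, user space reaches these verbs through libibverbs. A minimal sketch, not part of the patch, assuming an already-opened rxe device context and with error handling trimmed:

    #include <fcntl.h>
    #include <infiniband/verbs.h>

    /* Sketch only: exercise rxe_alloc_xrcd()/rxe_dealloc_xrcd() through
     * the ibv_open_xrcd()/ibv_close_xrcd() user verbs.  "ctx" is assumed
     * to be an already-opened rxe device context.
     */
    static int xrcd_example(struct ibv_context *ctx)
    {
            struct ibv_xrcd_init_attr attr = {
                    .comp_mask = IBV_XRCD_INIT_ATTR_FD |
                                 IBV_XRCD_INIT_ATTR_OFLAGS,
                    .fd = -1,               /* fd == -1: XRCD not backed by an inode */
                    .oflags = O_CREAT,      /* create a new XRCD */
            };
            struct ibv_xrcd *xrcd;

            xrcd = ibv_open_xrcd(ctx, &attr);       /* reaches rxe_alloc_xrcd() */
            if (!xrcd)
                    return -1;

            /* ... create XRC SRQs and QPs against xrcd here ... */

            return ibv_close_xrcd(xrcd);            /* reaches rxe_dealloc_xrcd() */
    }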
Signed-off-by: Bob Pearson --- drivers/infiniband/sw/rxe/rxe.c | 2 ++ drivers/infiniband/sw/rxe/rxe_param.h | 3 +++ drivers/infiniband/sw/rxe/rxe_pool.c | 8 ++++++++ drivers/infiniband/sw/rxe/rxe_pool.h | 1 + drivers/infiniband/sw/rxe/rxe_verbs.c | 23 +++++++++++++++++++++++ drivers/infiniband/sw/rxe/rxe_verbs.h | 11 +++++++++++ 6 files changed, 48 insertions(+) diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c index 51daac5c4feb..acd22980836e 100644 --- a/drivers/infiniband/sw/rxe/rxe.c +++ b/drivers/infiniband/sw/rxe/rxe.c @@ -23,6 +23,7 @@ void rxe_dealloc(struct ib_device *ib_dev) rxe_pool_cleanup(&rxe->uc_pool); rxe_pool_cleanup(&rxe->pd_pool); rxe_pool_cleanup(&rxe->ah_pool); + rxe_pool_cleanup(&rxe->xrcd_pool); rxe_pool_cleanup(&rxe->srq_pool); rxe_pool_cleanup(&rxe->qp_pool); rxe_pool_cleanup(&rxe->cq_pool); @@ -120,6 +121,7 @@ static void rxe_init_pools(struct rxe_dev *rxe) rxe_pool_init(rxe, &rxe->uc_pool, RXE_TYPE_UC); rxe_pool_init(rxe, &rxe->pd_pool, RXE_TYPE_PD); rxe_pool_init(rxe, &rxe->ah_pool, RXE_TYPE_AH); + rxe_pool_init(rxe, &rxe->xrcd_pool, RXE_TYPE_XRCD); rxe_pool_init(rxe, &rxe->srq_pool, RXE_TYPE_SRQ); rxe_pool_init(rxe, &rxe->qp_pool, RXE_TYPE_QP); rxe_pool_init(rxe, &rxe->cq_pool, RXE_TYPE_CQ); diff --git a/drivers/infiniband/sw/rxe/rxe_param.h b/drivers/infiniband/sw/rxe/rxe_param.h index 86c7a8bf3cbb..fa4bf177e123 100644 --- a/drivers/infiniband/sw/rxe/rxe_param.h +++ b/drivers/infiniband/sw/rxe/rxe_param.h @@ -86,6 +86,9 @@ enum rxe_device_param { RXE_MAX_QP_INDEX = DEFAULT_MAX_VALUE, RXE_MAX_QP = DEFAULT_MAX_VALUE - RXE_MIN_QP_INDEX, + RXE_MIN_XRCD_INDEX = 1, + RXE_MAX_XRCD_INDEX = 128, + RXE_MAX_XRCD = 128, RXE_MIN_SRQ_INDEX = 0x00020001, RXE_MAX_SRQ_INDEX = DEFAULT_MAX_VALUE, RXE_MAX_SRQ = DEFAULT_MAX_VALUE - RXE_MIN_SRQ_INDEX, diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c index f50620f5a0a1..b54453b68169 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.c +++ b/drivers/infiniband/sw/rxe/rxe_pool.c @@ -42,6 +42,14 @@ static const struct rxe_type_info { .max_index = RXE_MAX_AH_INDEX, .max_elem = RXE_MAX_AH_INDEX - RXE_MIN_AH_INDEX + 1, }, + [RXE_TYPE_XRCD] = { + .name = "xrcd", + .size = sizeof(struct rxe_xrcd), + .elem_offset = offsetof(struct rxe_xrcd, elem), + .min_index = RXE_MIN_XRCD_INDEX, + .max_index = RXE_MAX_XRCD_INDEX, + .max_elem = RXE_MAX_XRCD_INDEX - RXE_MIN_XRCD_INDEX + 1, + }, [RXE_TYPE_SRQ] = { .name = "srq", .size = sizeof(struct rxe_srq), diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h index 9d83cb32092f..35ac0746a4b8 100644 --- a/drivers/infiniband/sw/rxe/rxe_pool.h +++ b/drivers/infiniband/sw/rxe/rxe_pool.h @@ -11,6 +11,7 @@ enum rxe_elem_type { RXE_TYPE_UC, RXE_TYPE_PD, RXE_TYPE_AH, + RXE_TYPE_XRCD, RXE_TYPE_SRQ, RXE_TYPE_QP, RXE_TYPE_CQ, diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index 88825edc7dce..c7641bdf3ba1 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -280,6 +280,26 @@ static int post_one_recv(struct rxe_rq *rq, const struct ib_recv_wr *ibwr) return err; } +static int rxe_alloc_xrcd(struct ib_xrcd *ibxrcd, struct ib_udata *udata) +{ + struct rxe_dev *rxe = to_rdev(ibxrcd->device); + struct rxe_xrcd *xrcd = to_rxrcd(ibxrcd); + int err; + + err = rxe_add_to_pool(&rxe->xrcd_pool, xrcd); + + return err; +} + +static int rxe_dealloc_xrcd(struct ib_xrcd *ibxrcd, struct ib_udata *udata) +{ + struct rxe_xrcd *xrcd = 
to_rxrcd(ibxrcd); + + rxe_cleanup(xrcd); + + return 0; +} + static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init, struct ib_udata *udata) { @@ -1053,6 +1073,7 @@ static const struct ib_device_ops rxe_dev_ops = { .alloc_mw = rxe_alloc_mw, .alloc_pd = rxe_alloc_pd, .alloc_ucontext = rxe_alloc_ucontext, + .alloc_xrcd = rxe_alloc_xrcd, .attach_mcast = rxe_attach_mcast, .create_ah = rxe_create_ah, .create_cq = rxe_create_cq, @@ -1063,6 +1084,7 @@ static const struct ib_device_ops rxe_dev_ops = { .dealloc_mw = rxe_dealloc_mw, .dealloc_pd = rxe_dealloc_pd, .dealloc_ucontext = rxe_dealloc_ucontext, + .dealloc_xrcd = rxe_dealloc_xrcd, .dereg_mr = rxe_dereg_mr, .destroy_ah = rxe_destroy_ah, .destroy_cq = rxe_destroy_cq, @@ -1101,6 +1123,7 @@ static const struct ib_device_ops rxe_dev_ops = { INIT_RDMA_OBJ_SIZE(ib_cq, rxe_cq, ibcq), INIT_RDMA_OBJ_SIZE(ib_pd, rxe_pd, ibpd), INIT_RDMA_OBJ_SIZE(ib_qp, rxe_qp, ibqp), + INIT_RDMA_OBJ_SIZE(ib_xrcd, rxe_xrcd, ibxrcd), INIT_RDMA_OBJ_SIZE(ib_srq, rxe_srq, ibsrq), INIT_RDMA_OBJ_SIZE(ib_ucontext, rxe_ucontext, ibuc), INIT_RDMA_OBJ_SIZE(ib_mw, rxe_mw, ibmw), diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 5f5cbfcb3569..fb2fbf281232 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -93,6 +93,11 @@ struct rxe_rq { struct rxe_queue *queue; }; +struct rxe_xrcd { + struct ib_xrcd ibxrcd; + struct rxe_pool_elem elem; +}; + struct rxe_srq { struct ib_srq ibsrq; struct rxe_pool_elem elem; @@ -381,6 +386,7 @@ struct rxe_dev { struct rxe_pool uc_pool; struct rxe_pool pd_pool; struct rxe_pool ah_pool; + struct rxe_pool xrcd_pool; struct rxe_pool srq_pool; struct rxe_pool qp_pool; struct rxe_pool cq_pool; @@ -430,6 +436,11 @@ static inline struct rxe_ah *to_rah(struct ib_ah *ah) return ah ? container_of(ah, struct rxe_ah, ibah) : NULL; } +static inline struct rxe_xrcd *to_rxrcd(struct ib_xrcd *ibxrcd) +{ + return ibxrcd ? container_of(ibxrcd, struct rxe_xrcd, ibxrcd) : NULL; +} + static inline struct rxe_srq *to_rsrq(struct ib_srq *srq) { return srq ? 
container_of(srq, struct rxe_srq, ibsrq) : NULL;
From patchwork Thu Sep 29 17:08:31 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12994468
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 07/13] RDMA/rxe: Extend srq verbs to support xrcd
Date: Thu, 29 Sep 2022 12:08:31 -0500
Message-Id: <20220929170836.17838-8-rpearsonhpe@gmail.com>
In-Reply-To: <20220929170836.17838-1-rpearsonhpe@gmail.com>
References: <20220929170836.17838-1-rpearsonhpe@gmail.com>
Extend srq to support xrcd in create verb Signed-off-by: Bob Pearson --- v2 Rebased to current for-next drivers/infiniband/sw/rxe/rxe_srq.c | 131 ++++++++++++++------------ drivers/infiniband/sw/rxe/rxe_verbs.c | 12 +-- drivers/infiniband/sw/rxe/rxe_verbs.h | 8 +- include/uapi/rdma/rdma_user_rxe.h | 4 +- 4 files changed, 83 insertions(+), 72 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c index 02b39498c370..fcd1a58c3900 100644 --- a/drivers/infiniband/sw/rxe/rxe_srq.c +++ b/drivers/infiniband/sw/rxe/rxe_srq.c @@ -11,61 +11,85 @@ int rxe_srq_chk_init(struct rxe_dev *rxe, struct ib_srq_init_attr *init) { struct ib_srq_attr *attr = &init->attr; + int err = -EINVAL; - if (attr->max_wr > rxe->attr.max_srq_wr) { - pr_warn("max_wr(%d) > max_srq_wr(%d)\n", - attr->max_wr, rxe->attr.max_srq_wr); - goto err1; + if (init->srq_type == IB_SRQT_TM) { + err = -EOPNOTSUPP; + goto err_out; } - if (attr->max_wr <= 0) { - pr_warn("max_wr(%d) <= 0\n", attr->max_wr); - goto err1; + if (init->srq_type == IB_SRQT_XRC) { + if (!init->ext.cq || !init->ext.xrc.xrcd) + goto err_out; } + if (attr->max_wr > rxe->attr.max_srq_wr) + goto err_out; + + if (attr->max_wr <= 0) + goto err_out; + if (attr->max_wr < RXE_MIN_SRQ_WR) attr->max_wr = RXE_MIN_SRQ_WR; - if (attr->max_sge > rxe->attr.max_srq_sge) { - pr_warn("max_sge(%d) > max_srq_sge(%d)\n", - attr->max_sge, rxe->attr.max_srq_sge); - goto err1; - } + if (attr->max_sge > rxe->attr.max_srq_sge) + goto err_out; if (attr->max_sge < RXE_MIN_SRQ_SGE) attr->max_sge = RXE_MIN_SRQ_SGE; return 0; -err1: - return -EINVAL; +err_out: + pr_debug("%s: failed err = %d\n", __func__, err); + return err; } int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq, struct ib_srq_init_attr *init, struct ib_udata *udata, struct rxe_create_srq_resp __user *uresp) { - int err; - int srq_wqe_size; + struct rxe_pd *pd = to_rpd(srq->ibsrq.pd); + struct rxe_cq *cq; + struct rxe_xrcd *xrcd; struct rxe_queue *q; - enum queue_type type; + int srq_wqe_size; + int err; + + rxe_get(pd); + srq->pd = pd; srq->ibsrq.event_handler = init->event_handler; srq->ibsrq.srq_context = init->srq_context; srq->limit = init->attr.srq_limit; - srq->srq_num = srq->elem.index; srq->rq.max_wr = init->attr.max_wr; srq->rq.max_sge = init->attr.max_sge; - srq_wqe_size = rcv_wqe_size(srq->rq.max_sge); + if (init->srq_type == IB_SRQT_XRC) { + cq = to_rcq(init->ext.cq); + if (cq) { + rxe_get(cq); + srq->cq = to_rcq(init->ext.cq); + } else { + return -EINVAL; + } + xrcd = to_rxrcd(init->ext.xrc.xrcd); + if (xrcd) { + rxe_get(xrcd); + srq->xrcd = to_rxrcd(init->ext.xrc.xrcd); + } + srq->ibsrq.ext.xrc.srq_num = srq->elem.index; + } spin_lock_init(&srq->rq.producer_lock); spin_lock_init(&srq->rq.consumer_lock); -
type = QUEUE_TYPE_FROM_CLIENT; - q = rxe_queue_init(rxe, &srq->rq.max_wr, srq_wqe_size, type); + srq_wqe_size = rcv_wqe_size(srq->rq.max_sge); + q = rxe_queue_init(rxe, &srq->rq.max_wr, srq_wqe_size, + QUEUE_TYPE_FROM_CLIENT); if (!q) { - pr_warn("unable to allocate queue for srq\n"); + pr_debug("%s: srq#%d: unable to allocate queue\n", + __func__, srq->elem.index); return -ENOMEM; } @@ -79,66 +103,45 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq, return err; } - if (uresp) { - if (copy_to_user(&uresp->srq_num, &srq->srq_num, - sizeof(uresp->srq_num))) { - rxe_queue_cleanup(q); - return -EFAULT; - } - } - return 0; } int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq, struct ib_srq_attr *attr, enum ib_srq_attr_mask mask) { - if (srq->error) { - pr_warn("srq in error state\n"); - goto err1; - } + int err = -EINVAL; + + if (srq->error) + goto err_out; if (mask & IB_SRQ_MAX_WR) { - if (attr->max_wr > rxe->attr.max_srq_wr) { - pr_warn("max_wr(%d) > max_srq_wr(%d)\n", - attr->max_wr, rxe->attr.max_srq_wr); - goto err1; - } + if (attr->max_wr > rxe->attr.max_srq_wr) + goto err_out; - if (attr->max_wr <= 0) { - pr_warn("max_wr(%d) <= 0\n", attr->max_wr); - goto err1; - } + if (attr->max_wr <= 0) + goto err_out; - if (srq->limit && (attr->max_wr < srq->limit)) { - pr_warn("max_wr (%d) < srq->limit (%d)\n", - attr->max_wr, srq->limit); - goto err1; - } + if (srq->limit && (attr->max_wr < srq->limit)) + goto err_out; if (attr->max_wr < RXE_MIN_SRQ_WR) attr->max_wr = RXE_MIN_SRQ_WR; } if (mask & IB_SRQ_LIMIT) { - if (attr->srq_limit > rxe->attr.max_srq_wr) { - pr_warn("srq_limit(%d) > max_srq_wr(%d)\n", - attr->srq_limit, rxe->attr.max_srq_wr); - goto err1; - } + if (attr->srq_limit > rxe->attr.max_srq_wr) + goto err_out; - if (attr->srq_limit > srq->rq.queue->buf->index_mask) { - pr_warn("srq_limit (%d) > cur limit(%d)\n", - attr->srq_limit, - srq->rq.queue->buf->index_mask); - goto err1; - } + if (attr->srq_limit > srq->rq.queue->buf->index_mask) + goto err_out; } return 0; -err1: - return -EINVAL; +err_out: + pr_debug("%s: srq#%d: failed err = %d\n", __func__, + srq->elem.index, err); + return err; } int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq, @@ -182,6 +185,12 @@ void rxe_srq_cleanup(struct rxe_pool_elem *elem) if (srq->pd) rxe_put(srq->pd); + if (srq->cq) + rxe_put(srq->cq); + + if (srq->xrcd) + rxe_put(srq->xrcd); + if (srq->rq.queue) rxe_queue_cleanup(srq->rq.queue); } diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index c7641bdf3ba1..cee31b650fe0 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -305,7 +305,6 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init, { int err; struct rxe_dev *rxe = to_rdev(ibsrq->device); - struct rxe_pd *pd = to_rpd(ibsrq->pd); struct rxe_srq *srq = to_rsrq(ibsrq); struct rxe_create_srq_resp __user *uresp = NULL; @@ -315,9 +314,6 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init, uresp = udata->outbuf; } - if (init->srq_type != IB_SRQT_BASIC) - return -EOPNOTSUPP; - err = rxe_srq_chk_init(rxe, init); if (err) return err; @@ -326,13 +322,11 @@ static int rxe_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init, if (err) return err; - rxe_get(pd); - srq->pd = pd; - err = rxe_srq_from_init(rxe, srq, init, udata, uresp); if (err) goto err_cleanup; + rxe_finalize(srq); return 0; err_cleanup: @@ -366,6 +360,7 @@ static int rxe_modify_srq(struct ib_srq *ibsrq, 
struct ib_srq_attr *attr, err = rxe_srq_from_attr(rxe, srq, attr, mask, &ucmd, udata); if (err) return err; + return 0; } @@ -379,6 +374,7 @@ static int rxe_query_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr) attr->max_wr = srq->rq.queue->buf->index_mask; attr->max_sge = srq->rq.max_sge; attr->srq_limit = srq->limit; + return 0; } @@ -626,6 +622,8 @@ static void init_send_wqe(struct rxe_qp *qp, const struct ib_send_wr *ibwr, return; } + wqe->dma.num_sge = ibwr->num_sge; + if (unlikely(ibwr->send_flags & IB_SEND_INLINE)) copy_inline_data_to_wqe(wqe, ibwr); else diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index fb2fbf281232..465af1517112 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -102,13 +102,19 @@ struct rxe_srq { struct ib_srq ibsrq; struct rxe_pool_elem elem; struct rxe_pd *pd; + struct rxe_xrcd *xrcd; /* xrc only */ + struct rxe_cq *cq; /* xrc only */ struct rxe_rq rq; - u32 srq_num; int limit; int error; }; +static inline u32 srq_num(struct rxe_srq *srq) +{ + return srq->ibsrq.ext.xrc.srq_num; +} + enum rxe_qp_state { QP_STATE_RESET, QP_STATE_INIT, diff --git a/include/uapi/rdma/rdma_user_rxe.h b/include/uapi/rdma/rdma_user_rxe.h index 73f679dfd2df..f908347963c0 100644 --- a/include/uapi/rdma/rdma_user_rxe.h +++ b/include/uapi/rdma/rdma_user_rxe.h @@ -74,7 +74,7 @@ struct rxe_av { struct rxe_send_wr { __aligned_u64 wr_id; - __u32 reserved; + __u32 srq_num; /* xrc only */ __u32 opcode; __u32 send_flags; union { @@ -191,8 +191,6 @@ struct rxe_create_qp_resp { struct rxe_create_srq_resp { struct mminfo mi; - __u32 srq_num; - __u32 reserved; }; struct rxe_modify_srq_cmd {
From patchwork Thu Sep 29 17:08:32 2022
X-Patchwork-Submitter: Bob Pearson
X-Patchwork-Id: 12994470
X-Patchwork-Delegate: jgg@ziepe.ca
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 08/13] RDMA/rxe: Extend rxe_qp.c to support xrc qps
Date: Thu, 29 Sep 2022 12:08:32 -0500
Message-Id: <20220929170836.17838-9-rpearsonhpe@gmail.com>
In-Reply-To: <20220929170836.17838-1-rpearsonhpe@gmail.com>
References: <20220929170836.17838-1-rpearsonhpe@gmail.com>
Extend code in rxe_qp.c to support xrc qp types. Signed-off-by: Bob Pearson --- v2 Rebased to current for-next.
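As an illustration only (not part of the patch), the two XRC qp flavors that rxe_qp_chk_init() accepts after this change would be created from user space through the extended create-QP verb roughly as in the sketch below; the capacities are arbitrary and error handling is omitted:

    #include <infiniband/verbs.h>

    /* Sketch: create one XRC initiator and one XRC target QP.  Assumes
     * ctx, pd, scq and xrcd were created earlier; values illustrative.
     */
    static void create_xrc_qps(struct ibv_context *ctx, struct ibv_pd *pd,
                               struct ibv_cq *scq, struct ibv_xrcd *xrcd)
    {
            struct ibv_qp_init_attr_ex ini = {
                    .qp_type = IBV_QPT_XRC_SEND,    /* kernel IB_QPT_XRC_INI */
                    .send_cq = scq,                 /* required: no recv side */
                    .cap = { .max_send_wr = 64, .max_send_sge = 4 },
                    .comp_mask = IBV_QP_INIT_ATTR_PD,
                    .pd = pd,
            };
            struct ibv_qp_init_attr_ex tgt = {
                    .qp_type = IBV_QPT_XRC_RECV,    /* kernel IB_QPT_XRC_TGT */
                    .comp_mask = IBV_QP_INIT_ATTR_XRCD,
                    .xrcd = xrcd,                   /* required by rxe_qp_chk_init() */
            };
            struct ibv_qp *ini_qp = ibv_create_qp_ex(ctx, &ini);
            struct ibv_qp *tgt_qp = ibv_create_qp_ex(ctx, &tgt);

            /* ... modify both to RTR/RTS, post sends on ini_qp ... */
            (void)ini_qp;
            (void)tgt_qp;
    }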
drivers/infiniband/sw/rxe/rxe_av.c | 3 +- drivers/infiniband/sw/rxe/rxe_loc.h | 7 +- drivers/infiniband/sw/rxe/rxe_qp.c | 308 +++++++++++++++----------- drivers/infiniband/sw/rxe/rxe_verbs.c | 22 +- drivers/infiniband/sw/rxe/rxe_verbs.h | 1 + 5 files changed, 200 insertions(+), 141 deletions(-) diff --git a/drivers/infiniband/sw/rxe/rxe_av.c b/drivers/infiniband/sw/rxe/rxe_av.c index 3b05314ca739..c8f3ec53aa79 100644 --- a/drivers/infiniband/sw/rxe/rxe_av.c +++ b/drivers/infiniband/sw/rxe/rxe_av.c @@ -110,7 +110,8 @@ struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt, struct rxe_ah **ahp) if (!pkt || !pkt->qp) return NULL; - if (qp_type(pkt->qp) == IB_QPT_RC || qp_type(pkt->qp) == IB_QPT_UC) + if (qp_type(pkt->qp) == IB_QPT_RC || qp_type(pkt->qp) == IB_QPT_UC || + qp_type(pkt->qp) == IB_QPT_XRC_INI) return &pkt->qp->pri_av; if (!pkt->wqe) diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h index a806737168d0..1eba6384b6a4 100644 --- a/drivers/infiniband/sw/rxe/rxe_loc.h +++ b/drivers/infiniband/sw/rxe/rxe_loc.h @@ -103,11 +103,12 @@ const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num); int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe, u32 opcode); /* rxe_qp.c */ -int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init); -int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd, +int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp *ibqp, + struct ib_qp_init_attr *init); +int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_init_attr *init, struct rxe_create_qp_resp __user *uresp, - struct ib_pd *ibpd, struct ib_udata *udata); + struct ib_udata *udata); int rxe_qp_to_init(struct rxe_qp *qp, struct ib_qp_init_attr *init); int rxe_qp_chk_attr(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_attr *attr, int mask); diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c index a62bab88415c..5782f8aa2213 100644 --- a/drivers/infiniband/sw/rxe/rxe_qp.c +++ b/drivers/infiniband/sw/rxe/rxe_qp.c @@ -56,34 +56,45 @@ static int rxe_qp_chk_cap(struct rxe_dev *rxe, struct ib_qp_cap *cap, return -EINVAL; } -int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init) +int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp *ibqp, + struct ib_qp_init_attr *init) { + struct ib_pd *ibpd = ibqp->pd; struct ib_qp_cap *cap = &init->cap; struct rxe_port *port; int port_num = init->port_num; + if (init->create_flags) + return -EOPNOTSUPP; + switch (init->qp_type) { case IB_QPT_GSI: case IB_QPT_RC: case IB_QPT_UC: case IB_QPT_UD: + if (!ibpd || !init->recv_cq || !init->send_cq) + return -EINVAL; + break; + case IB_QPT_XRC_INI: + if (!init->send_cq) + return -EINVAL; + break; + case IB_QPT_XRC_TGT: + if (!init->xrcd) + return -EINVAL; break; default: return -EOPNOTSUPP; } - if (!init->recv_cq || !init->send_cq) { - pr_debug("missing cq\n"); - goto err1; + if (init->qp_type != IB_QPT_XRC_TGT) { + if (rxe_qp_chk_cap(rxe, cap, !!(init->srq || init->xrcd))) + goto err1; } - if (rxe_qp_chk_cap(rxe, cap, !!init->srq)) - goto err1; - if (init->qp_type == IB_QPT_GSI) { if (!rdma_is_port_valid(&rxe->ib_dev, port_num)) { pr_debug("invalid port = %d\n", port_num); - goto err1; } port = &rxe->port; @@ -148,49 +159,83 @@ static void cleanup_rd_atomic_resources(struct rxe_qp *qp) static void rxe_qp_init_misc(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_init_attr *init) { - struct rxe_port *port; - u32 qpn; - + qp->ibqp.qp_type = init->qp_type; qp->sq_sig_type 
= init->sq_sig_type; qp->attr.path_mtu = 1; qp->mtu = ib_mtu_enum_to_int(qp->attr.path_mtu); - qpn = qp->elem.index; - port = &rxe->port; - switch (init->qp_type) { case IB_QPT_GSI: qp->ibqp.qp_num = 1; - port->qp_gsi_index = qpn; + rxe->port.qp_gsi_index = qp->elem.index; qp->attr.port_num = init->port_num; break; default: - qp->ibqp.qp_num = qpn; + qp->ibqp.qp_num = qp->elem.index; break; } spin_lock_init(&qp->state_lock); - spin_lock_init(&qp->req.task.state_lock); - spin_lock_init(&qp->resp.task.state_lock); - spin_lock_init(&qp->comp.task.state_lock); - - spin_lock_init(&qp->sq.sq_lock); - spin_lock_init(&qp->rq.producer_lock); - spin_lock_init(&qp->rq.consumer_lock); - atomic_set(&qp->ssn, 0); atomic_set(&qp->skb_out, 0); } +static int rxe_prepare_send_queue(struct rxe_dev *rxe, struct rxe_qp *qp, + struct ib_qp_init_attr *init, struct ib_udata *udata, + struct rxe_create_qp_resp __user *uresp) +{ + struct rxe_queue *q; + int wqe_size; + int err; + + qp->sq.max_wr = init->cap.max_send_wr; + + wqe_size = init->cap.max_send_sge*sizeof(struct ib_sge); + wqe_size = max_t(int, wqe_size, init->cap.max_inline_data); + + qp->sq.max_sge = wqe_size/sizeof(struct ib_sge); + qp->sq.max_inline = wqe_size; + wqe_size += sizeof(struct rxe_send_wqe); + + q = rxe_queue_init(rxe, &qp->sq.max_wr, wqe_size, + QUEUE_TYPE_FROM_CLIENT); + if (!q) + return -ENOMEM; + + err = do_mmap_info(rxe, uresp ? &uresp->sq_mi : NULL, udata, + q->buf, q->buf_size, &q->ip); + + if (err) { + vfree(q->buf); + kfree(q); + return err; + } + + init->cap.max_send_sge = qp->sq.max_sge; + init->cap.max_inline_data = qp->sq.max_inline; + + qp->sq.queue = q; + + return 0; +} + static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_init_attr *init, struct ib_udata *udata, struct rxe_create_qp_resp __user *uresp) { int err; - int wqe_size; - enum queue_type type; + + err = rxe_prepare_send_queue(rxe, qp, init, udata, uresp); + if (err) + return err; + + spin_lock_init(&qp->sq.sq_lock); + spin_lock_init(&qp->req.task.state_lock); + spin_lock_init(&qp->comp.task.state_lock); + + skb_queue_head_init(&qp->resp_pkts); err = sock_create_kern(&init_net, AF_INET, SOCK_DGRAM, 0, &qp->sk); if (err < 0) @@ -205,32 +250,6 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp, * (0xc000 - 0xffff). */ qp->src_port = RXE_ROCE_V2_SPORT + (hash_32(qp_num(qp), 14) & 0x3fff); - qp->sq.max_wr = init->cap.max_send_wr; - - /* These caps are limited by rxe_qp_chk_cap() done by the caller */ - wqe_size = max_t(int, init->cap.max_send_sge * sizeof(struct ib_sge), - init->cap.max_inline_data); - qp->sq.max_sge = init->cap.max_send_sge = - wqe_size / sizeof(struct ib_sge); - qp->sq.max_inline = init->cap.max_inline_data = wqe_size; - wqe_size += sizeof(struct rxe_send_wqe); - - type = QUEUE_TYPE_FROM_CLIENT; - qp->sq.queue = rxe_queue_init(rxe, &qp->sq.max_wr, - wqe_size, type); - if (!qp->sq.queue) - return -ENOMEM; - - err = do_mmap_info(rxe, uresp ? 
&uresp->sq_mi : NULL, udata, - qp->sq.queue->buf, qp->sq.queue->buf_size, - &qp->sq.queue->ip); - - if (err) { - vfree(qp->sq.queue->buf); - kfree(qp->sq.queue); - qp->sq.queue = NULL; - return err; - } qp->req.wqe_index = queue_get_producer(qp->sq.queue, QUEUE_TYPE_FROM_CLIENT); @@ -240,57 +259,71 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp, qp->req.opcode = -1; qp->comp.opcode = -1; - skb_queue_head_init(&qp->req_pkts); - rxe_init_task(&qp->req.task, qp, rxe_requester, "req"); rxe_init_task(&qp->comp.task, qp, rxe_completer, "comp"); qp->qp_timeout_jiffies = 0; /* Can't be set for UD/UC in modify_qp */ - if (init->qp_type == IB_QPT_RC) { + if (init->qp_type == IB_QPT_RC || init->qp_type == IB_QPT_XRC_INI) { timer_setup(&qp->rnr_nak_timer, rnr_nak_timer, 0); timer_setup(&qp->retrans_timer, retransmit_timer, 0); } return 0; } +static int rxe_prepare_recv_queue(struct rxe_dev *rxe, struct rxe_qp *qp, + struct ib_qp_init_attr *init, struct ib_udata *udata, + struct rxe_create_qp_resp __user *uresp) +{ + struct rxe_queue *q; + int wqe_size; + int err; + + qp->rq.max_wr = init->cap.max_recv_wr; + qp->rq.max_sge = init->cap.max_recv_sge; + + wqe_size = sizeof(struct rxe_recv_wqe) + + qp->rq.max_sge*sizeof(struct ib_sge); + + q = rxe_queue_init(rxe, &qp->rq.max_wr, wqe_size, + QUEUE_TYPE_FROM_CLIENT); + if (!q) + return -ENOMEM; + + err = do_mmap_info(rxe, uresp ? &uresp->rq_mi : NULL, udata, + q->buf, q->buf_size, &q->ip); + + if (err) { + vfree(q->buf); + kfree(q); + return err; + } + + qp->rq.queue = q; + + return 0; +} + static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_init_attr *init, struct ib_udata *udata, struct rxe_create_qp_resp __user *uresp) { int err; - int wqe_size; - enum queue_type type; - if (!qp->srq) { - qp->rq.max_wr = init->cap.max_recv_wr; - qp->rq.max_sge = init->cap.max_recv_sge; - - wqe_size = rcv_wqe_size(qp->rq.max_sge); - - pr_debug("qp#%d max_wr = %d, max_sge = %d, wqe_size = %d\n", - qp_num(qp), qp->rq.max_wr, qp->rq.max_sge, wqe_size); - - type = QUEUE_TYPE_FROM_CLIENT; - qp->rq.queue = rxe_queue_init(rxe, &qp->rq.max_wr, - wqe_size, type); - if (!qp->rq.queue) - return -ENOMEM; - - err = do_mmap_info(rxe, uresp ? &uresp->rq_mi : NULL, udata, - qp->rq.queue->buf, qp->rq.queue->buf_size, - &qp->rq.queue->ip); - if (err) { - vfree(qp->rq.queue->buf); - kfree(qp->rq.queue); - qp->rq.queue = NULL; + if (!qp->srq && qp_type(qp) != IB_QPT_XRC_TGT) { + err = rxe_prepare_recv_queue(rxe, qp, init, udata, uresp); + if (err) return err; - } + + spin_lock_init(&qp->rq.producer_lock); + spin_lock_init(&qp->rq.consumer_lock); } - skb_queue_head_init(&qp->resp_pkts); + spin_lock_init(&qp->resp.task.state_lock); + + skb_queue_head_init(&qp->req_pkts); rxe_init_task(&qp->resp.task, qp, rxe_responder, "resp"); @@ -303,64 +336,82 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp, } /* called by the create qp verb */ -int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd, +int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct ib_qp_init_attr *init, struct rxe_create_qp_resp __user *uresp, - struct ib_pd *ibpd, struct ib_udata *udata) { int err; + struct rxe_pd *pd = to_rpd(qp->ibqp.pd); struct rxe_cq *rcq = to_rcq(init->recv_cq); struct rxe_cq *scq = to_rcq(init->send_cq); - struct rxe_srq *srq = init->srq ? 
to_rsrq(init->srq) : NULL; + struct rxe_srq *srq = to_rsrq(init->srq); + struct rxe_xrcd *xrcd = to_rxrcd(init->xrcd); - rxe_get(pd); - rxe_get(rcq); - rxe_get(scq); - if (srq) + if (pd) { + rxe_get(pd); + qp->pd = pd; + } + if (rcq) { + rxe_get(rcq); + qp->rcq = rcq; + atomic_inc(&rcq->num_wq); + } + if (scq) { + rxe_get(scq); + qp->scq = scq; + atomic_inc(&scq->num_wq); + } + if (srq) { rxe_get(srq); - - qp->pd = pd; - qp->rcq = rcq; - qp->scq = scq; - qp->srq = srq; - - atomic_inc(&rcq->num_wq); - atomic_inc(&scq->num_wq); + qp->srq = srq; + } + if (xrcd) { + rxe_get(xrcd); + qp->xrcd = xrcd; + } rxe_qp_init_misc(rxe, qp, init); - err = rxe_qp_init_req(rxe, qp, init, udata, uresp); - if (err) - goto err1; + switch (init->qp_type) { + case IB_QPT_RC: + case IB_QPT_UC: + case IB_QPT_GSI: + case IB_QPT_UD: + err = rxe_qp_init_req(rxe, qp, init, udata, uresp); + if (err) + goto err_out; - err = rxe_qp_init_resp(rxe, qp, init, udata, uresp); - if (err) - goto err2; + err = rxe_qp_init_resp(rxe, qp, init, udata, uresp); + if (err) + goto err_unwind; + break; + case IB_QPT_XRC_INI: + err = rxe_qp_init_req(rxe, qp, init, udata, uresp); + if (err) + goto err_out; + break; + case IB_QPT_XRC_TGT: + err = rxe_qp_init_resp(rxe, qp, init, udata, uresp); + if (err) + goto err_out; + break; + default: + /* not reached */ + err = -EOPNOTSUPP; + goto err_out; + }; qp->attr.qp_state = IB_QPS_RESET; qp->valid = 1; return 0; -err2: +err_unwind: rxe_queue_cleanup(qp->sq.queue); qp->sq.queue = NULL; -err1: - atomic_dec(&rcq->num_wq); - atomic_dec(&scq->num_wq); - - qp->pd = NULL; - qp->rcq = NULL; - qp->scq = NULL; - qp->srq = NULL; - - if (srq) - rxe_put(srq); - rxe_put(scq); - rxe_put(rcq); - rxe_put(pd); - +err_out: + /* rxe_qp_cleanup handles the rest */ return err; } @@ -485,7 +536,8 @@ static void rxe_qp_reset(struct rxe_qp *qp) /* stop request/comp */ if (qp->sq.queue) { - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || + qp_type(qp) == IB_QPT_XRC_INI) rxe_disable_task(&qp->comp.task); rxe_disable_task(&qp->req.task); } @@ -529,7 +581,8 @@ static void rxe_qp_reset(struct rxe_qp *qp) rxe_enable_task(&qp->resp.task); if (qp->sq.queue) { - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || + qp_type(qp) == IB_QPT_XRC_INI) rxe_enable_task(&qp->comp.task); rxe_enable_task(&qp->req.task); @@ -542,7 +595,8 @@ static void rxe_qp_drain(struct rxe_qp *qp) if (qp->sq.queue) { if (qp->req.state != QP_STATE_DRAINED) { qp->req.state = QP_STATE_DRAIN; - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || + qp_type(qp) == IB_QPT_XRC_INI) rxe_run_task(&qp->comp.task, 1); else __rxe_do_task(&qp->comp.task); @@ -562,7 +616,7 @@ void rxe_qp_error(struct rxe_qp *qp) /* drain work and packet queues */ rxe_run_task(&qp->resp.task, 1); - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_INI) rxe_run_task(&qp->comp.task, 1); else __rxe_do_task(&qp->comp.task); @@ -672,7 +726,8 @@ int rxe_qp_from_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask, qp->attr.sq_psn = (attr->sq_psn & BTH_PSN_MASK); qp->req.psn = qp->attr.sq_psn; qp->comp.psn = qp->attr.sq_psn; - pr_debug("qp#%d set req psn = 0x%x\n", qp_num(qp), qp->req.psn); + pr_debug("qp#%d set req psn = %d comp psn = %d\n", qp_num(qp), + qp->req.psn, qp->comp.psn); } if (mask & IB_QP_PATH_MIG_STATE) @@ -787,7 +842,7 @@ static void rxe_qp_do_cleanup(struct work_struct *work) qp->qp_timeout_jiffies = 0; rxe_cleanup_task(&qp->resp.task); - if (qp_type(qp) == IB_QPT_RC) { + if (qp_type(qp) == 
IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_INI) { del_timer_sync(&qp->retrans_timer); del_timer_sync(&qp->rnr_nak_timer); } @@ -807,6 +862,9 @@ static void rxe_qp_do_cleanup(struct work_struct *work) if (qp->sq.queue) rxe_queue_cleanup(qp->sq.queue); + if (qp->xrcd) + rxe_put(qp->xrcd); + if (qp->srq) rxe_put(qp->srq); @@ -829,7 +887,7 @@ static void rxe_qp_do_cleanup(struct work_struct *work) if (qp->resp.mr) rxe_put(qp->resp.mr); - if (qp_type(qp) == IB_QPT_RC) + if (qp_type(qp) == IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_INI) sk_dst_reset(qp->sk->sk); free_rd_atomic_resources(qp); diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c index cee31b650fe0..b490f7d53d72 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.c +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c @@ -415,7 +415,6 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, { int err; struct rxe_dev *rxe = to_rdev(ibqp->device); - struct rxe_pd *pd = to_rpd(ibqp->pd); struct rxe_qp *qp = to_rqp(ibqp); struct rxe_create_qp_resp __user *uresp = NULL; @@ -423,16 +422,7 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, if (udata->outlen < sizeof(*uresp)) return -EINVAL; uresp = udata->outbuf; - } - - if (init->create_flags) - return -EOPNOTSUPP; - err = rxe_qp_chk_init(rxe, init); - if (err) - return err; - - if (udata) { if (udata->inlen) return -EINVAL; @@ -441,11 +431,15 @@ static int rxe_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init, qp->is_user = false; } + err = rxe_qp_chk_init(rxe, ibqp, init); + if (err) + return err; + err = rxe_add_to_pool(&rxe->qp_pool, qp); if (err) return err; - err = rxe_qp_from_init(rxe, qp, pd, init, uresp, ibqp->pd, udata); + err = rxe_qp_from_init(rxe, qp, init, uresp, udata); if (err) goto qp_init; @@ -516,6 +510,9 @@ static int validate_send_wr(struct rxe_qp *qp, const struct ib_send_wr *ibwr, int num_sge = ibwr->num_sge; struct rxe_sq *sq = &qp->sq; + if (unlikely(qp_type(qp) == IB_QPT_XRC_TGT)) + return -EOPNOTSUPP; + if (unlikely(num_sge > sq->max_sge)) goto err1; @@ -739,8 +736,9 @@ static int rxe_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, /* Utilize process context to do protocol processing */ rxe_run_task(&qp->req.task, 0); return 0; - } else + } else { return rxe_post_send_kernel(qp, wr, bad_wr); + } } static int rxe_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr, diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h index 465af1517112..582ffdecb9e9 100644 --- a/drivers/infiniband/sw/rxe/rxe_verbs.h +++ b/drivers/infiniband/sw/rxe/rxe_verbs.h @@ -230,6 +230,7 @@ struct rxe_qp { struct rxe_srq *srq; struct rxe_cq *scq; struct rxe_cq *rcq; + struct rxe_xrcd *xrcd; enum ib_sig_type sq_sig_type; From patchwork Thu Sep 29 17:08:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bob Pearson X-Patchwork-Id: 12994469 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AC053C433F5 for ; Thu, 29 Sep 2022 17:09:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235234AbiI2RJV (ORCPT ); Thu, 29 Sep 2022 13:09:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59984 "EHLO lindbergh.monkeyblade.net" 
From patchwork Thu Sep 29 17:08:33 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 09/13] RDMA/rxe: Extend rxe_recv.c to support xrc
Date: Thu, 29 Sep 2022 12:08:33 -0500
Message-Id: <20220929170836.17838-10-rpearsonhpe@gmail.com>
In-Reply-To: <20220929170836.17838-1-rpearsonhpe@gmail.com>

Extend rxe_recv.c to support xrc packets. Add checks for qp type and
check that qp->xrcd matches srq->xrcd.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_hdr.h  |  5 +-
 drivers/infiniband/sw/rxe/rxe_recv.c | 79 +++++++++++++++++++++-------
 2 files changed, 63 insertions(+), 21 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_hdr.h b/drivers/infiniband/sw/rxe/rxe_hdr.h
index e947bcf75209..fb9959d91b8d 100644
--- a/drivers/infiniband/sw/rxe/rxe_hdr.h
+++ b/drivers/infiniband/sw/rxe/rxe_hdr.h
@@ -14,7 +14,10 @@ struct rxe_pkt_info {
 	struct rxe_dev		*rxe;		/* device that owns packet */
 	struct rxe_qp		*qp;		/* qp that owns packet */
-	struct rxe_send_wqe	*wqe;		/* send wqe */
+	union {
+		struct rxe_send_wqe *wqe;	/* send wqe */
+		struct rxe_srq	*srq;		/* srq for recvd xrc packets */
+	};
 	u8			*hdr;		/* points to bth */
 	u32			mask;		/* useful info about pkt */
 	u32			psn;		/* bth psn of packet */
diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
index f3ad7b6dbd97..4f35757d3c52 100644
--- a/drivers/infiniband/sw/rxe/rxe_recv.c
+++ b/drivers/infiniband/sw/rxe/rxe_recv.c
@@ -13,49 +13,51 @@ static int check_type_state(struct rxe_dev *rxe, struct rxe_pkt_info *pkt,
 			    struct rxe_qp *qp)
 {
-	unsigned int pkt_type;
+	unsigned int pkt_type = pkt->opcode & IB_OPCODE_TYPE;
 
 	if (unlikely(!qp->valid))
-		goto err1;
+		goto err_out;
 
-	pkt_type = pkt->opcode & 0xe0;
 	switch (qp_type(qp)) {
 	case IB_QPT_RC:
-		if (unlikely(pkt_type != IB_OPCODE_RC)) {
-			pr_warn_ratelimited("bad qp type\n");
-			goto err1;
-		}
+		if (unlikely(pkt_type != IB_OPCODE_RC))
+			goto err_out;
 		break;
 	case IB_QPT_UC:
-		if (unlikely(pkt_type != IB_OPCODE_UC)) {
-			pr_warn_ratelimited("bad qp type\n");
-			goto err1;
-		}
+		if (unlikely(pkt_type != IB_OPCODE_UC))
+			goto err_out;
 		break;
 	case IB_QPT_UD:
 	case IB_QPT_GSI:
-		if (unlikely(pkt_type != IB_OPCODE_UD)) {
-			pr_warn_ratelimited("bad qp type\n");
-			goto err1;
-		}
+		if (unlikely(pkt_type != IB_OPCODE_UD))
+			goto err_out;
+		break;
+	case IB_QPT_XRC_INI:
+		if (unlikely(pkt_type != IB_OPCODE_XRC))
+			goto err_out;
+		break;
+	case IB_QPT_XRC_TGT:
+		if (unlikely(pkt_type != IB_OPCODE_XRC))
+			goto err_out;
 		break;
 	default:
-		pr_warn_ratelimited("unsupported qp type\n");
-		goto err1;
+		goto err_out;
 	}
 
 	if (pkt->mask & RXE_REQ_MASK) {
 		if (unlikely(qp->resp.state != QP_STATE_READY))
-			goto err1;
+			goto err_out;
 	} else if (unlikely(qp->req.state < QP_STATE_READY ||
 			    qp->req.state > QP_STATE_DRAINED)) {
-		goto err1;
+		goto err_out;
 	}
 
 	return 0;
 
-err1:
+err_out:
+	pr_debug("%s: failed qp#%d: opcode = 0x%02x\n", __func__,
+		 qp->elem.index, pkt->opcode);
 	return -EINVAL;
 }
 
@@ -166,6 +168,37 @@ static int check_addr(struct rxe_dev *rxe, struct rxe_pkt_info *pkt,
 	return -EINVAL;
 }
 
+static int check_xrcd(struct rxe_dev *rxe, struct rxe_pkt_info *pkt,
+		      struct rxe_qp *qp)
+{
+	int err;
+
+	struct rxe_xrcd *xrcd = qp->xrcd;
+	u32 srqn = xrceth_srqn(pkt);
+	struct rxe_srq *srq;
+
+	srq = rxe_pool_get_index(&rxe->srq_pool, srqn);
+	if (unlikely(!srq)) {
+		err = -EINVAL;
+		goto err_out;
+	}
+
+	if (unlikely(srq->xrcd != xrcd)) {
+		rxe_put(srq);
+		err = -EINVAL;
+		goto err_out;
+	}
+
+	pkt->srq = srq;
+
+	return 0;
+
+err_out:
+	pr_debug("%s: qp#%d: failed err = %d\n", __func__,
+		 qp->elem.index, err);
+	return err;
+}
+
 static int hdr_check(struct rxe_pkt_info *pkt)
 {
 	struct rxe_dev *rxe = pkt->rxe;
@@ -205,6 +238,12 @@ static int hdr_check(struct rxe_pkt_info *pkt)
 		err = check_keys(rxe, pkt, qpn, qp);
 		if (unlikely(err))
 			goto err2;
+
+		if (qp_type(qp) == IB_QPT_XRC_TGT) {
+			err = check_xrcd(rxe, pkt, qp);
+			if (unlikely(err))
+				goto err2;
+		}
 	} else {
 		if (unlikely((pkt->mask & RXE_GRH_MASK) == 0)) {
 			pr_warn_ratelimited("no grh for mcast qpn\n");
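check_xrcd() requires that the srq named by the XRCETH srqn was created
against the same xrcd the XRC TGT qp was opened with. A sketch of that
pairing using the stock libibverbs XRC srq API (illustrative only, not
part of this patch):

	#include <fcntl.h>
	#include <infiniband/verbs.h>

	static struct ibv_xrcd *open_xrcd(struct ibv_context *ctx, int fd)
	{
		struct ibv_xrcd_init_attr attr = {
			.comp_mask = IBV_XRCD_INIT_ATTR_FD |
				     IBV_XRCD_INIT_ATTR_OFLAGS,
			.fd        = fd,	/* file shared by cooperating processes */
			.oflags    = O_CREAT,
		};

		return ibv_open_xrcd(ctx, &attr);
	}

	/* the srq below carries the xrcd; a tgt qp opened on the same
	 * xrcd passes the srq->xrcd == qp->xrcd check in check_xrcd()
	 */
	static struct ibv_srq *create_xrc_srq(struct ibv_context *ctx,
					      struct ibv_pd *pd,
					      struct ibv_cq *cq,
					      struct ibv_xrcd *xrcd)
	{
		struct ibv_srq_init_attr_ex attr = {
			.attr      = { .max_wr = 64, .max_sge = 1 },
			.comp_mask = IBV_SRQ_INIT_ATTR_TYPE |
				     IBV_SRQ_INIT_ATTR_XRCD |
				     IBV_SRQ_INIT_ATTR_PD |
				     IBV_SRQ_INIT_ATTR_CQ,
			.srq_type  = IBV_SRQT_XRC,
			.pd        = pd,
			.xrcd      = xrcd,
			.cq        = cq,
		};

		return ibv_create_srq_ex(ctx, &attr);
	}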
From patchwork Thu Sep 29 17:08:34 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 10/13] RDMA/rxe: Extend rxe_comp.c to support xrc qps
Date: Thu, 29 Sep 2022 12:08:34 -0500
Message-Id: <20220929170836.17838-11-rpearsonhpe@gmail.com>
In-Reply-To: <20220929170836.17838-1-rpearsonhpe@gmail.com>

Extend code in rxe_comp.c to support xrc qp types.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_comp.c | 45 ++++++++++++++--------------
 1 file changed, 22 insertions(+), 23 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index 1f10ae4a35d5..cb6621b4055d 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -213,12 +213,13 @@ static inline enum comp_state check_ack(struct rxe_qp *qp,
 					struct rxe_pkt_info *pkt,
 					struct rxe_send_wqe *wqe)
 {
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 	unsigned int mask = pkt->mask;
+	int opcode;
 	u8 syn;
-	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 
-	/* Check the sequence only */
-	switch (qp->comp.opcode) {
+	/* Mask off type bits and check the sequence only */
+	switch (qp->comp.opcode & IB_OPCODE_CMD) {
 	case -1:
 		/* Will catch all *_ONLY cases. */
 		if (!(mask & RXE_FIRST_MASK))
@@ -226,42 +227,39 @@ static inline enum comp_state check_ack(struct rxe_qp *qp,
 
 		break;
 
-	case IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST:
-	case IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE:
-		if (pkt->opcode != IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE &&
-		    pkt->opcode != IB_OPCODE_RC_RDMA_READ_RESPONSE_LAST) {
+	case IB_OPCODE_RDMA_READ_RESPONSE_FIRST:
+	case IB_OPCODE_RDMA_READ_RESPONSE_MIDDLE:
+		opcode = pkt->opcode & IB_OPCODE_CMD;
+		if (opcode != IB_OPCODE_RDMA_READ_RESPONSE_MIDDLE &&
+		    opcode != IB_OPCODE_RDMA_READ_RESPONSE_LAST) {
 			/* read retries of partial data may restart from
 			 * read response first or response only.
 			 */
 			if ((pkt->psn == wqe->first_psn &&
-			     pkt->opcode ==
-			     IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST) ||
+			     opcode == IB_OPCODE_RDMA_READ_RESPONSE_FIRST) ||
 			    (wqe->first_psn == wqe->last_psn &&
-			     pkt->opcode ==
-			     IB_OPCODE_RC_RDMA_READ_RESPONSE_ONLY))
+			     opcode == IB_OPCODE_RDMA_READ_RESPONSE_ONLY))
 				break;
 
 			return COMPST_ERROR;
 		}
 		break;
 	default:
-		WARN_ON_ONCE(1);
+		//WARN_ON_ONCE(1);
 	}
 
-	/* Check operation validity. */
-	switch (pkt->opcode) {
-	case IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST:
-	case IB_OPCODE_RC_RDMA_READ_RESPONSE_LAST:
-	case IB_OPCODE_RC_RDMA_READ_RESPONSE_ONLY:
+	/* Mask off the type bits and check operation validity. */
+	switch (pkt->opcode & IB_OPCODE_CMD) {
+	case IB_OPCODE_RDMA_READ_RESPONSE_FIRST:
+	case IB_OPCODE_RDMA_READ_RESPONSE_LAST:
+	case IB_OPCODE_RDMA_READ_RESPONSE_ONLY:
 		syn = aeth_syn(pkt);
 
 		if ((syn & AETH_TYPE_MASK) != AETH_ACK)
 			return COMPST_ERROR;
 
 		fallthrough;
-		/* (IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE doesn't have an AETH)
-		 */
-	case IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE:
+	case IB_OPCODE_RDMA_READ_RESPONSE_MIDDLE:
 		if (wqe->wr.opcode != IB_WR_RDMA_READ &&
 		    wqe->wr.opcode != IB_WR_RDMA_READ_WITH_INV) {
 			wqe->status = IB_WC_FATAL_ERR;
@@ -270,7 +268,7 @@ static inline enum comp_state check_ack(struct rxe_qp *qp,
 		reset_retry_counters(qp);
 		return COMPST_READ;
 
-	case IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE:
+	case IB_OPCODE_ATOMIC_ACKNOWLEDGE:
 		syn = aeth_syn(pkt);
 
 		if ((syn & AETH_TYPE_MASK) != AETH_ACK)
@@ -282,7 +280,7 @@ static inline enum comp_state check_ack(struct rxe_qp *qp,
 		reset_retry_counters(qp);
 		return COMPST_ATOMIC;
 
-	case IB_OPCODE_RC_ACKNOWLEDGE:
+	case IB_OPCODE_ACKNOWLEDGE:
 		syn = aeth_syn(pkt);
 		switch (syn & AETH_TYPE_MASK) {
 		case AETH_ACK:
@@ -669,7 +667,8 @@ int rxe_completer(void *arg)
 		 *     timeouts but try to keep them as few as possible)
 		 * (4) the timeout parameter is set
 		 */
-		if ((qp_type(qp) == IB_QPT_RC) &&
+		if ((qp_type(qp) == IB_QPT_RC ||
+		     qp_type(qp) == IB_QPT_XRC_INI) &&
 		    (qp->req.state == QP_STATE_READY) &&
 		    (psn_compare(qp->req.psn, qp->comp.psn) > 0) &&
 		    qp->qp_timeout_jiffies)
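The changes above lean on the split of the 8-bit BTH opcode into a
transport type in the high three bits and a command in the low five,
with the IB_OPCODE_TYPE and IB_OPCODE_CMD masks added earlier in this
series. A small standalone sketch of that arithmetic (mask and group
values assumed from the IBA opcode map, not quoted from the patch):

	#define IB_OPCODE_TYPE	0xe0	/* high 3 bits: RC = 0x00, XRC = 0xa0 */
	#define IB_OPCODE_CMD	0x1f	/* low 5 bits: command within a transport */

	/* IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST (0x0d) and its XRC
	 * variant (0xad) share the command bits, so masking with
	 * IB_OPCODE_CMD lets one switch statement serve both transports;
	 * the type bits are OR-ed back in when building a reply, as
	 * read_reply() does in patch 13.
	 */
	static int same_command(unsigned char a, unsigned char b)
	{
		return (a & IB_OPCODE_CMD) == (b & IB_OPCODE_CMD);
	}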
From patchwork Thu Sep 29 17:08:35 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 11/13] RDMA/rxe: Extend rxe_req.c to support xrc qps
Date: Thu, 29 Sep 2022 12:08:35 -0500
Message-Id: <20220929170836.17838-12-rpearsonhpe@gmail.com>
In-Reply-To: <20220929170836.17838-1-rpearsonhpe@gmail.com>

Extend code in rxe_req.c to support xrc qp types.

Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_req.c | 38 +++++++++++++++++------------
 1 file changed, 22 insertions(+), 16 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index d2a9abfed596..e7bb969f97f3 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -229,7 +229,7 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
 {
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 	struct sk_buff *skb;
-	struct rxe_send_wr *ibwr = &wqe->wr;
+	struct rxe_send_wr *wr = &wqe->wr;
 	int pad = (-payload) & 0x3;
 	int paylen;
 	int solicited;
@@ -246,13 +246,13 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
 		return NULL;
 
 	/* init bth */
-	solicited = (ibwr->send_flags & IB_SEND_SOLICITED) &&
+	solicited = (wr->send_flags & IB_SEND_SOLICITED) &&
 			(pkt->mask & RXE_LAST_MASK) &&
 			((pkt->mask & (RXE_SEND_MASK)) ||
 			(pkt->mask & (RXE_WRITE_MASK | RXE_IMMDT_MASK)) ==
 			(RXE_WRITE_MASK | RXE_IMMDT_MASK));
 
-	qp_num = (pkt->mask & RXE_DETH_MASK) ? ibwr->wr.ud.remote_qpn :
+	qp_num = (pkt->mask & RXE_DETH_MASK) ? wr->wr.ud.remote_qpn :
 					 qp->attr.dest_qp_num;
 
 	ack_req = ((pkt->mask & RXE_LAST_MASK) ||
@@ -264,34 +264,37 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
 		 ack_req, pkt->psn);
 
 	/* init optional headers */
+	if (pkt->mask & RXE_XRCETH_MASK)
+		xrceth_set_srqn(pkt, wr->srq_num);
+
 	if (pkt->mask & RXE_RETH_MASK) {
-		reth_set_rkey(pkt, ibwr->wr.rdma.rkey);
+		reth_set_rkey(pkt, wr->wr.rdma.rkey);
 		reth_set_va(pkt, wqe->iova);
 		reth_set_len(pkt, wqe->dma.resid);
 	}
 
 	if (pkt->mask & RXE_IMMDT_MASK)
-		immdt_set_imm(pkt, ibwr->ex.imm_data);
+		immdt_set_imm(pkt, wr->ex.imm_data);
 
 	if (pkt->mask & RXE_IETH_MASK)
-		ieth_set_rkey(pkt, ibwr->ex.invalidate_rkey);
+		ieth_set_rkey(pkt, wr->ex.invalidate_rkey);
 
 	if (pkt->mask & RXE_ATMETH_MASK) {
 		atmeth_set_va(pkt, wqe->iova);
-		if (opcode == IB_OPCODE_RC_COMPARE_SWAP) {
-			atmeth_set_swap_add(pkt, ibwr->wr.atomic.swap);
-			atmeth_set_comp(pkt, ibwr->wr.atomic.compare_add);
+		if ((opcode & IB_OPCODE_CMD) == IB_OPCODE_COMPARE_SWAP) {
+			atmeth_set_swap_add(pkt, wr->wr.atomic.swap);
+			atmeth_set_comp(pkt, wr->wr.atomic.compare_add);
 		} else {
-			atmeth_set_swap_add(pkt, ibwr->wr.atomic.compare_add);
+			atmeth_set_swap_add(pkt, wr->wr.atomic.compare_add);
 		}
-		atmeth_set_rkey(pkt, ibwr->wr.atomic.rkey);
+		atmeth_set_rkey(pkt, wr->wr.atomic.rkey);
 	}
 
 	if (pkt->mask & RXE_DETH_MASK) {
 		if (qp->ibqp.qp_num == 1)
 			deth_set_qkey(pkt, GSI_QKEY);
 		else
-			deth_set_qkey(pkt, ibwr->wr.ud.remote_qkey);
+			deth_set_qkey(pkt, wr->wr.ud.remote_qkey);
 		deth_set_sqp(pkt, qp->ibqp.qp_num);
 	}
 
@@ -338,8 +341,10 @@ static void update_wqe_state(struct rxe_qp *qp,
 			     struct rxe_pkt_info *pkt)
 {
 	if (pkt->mask & RXE_LAST_MASK) {
-		if (qp_type(qp) == IB_QPT_RC)
+		if (qp_type(qp) == IB_QPT_RC ||
+		    qp_type(qp) == IB_QPT_XRC_INI)
 			wqe->state = wqe_state_pending;
+		/* other qp types handled in rxe_xmit_packet() */
 	} else {
 		wqe->state = wqe_state_processing;
 	}
@@ -532,9 +537,10 @@ int rxe_requester(void *arg)
 			goto done;
 		}
 
-	if (unlikely(qp_type(qp) == IB_QPT_RC &&
-		psn_compare(qp->req.psn, (qp->comp.psn +
-				RXE_MAX_UNACKED_PSNS)) > 0)) {
+	if (unlikely((qp_type(qp) == IB_QPT_RC ||
+		      qp_type(qp) == IB_QPT_XRC_INI) &&
+		     psn_compare(qp->req.psn, (qp->comp.psn +
+				RXE_MAX_UNACKED_PSNS)) > 0)) {
 		qp->req.wait_psn = 1;
 		goto exit;
 	}
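On the initiator side, the srq number that init_req_packet() writes
into the XRCETH header comes from the posted work request. A sketch of
how a libibverbs user would supply it (illustrative only, not part of
this patch; wr.qp_type.xrc.remote_srqn is the stock verbs field):

	#include <string.h>
	#include <infiniband/verbs.h>

	static int post_xrc_send(struct ibv_qp *ini_qp, uint64_t addr,
				 uint32_t lkey, uint32_t len,
				 uint32_t remote_srqn)
	{
		struct ibv_sge sge = {
			.addr = addr, .length = len, .lkey = lkey,
		};
		struct ibv_send_wr wr, *bad_wr;

		memset(&wr, 0, sizeof(wr));
		wr.opcode     = IBV_WR_SEND;
		wr.sg_list    = &sge;
		wr.num_sge    = 1;
		wr.send_flags = IBV_SEND_SIGNALED;
		/* names the target srq; ends up in the XRCETH header */
		wr.qp_type.xrc.remote_srqn = remote_srqn;

		return ibv_post_send(ini_qp, &wr, &bad_wr);
	}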
From patchwork Thu Sep 29 17:08:36 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 12/13] RDMA/rxe: Extend rxe_net.c to support xrc qps
Date: Thu, 29 Sep 2022 12:08:36 -0500
Message-Id: <20220929170836.17838-13-rpearsonhpe@gmail.com>
In-Reply-To: <20220929170836.17838-1-rpearsonhpe@gmail.com>

Extend code in rxe_net.c to support xrc qp types.
Signed-off-by: Bob Pearson
---
 drivers/infiniband/sw/rxe/rxe_net.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index d46190ad082f..d9bedd6fc497 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -92,7 +92,7 @@ static struct dst_entry *rxe_find_route(struct net_device *ndev,
 {
 	struct dst_entry *dst = NULL;
 
-	if (qp_type(qp) == IB_QPT_RC)
+	if (qp_type(qp) == IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_INI)
 		dst = sk_dst_get(qp->sk->sk);
 
 	if (!dst || !dst_check(dst, qp->dst_cookie)) {
@@ -120,7 +120,8 @@ static struct dst_entry *rxe_find_route(struct net_device *ndev,
 #endif
 	}
 
-	if (dst && (qp_type(qp) == IB_QPT_RC)) {
+	if (dst && (qp_type(qp) == IB_QPT_RC ||
+		    qp_type(qp) == IB_QPT_XRC_INI)) {
 		dst_hold(dst);
 		sk_dst_set(qp->sk->sk, dst);
 	}
@@ -386,14 +387,23 @@ static int rxe_send(struct sk_buff *skb, struct rxe_pkt_info *pkt)
  */
 static int rxe_loopback(struct sk_buff *skb, struct rxe_pkt_info *pkt)
 {
-	memcpy(SKB_TO_PKT(skb), pkt, sizeof(*pkt));
+	struct rxe_pkt_info *new_pkt = SKB_TO_PKT(skb);
+
+	memset(new_pkt, 0, sizeof(*new_pkt));
+
+	/* match rxe_udp_encap_recv */
+	new_pkt->rxe = pkt->rxe;
+	new_pkt->port_num = 1;
+	new_pkt->hdr = pkt->hdr;
+	new_pkt->mask = RXE_GRH_MASK;
+	new_pkt->paylen = pkt->paylen;
 
 	if (skb->protocol == htons(ETH_P_IP))
 		skb_pull(skb, sizeof(struct iphdr));
 	else
 		skb_pull(skb, sizeof(struct ipv6hdr));
 
-	if (WARN_ON(!ib_device_try_get(&pkt->rxe->ib_dev))) {
+	if (WARN_ON(!ib_device_try_get(&new_pkt->rxe->ib_dev))) {
 		kfree_skb(skb);
 		return -EIO;
 	}
@@ -412,7 +422,6 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 
 	if ((is_request && (qp->req.state != QP_STATE_READY)) ||
 	    (!is_request && (qp->resp.state != QP_STATE_READY))) {
-		pr_info("Packet dropped. QP is not in ready state\n");
 		goto drop;
 	}
 
@@ -427,8 +436,8 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 		return err;
 	}
 
-	if ((qp_type(qp) != IB_QPT_RC) &&
-	    (pkt->mask & RXE_LAST_MASK)) {
+	if ((pkt->mask & RXE_REQ_MASK) && (pkt->mask & RXE_LAST_MASK) &&
+	    (qp_type(qp) != IB_QPT_RC && qp_type(qp) != IB_QPT_XRC_INI)) {
 		pkt->wqe->state = wqe_state_done;
 		rxe_run_task(&qp->comp.task, 1);
 	}
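The rxe_loopback() rewrite above matters for xrc because rxe_pkt_info
now overlays the send-side wqe pointer with the receive-side srq
pointer (patch 09), so copying the sender's pkt info wholesale would
hand the receive path a stale send wqe where it expects an srq.
Rebuilding only the fields rxe_udp_encap_recv() would set keeps the
union unambiguous. The overlay in question, repeated for reference:

	/* from rxe_hdr.h after patch 09: only one member is valid at
	 * a time, send side vs. receive side
	 */
	union {
		struct rxe_send_wqe *wqe;	/* valid on the send side */
		struct rxe_srq *srq;		/* valid for received xrc packets */
	};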
From patchwork Thu Sep 29 17:08:37 2022
From: Bob Pearson
To: jgg@nvidia.com, zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Cc: Bob Pearson
Subject: [PATCH for-next v2 13/13] RDMA/rxe: Extend rxe_resp.c to support xrc qps
Date: Thu, 29 Sep 2022 12:08:37 -0500
Message-Id: <20220929170836.17838-14-rpearsonhpe@gmail.com>
In-Reply-To: <20220929170836.17838-1-rpearsonhpe@gmail.com>

Extend code in rxe_resp.c to support xrc qp types.

Signed-off-by: Bob Pearson
---
v2
  Rebased to current for-next

 drivers/infiniband/sw/rxe/rxe_loc.h  |   3 +-
 drivers/infiniband/sw/rxe/rxe_mw.c   |  14 +--
 drivers/infiniband/sw/rxe/rxe_resp.c | 164 +++++++++++++++++++++------
 3 files changed, 141 insertions(+), 40 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 1eba6384b6a4..4be3c74e0f86 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -87,7 +87,8 @@ int rxe_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata);
 int rxe_dealloc_mw(struct ib_mw *ibmw);
 int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey);
-struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey);
+struct rxe_mw *rxe_lookup_mw(struct rxe_pd *pd, struct rxe_qp *qp,
+			     int access, u32 rkey);
 void rxe_mw_cleanup(struct rxe_pool_elem *elem);
 
 /* rxe_net.c */
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 902b7df7aaed..890503ac3a95 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -280,10 +280,10 @@ int rxe_invalidate_mw(struct rxe_qp *qp, u32 rkey)
 	return ret;
 }
 
-struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey)
+struct rxe_mw *rxe_lookup_mw(struct rxe_pd *pd, struct rxe_qp *qp,
+			     int access, u32 rkey)
 {
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
-	struct rxe_pd *pd = to_rpd(qp->ibqp.pd);
 	struct rxe_mw *mw;
 	int index = rkey >> 8;
 
@@ -291,11 +291,11 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey)
 	if (!mw)
 		return NULL;
 
-	if (unlikely((mw->rkey != rkey) || rxe_mw_pd(mw) != pd ||
-		     (mw->ibmw.type == IB_MW_TYPE_2 && mw->qp != qp) ||
-		     (mw->length == 0) ||
-		     (access && !(access & mw->access)) ||
-		     mw->state != RXE_MW_STATE_VALID)) {
+	if ((mw->rkey != rkey) || rxe_mw_pd(mw) != pd ||
+	    (mw->ibmw.type == IB_MW_TYPE_2 &&
+	     (mw->qp != qp || qp_type(qp) == IB_QPT_XRC_TGT)) ||
+	    (mw->length == 0) || (access && !(access & mw->access)) ||
+	    mw->state != RXE_MW_STATE_VALID) {
 		rxe_put(mw);
 		return NULL;
 	}
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index e62a7f31779f..01fea1b328b7 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -88,7 +88,8 @@ void rxe_resp_queue_pkt(struct rxe_qp *qp, struct sk_buff *skb)
 
 	skb_queue_tail(&qp->req_pkts, skb);
 
-	must_sched = (pkt->opcode == IB_OPCODE_RC_RDMA_READ_REQUEST) ||
+	/* mask off opcode type bits */
+	must_sched = ((pkt->opcode & 0x1f) == IB_OPCODE_RDMA_READ_REQUEST) ||
 		     (skb_queue_len(&qp->req_pkts) > 1);
 
 	rxe_run_task(&qp->resp.task, must_sched);
@@ -127,6 +128,7 @@ static enum resp_states check_psn(struct rxe_qp *qp,
 
 	switch (qp_type(qp)) {
 	case IB_QPT_RC:
+	case IB_QPT_XRC_TGT:
 		if (diff > 0) {
 			if (qp->resp.sent_psn_nak)
 				return RESPST_CLEANUP;
@@ -156,6 +158,7 @@ static enum resp_states check_psn(struct rxe_qp *qp,
 			return RESPST_CLEANUP;
 		}
 		break;
+
 	default:
 		break;
 	}
@@ -248,6 +251,47 @@ static enum resp_states check_op_seq(struct rxe_qp *qp,
 		}
 		break;
 
+	case IB_QPT_XRC_TGT:
+		switch (qp->resp.opcode) {
+		case IB_OPCODE_XRC_SEND_FIRST:
+		case IB_OPCODE_XRC_SEND_MIDDLE:
+			switch (pkt->opcode) {
+			case IB_OPCODE_XRC_SEND_MIDDLE:
+			case IB_OPCODE_XRC_SEND_LAST:
+			case IB_OPCODE_XRC_SEND_LAST_WITH_IMMEDIATE:
+			case IB_OPCODE_XRC_SEND_LAST_WITH_INVALIDATE:
+				return RESPST_CHK_OP_VALID;
+			default:
+				return RESPST_ERR_MISSING_OPCODE_LAST_C;
+			}
+
+		case IB_OPCODE_XRC_RDMA_WRITE_FIRST:
+		case IB_OPCODE_XRC_RDMA_WRITE_MIDDLE:
+			switch (pkt->opcode) {
+			case IB_OPCODE_XRC_RDMA_WRITE_MIDDLE:
+			case IB_OPCODE_XRC_RDMA_WRITE_LAST:
+			case IB_OPCODE_XRC_RDMA_WRITE_LAST_WITH_IMMEDIATE:
+				return RESPST_CHK_OP_VALID;
+			default:
+				return RESPST_ERR_MISSING_OPCODE_LAST_C;
+			}
+
+		default:
+			switch (pkt->opcode) {
+			case IB_OPCODE_XRC_SEND_MIDDLE:
+			case IB_OPCODE_XRC_SEND_LAST:
+			case IB_OPCODE_XRC_SEND_LAST_WITH_IMMEDIATE:
+			case IB_OPCODE_XRC_SEND_LAST_WITH_INVALIDATE:
+			case IB_OPCODE_XRC_RDMA_WRITE_MIDDLE:
+			case IB_OPCODE_XRC_RDMA_WRITE_LAST:
+			case IB_OPCODE_XRC_RDMA_WRITE_LAST_WITH_IMMEDIATE:
+				return RESPST_ERR_MISSING_OPCODE_FIRST;
+			default:
+				return RESPST_CHK_OP_VALID;
+			}
+		}
+		break;
+
 	default:
 		return RESPST_CHK_OP_VALID;
 	}
@@ -258,6 +302,7 @@ static enum resp_states check_op_valid(struct rxe_qp *qp,
 {
 	switch (qp_type(qp)) {
 	case IB_QPT_RC:
+	case IB_QPT_XRC_TGT:
 		if (((pkt->mask & RXE_READ_MASK) &&
 		     !(qp->attr.qp_access_flags & IB_ACCESS_REMOTE_READ)) ||
 		    ((pkt->mask & RXE_WRITE_MASK) &&
@@ -290,9 +335,22 @@ static enum resp_states check_op_valid(struct rxe_qp *qp,
 	return RESPST_CHK_RESOURCE;
 }
 
-static enum resp_states get_srq_wqe(struct rxe_qp *qp)
+static struct rxe_srq *get_srq(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+{
+	struct rxe_srq *srq;
+
+	if (qp_type(qp) == IB_QPT_XRC_TGT)
+		srq = pkt->srq;
+	else if (qp->srq)
+		srq = qp->srq;
+	else
+		srq = NULL;
+
+	return srq;
+}
+
+static enum resp_states get_srq_wqe(struct rxe_qp *qp, struct rxe_srq *srq)
 {
-	struct rxe_srq *srq = qp->srq;
 	struct rxe_queue *q = srq->rq.queue;
 	struct rxe_recv_wqe *wqe;
 	struct ib_event ev;
@@ -344,7 +402,7 @@ static enum resp_states get_srq_wqe(struct rxe_qp *qp)
 static enum resp_states check_resource(struct rxe_qp *qp,
 				       struct rxe_pkt_info *pkt)
 {
-	struct rxe_srq *srq = qp->srq;
+	struct rxe_srq *srq = get_srq(qp, pkt);
 
 	if (qp->resp.state == QP_STATE_ERROR) {
 		if (qp->resp.wqe) {
@@ -377,7 +435,7 @@ static enum resp_states check_resource(struct rxe_qp *qp,
 
 	if (pkt->mask & RXE_RWR_MASK) {
 		if (srq)
-			return get_srq_wqe(qp);
+			return get_srq_wqe(qp, srq);
 
 		qp->resp.wqe = queue_head(qp->rq.queue,
 				QUEUE_TYPE_FROM_CLIENT);
@@ -387,6 +445,7 @@ static enum resp_states check_resource(struct rxe_qp *qp,
 	return RESPST_CHK_LENGTH;
 }
 
+/* TODO this should actually do what it says per IBA spec */
 static enum resp_states check_length(struct rxe_qp *qp,
 				     struct rxe_pkt_info *pkt)
 {
@@ -397,6 +456,9 @@ static enum resp_states check_length(struct rxe_qp *qp,
 	case IB_QPT_UC:
 		return RESPST_CHK_RKEY;
 
+	case IB_QPT_XRC_TGT:
+		return RESPST_CHK_RKEY;
+
 	default:
 		return RESPST_CHK_RKEY;
 	}
@@ -407,6 +469,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 {
 	struct rxe_mr *mr = NULL;
 	struct rxe_mw *mw = NULL;
+	struct rxe_pd *pd;
 	u64 va;
 	u32 rkey;
 	u32 resid;
@@ -447,8 +510,11 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 	resid = qp->resp.resid;
 	pktlen = payload_size(pkt);
 
+	/* we have ref counts on qp and pkt->srq so this is just a temp */
+	pd = (qp_type(qp) == IB_QPT_XRC_TGT) ? pkt->srq->pd : qp->pd;
+
 	if (rkey_is_mw(rkey)) {
-		mw = rxe_lookup_mw(qp, access, rkey);
+		mw = rxe_lookup_mw(pd, qp, access, rkey);
 		if (!mw) {
 			pr_debug("%s: no MW matches rkey %#x\n",
 				 __func__, rkey);
@@ -469,7 +535,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 		rxe_put(mw);
 		rxe_get(mr);
 	} else {
-		mr = lookup_mr(qp->pd, access, rkey, RXE_LOOKUP_REMOTE);
+		mr = lookup_mr(pd, access, rkey, RXE_LOOKUP_REMOTE);
 		if (!mr) {
 			pr_debug("%s: no MR matches rkey %#x\n",
 				 __func__, rkey);
@@ -518,12 +584,12 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 	return state;
 }
 
-static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr,
-				     int data_len)
+static enum resp_states send_data_in(struct rxe_pd *pd, struct rxe_qp *qp,
+				     void *data_addr, int data_len)
 {
 	int err;
 
-	err = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma,
+	err = copy_data(pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma,
 			data_addr, data_len, RXE_TO_MR_OBJ);
 	if (unlikely(err))
 		return (err == -ENOSPC) ? RESPST_ERR_LENGTH
@@ -627,7 +693,8 @@ static enum resp_states atomic_reply(struct rxe_qp *qp,
 		spin_lock_bh(&atomic_ops_lock);
 		res->atomic.orig_val = value = *vaddr;
 
-		if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP) {
+		if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP ||
+		    pkt->opcode == IB_OPCODE_XRC_COMPARE_SWAP) {
 			if (value == atmeth_comp(pkt))
 				value = atmeth_swap_add(pkt);
 		} else {
@@ -786,24 +853,30 @@ static enum resp_states read_reply(struct rxe_qp *qp,
 		}
 
 		if (res->read.resid <= mtu)
-			opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_ONLY;
+			opcode = IB_OPCODE_RDMA_READ_RESPONSE_ONLY;
 		else
-			opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST;
+			opcode = IB_OPCODE_RDMA_READ_RESPONSE_FIRST;
 	} else {
 		mr = rxe_recheck_mr(qp, res->read.rkey);
 		if (!mr)
 			return RESPST_ERR_RKEY_VIOLATION;
 
 		if (res->read.resid > mtu)
-			opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE;
+			opcode = IB_OPCODE_RDMA_READ_RESPONSE_MIDDLE;
 		else
-			opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_LAST;
+			opcode = IB_OPCODE_RDMA_READ_RESPONSE_LAST;
 	}
 
 	res->state = rdatm_res_state_next;
 
 	payload = min_t(int, res->read.resid, mtu);
 
+	/* fixup opcode type */
+	if (qp_type(qp) == IB_QPT_XRC_TGT)
+		opcode |= IB_OPCODE_XRC;
+	else
+		opcode |= IB_OPCODE_RC;
+
 	skb = prepare_ack_packet(qp, &ack_pkt, opcode, payload,
 				 res->cur_psn, AETH_ACK_UNLIMITED);
 	if (!skb)
@@ -858,6 +931,8 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
 	enum resp_states err;
 	struct sk_buff *skb = PKT_TO_SKB(pkt);
 	union rdma_network_hdr hdr;
+	struct rxe_pd *pd = (qp_type(qp) == IB_QPT_XRC_TGT) ?
+				pkt->srq->pd : qp->pd;
 
 	if (pkt->mask & RXE_SEND_MASK) {
 		if (qp_type(qp) == IB_QPT_UD ||
@@ -867,15 +942,15 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
 				       sizeof(hdr.reserved));
 				memcpy(&hdr.roce4grh, ip_hdr(skb),
 				       sizeof(hdr.roce4grh));
-				err = send_data_in(qp, &hdr, sizeof(hdr));
+				err = send_data_in(pd, qp, &hdr, sizeof(hdr));
 			} else {
-				err = send_data_in(qp, ipv6_hdr(skb),
+				err = send_data_in(pd, qp, ipv6_hdr(skb),
 						   sizeof(hdr));
 			}
 			if (err)
 				return err;
 		}
-		err = send_data_in(qp, payload_addr(pkt), payload_size(pkt));
+		err = send_data_in(pd, qp, payload_addr(pkt), payload_size(pkt));
 		if (err)
 			return err;
 	} else if (pkt->mask & RXE_WRITE_MASK) {
@@ -914,7 +989,7 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
 	if (pkt->mask & RXE_COMP_MASK)
 		return RESPST_COMPLETE;
-	else if (qp_type(qp) == IB_QPT_RC)
+	else if (qp_type(qp) == IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_TGT)
 		return RESPST_ACKNOWLEDGE;
 	else
 		return RESPST_CLEANUP;
@@ -928,13 +1003,21 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 	struct ib_uverbs_wc *uwc = &cqe.uibwc;
 	struct rxe_recv_wqe *wqe = qp->resp.wqe;
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	struct rxe_cq *cq;
+	struct rxe_srq *srq;
 
 	if (!wqe)
 		goto finish;
 
 	memset(&cqe, 0, sizeof(cqe));
 
-	if (qp->rcq->is_user) {
+	/* srq and cq if != 0 are protected by references held by qp or pkt */
+	srq = (qp_type(qp) == IB_QPT_XRC_TGT) ? pkt->srq : qp->srq;
+	cq = (qp_type(qp) == IB_QPT_XRC_TGT) ? pkt->srq->cq : qp->rcq;
+
+	WARN_ON(!cq);
+
+	if (cq->is_user) {
 		uwc->status		= qp->resp.status;
 		uwc->qp_num		= qp->ibqp.qp_num;
 		uwc->wr_id		= wqe->wr_id;
@@ -956,7 +1039,7 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 		/* fields after byte_len are different between kernel and user
 		 * space
 		 */
-		if (qp->rcq->is_user) {
+		if (cq->is_user) {
 			uwc->wc_flags = IB_WC_GRH;
 
 			if (pkt->mask & RXE_IMMDT_MASK) {
@@ -1005,12 +1088,13 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 	}
 
 	/* have copy for srq and reference for !srq */
-	if (!qp->srq)
+	if (!srq)
 		queue_advance_consumer(qp->rq.queue, QUEUE_TYPE_FROM_CLIENT);
 
 	qp->resp.wqe = NULL;
 
-	if (rxe_cq_post(qp->rcq, &cqe, pkt ? bth_se(pkt) : 1))
+	/* either qp or srq is holding a reference to cq */
+	if (rxe_cq_post(cq, &cqe, pkt ? bth_se(pkt) : 1))
 		return RESPST_ERR_CQ_OVERFLOW;
 
 finish:
@@ -1018,7 +1102,7 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 		return RESPST_CHK_RESOURCE;
 	if (unlikely(!pkt))
 		return RESPST_DONE;
-	if (qp_type(qp) == IB_QPT_RC)
+	if (qp_type(qp) == IB_QPT_RC || qp_type(qp) == IB_QPT_XRC_TGT)
 		return RESPST_ACKNOWLEDGE;
 	else
 		return RESPST_CLEANUP;
@@ -1045,14 +1129,25 @@ static int send_common_ack(struct rxe_qp *qp, u8 syndrome, u32 psn,
 
 static int send_ack(struct rxe_qp *qp, u8 syndrome, u32 psn)
 {
-	return send_common_ack(qp, syndrome, psn,
-			IB_OPCODE_RC_ACKNOWLEDGE, "ACK");
+	int opcode;
+
+	opcode = (qp_type(qp) == IB_QPT_XRC_TGT) ?
+			IB_OPCODE_XRC_ACKNOWLEDGE :
+			IB_OPCODE_RC_ACKNOWLEDGE;
+
+	return send_common_ack(qp, syndrome, psn, opcode, "ACK");
 }
 
 static int send_atomic_ack(struct rxe_qp *qp, u8 syndrome, u32 psn)
 {
-	int ret = send_common_ack(qp, syndrome, psn,
-			IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE, "ATOMIC ACK");
+	int opcode;
+	int ret;
+
+	opcode = (qp_type(qp) == IB_QPT_XRC_TGT) ?
+			IB_OPCODE_XRC_ATOMIC_ACKNOWLEDGE :
+			IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE;
+
+	ret = send_common_ack(qp, syndrome, psn, opcode, "ATOMIC ACK");
 
 	/* have to clear this since it is used to trigger
 	 * long read replies
@@ -1064,7 +1159,7 @@ static int send_atomic_ack(struct rxe_qp *qp, u8 syndrome, u32 psn)
 static enum resp_states acknowledge(struct rxe_qp *qp,
 				    struct rxe_pkt_info *pkt)
 {
-	if (qp_type(qp) != IB_QPT_RC)
+	if (qp_type(qp) != IB_QPT_RC && qp_type(qp) != IB_QPT_XRC_TGT)
 		return RESPST_CLEANUP;
 
 	if (qp->resp.aeth_syndrome != AETH_ACK_UNLIMITED)
@@ -1085,6 +1180,8 @@ static enum resp_states cleanup(struct rxe_qp *qp,
 	if (pkt) {
 		skb = skb_dequeue(&qp->req_pkts);
 		rxe_put(qp);
+		if (pkt->srq)
+			rxe_put(pkt->srq);
 		kfree_skb(skb);
 		ib_device_put(qp->ibqp.device);
 	}
@@ -1350,7 +1447,8 @@ int rxe_responder(void *arg)
 			state = do_class_d1e_error(qp);
 			break;
 		case RESPST_ERR_RNR:
-			if (qp_type(qp) == IB_QPT_RC) {
+			if (qp_type(qp) == IB_QPT_RC ||
+			    qp_type(qp) == IB_QPT_XRC_TGT) {
 				rxe_counter_inc(rxe, RXE_CNT_SND_RNR);
 				/* RC - class B */
 				send_ack(qp, AETH_RNR_NAK |
@@ -1365,7 +1463,8 @@ int rxe_responder(void *arg)
 			break;
 
 		case RESPST_ERR_RKEY_VIOLATION:
-			if (qp_type(qp) == IB_QPT_RC) {
+			if (qp_type(qp) == IB_QPT_RC ||
+			    qp_type(qp) == IB_QPT_XRC_TGT) {
 				/* Class C */
 				do_class_ac_error(qp, AETH_NAK_REM_ACC_ERR,
						  IB_WC_REM_ACCESS_ERR);
@@ -1391,7 +1490,8 @@ int rxe_responder(void *arg)
 			break;
 
 		case RESPST_ERR_LENGTH:
-			if (qp_type(qp) == IB_QPT_RC) {
+			if (qp_type(qp) == IB_QPT_RC ||
+			    qp_type(qp) == IB_QPT_XRC_TGT) {
 				/* Class C */
 				do_class_ac_error(qp, AETH_NAK_INVALID_REQ,
						  IB_WC_REM_INV_REQ_ERR);