From patchwork Fri Apr 21 18:57:12 2017
X-Patchwork-Submitter: Devesh Sharma
X-Patchwork-Id: 9693543
From: Devesh Sharma
To: linux-rdma@vger.kernel.org
Subject: [RESEND rdma-core v4 8/8] libbnxt_re: Add support for atomic operations
Date: Fri, 21 Apr 2017 14:57:12 -0400
Message-Id: <1492801032-17587-9-git-send-email-devesh.sharma@broadcom.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1492801032-17587-1-git-send-email-devesh.sharma@broadcom.com>
References: <1492801032-17587-1-git-send-email-devesh.sharma@broadcom.com>
X-Mailing-List: linux-rdma@vger.kernel.org
This patch adds support for compare-and-swap and fetch-and-add atomic
operations in the user library.

v3->v4
 -- Changed wqe init code to match the DMA ABI.

v1->v2
 -- Fixed the missing "break"
 -- Changed macros to inline functions

Signed-off-by: Sriharsha Basavapatna
Signed-off-by: Somnath Kotur
Signed-off-by: Selvin Xavier
Signed-off-by: Devesh Sharma
---
 providers/bnxt_re/bnxt_re-abi.h |  3 ++-
 providers/bnxt_re/main.h        |  8 ++++++-
 providers/bnxt_re/memory.h      | 10 ++++++++
 providers/bnxt_re/verbs.c       | 52 ++++++++++++++++++++++++++++++++++-------
 4 files changed, 63 insertions(+), 10 deletions(-)

diff --git a/providers/bnxt_re/bnxt_re-abi.h b/providers/bnxt_re/bnxt_re-abi.h
index db11322..205d8c4 100644
--- a/providers/bnxt_re/bnxt_re-abi.h
+++ b/providers/bnxt_re/bnxt_re-abi.h
@@ -54,7 +54,8 @@ enum bnxt_re_wr_opcode {
 	BNXT_RE_WR_OPCD_ATOMIC_FA	= 0x0B,
 	BNXT_RE_WR_OPCD_LOC_INVAL	= 0x0C,
 	BNXT_RE_WR_OPCD_BIND		= 0x0E,
-	BNXT_RE_WR_OPCD_RECV		= 0x80
+	BNXT_RE_WR_OPCD_RECV		= 0x80,
+	BNXT_RE_WR_OPCD_INVAL		= 0xFF
 };
 
 enum bnxt_re_wr_flags {
diff --git a/providers/bnxt_re/main.h b/providers/bnxt_re/main.h
index 1a4dc06..4cc8abd 100644
--- a/providers/bnxt_re/main.h
+++ b/providers/bnxt_re/main.h
@@ -236,9 +236,15 @@ static inline uint8_t bnxt_re_ibv_to_bnxt_wr_opcd(uint8_t ibv_opcd)
 	case IBV_WR_RDMA_READ:
 		bnxt_opcd = BNXT_RE_WR_OPCD_RDMA_READ;
 		break;
+	case IBV_WR_ATOMIC_CMP_AND_SWP:
+		bnxt_opcd = BNXT_RE_WR_OPCD_ATOMIC_CS;
+		break;
+	case IBV_WR_ATOMIC_FETCH_AND_ADD:
+		bnxt_opcd = BNXT_RE_WR_OPCD_ATOMIC_FA;
+		break;
 	/* TODO: Add other opcodes */
 	default:
-		bnxt_opcd = 0xFF;
+		bnxt_opcd = BNXT_RE_WR_OPCD_INVAL;
 		break;
 	};
diff --git a/providers/bnxt_re/memory.h b/providers/bnxt_re/memory.h
index 0150d80..dc5b352 100644
--- a/providers/bnxt_re/memory.h
+++ b/providers/bnxt_re/memory.h
@@ -83,6 +83,16 @@ static inline void iowrite32(__u32 *dst, __le32 *src)
 	*(volatile __le32 *)dst = *src;
 }
 
+static inline __u32 upper_32_bits(uint64_t n)
+{
+	return (__u32)((n >> 16) >> 16);
+}
+
+static inline __u32 lower_32_bits(uint64_t n)
+{
+	return (__u32)(n & 0xFFFFFFFFUL);
+}
+
 /* Basic queue operation */
 static inline uint32_t bnxt_re_is_que_full(struct bnxt_re_queue *que)
 {
diff --git a/providers/bnxt_re/verbs.c b/providers/bnxt_re/verbs.c
index 1f8ff8e..2768a56 100644
--- a/providers/bnxt_re/verbs.c
+++ b/providers/bnxt_re/verbs.c
@@ -1068,6 +1068,8 @@ static int bnxt_re_build_send_sqe(struct bnxt_re_qp *qp, void *wqe,
 
 	/* Fill Header */
 	opcode = bnxt_re_ibv_to_bnxt_wr_opcd(wr->opcode);
+	if (opcode == BNXT_RE_WR_OPCD_INVAL)
+		return -EINVAL;
 	hdrval = (opcode & BNXT_RE_HDR_WT_MASK);
 
 	if (is_inline) {
@@ -1115,6 +1117,39 @@ static int bnxt_re_build_rdma_sqe(struct bnxt_re_qp *qp, void *wqe,
 	return len;
 }
 
+static int bnxt_re_build_cns_sqe(struct bnxt_re_qp *qp, void *wqe,
+				 struct ibv_send_wr *wr)
+{
+	struct bnxt_re_bsqe *hdr = wqe;
+	struct bnxt_re_atomic *sqe = ((void *)wqe +
+				      sizeof(struct bnxt_re_bsqe));
+	int len;
+
+	len = bnxt_re_build_send_sqe(qp, wqe, wr, false);
+	hdr->key_immd = htole32(wr->wr.atomic.rkey);
+	sqe->rva = htole64(wr->wr.atomic.remote_addr);
+	sqe->cmp_dt = htole64(wr->wr.atomic.compare_add);
+	sqe->swp_dt = htole64(wr->wr.atomic.swap);
+
+	return len;
+}
+
+static int bnxt_re_build_fna_sqe(struct bnxt_re_qp *qp, void *wqe,
+				 struct ibv_send_wr *wr)
+{
+	struct bnxt_re_bsqe *hdr = wqe;
+	struct bnxt_re_atomic *sqe = ((void *)wqe +
+				      sizeof(struct bnxt_re_bsqe));
+	int len;
+
+	len = bnxt_re_build_send_sqe(qp, wqe, wr, false);
+	hdr->key_immd = htole32(wr->wr.atomic.rkey);
+	sqe->rva = htole64(wr->wr.atomic.remote_addr);
+	sqe->cmp_dt = htole64(wr->wr.atomic.compare_add);
+
+	return len;
+}
+
 int bnxt_re_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 		      struct ibv_send_wr **bad)
 {
@@ -1168,27 +1203,28 @@ int bnxt_re_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 		else
 			bytes = bnxt_re_build_send_sqe(qp, sqe, wr,
 						       is_inline);
-		if (bytes < 0)
-			ret = (bytes == -EINVAL) ? EINVAL : ENOMEM;
 		break;
 	case IBV_WR_RDMA_WRITE_WITH_IMM:
 		hdr->key_immd = htole32(be32toh(wr->imm_data));
 	case IBV_WR_RDMA_WRITE:
 		bytes = bnxt_re_build_rdma_sqe(qp, sqe, wr, is_inline);
-		if (bytes < 0)
-			ret = ENOMEM;
 		break;
 	case IBV_WR_RDMA_READ:
 		bytes = bnxt_re_build_rdma_sqe(qp, sqe, wr, false);
-		if (bytes < 0)
-			ret = ENOMEM;
+		break;
+	case IBV_WR_ATOMIC_CMP_AND_SWP:
+		bytes = bnxt_re_build_cns_sqe(qp, sqe, wr);
+		break;
+	case IBV_WR_ATOMIC_FETCH_AND_ADD:
+		bytes = bnxt_re_build_fna_sqe(qp, sqe, wr);
 		break;
 	default:
-		ret = EINVAL;
+		bytes = -EINVAL;
 		break;
 	}
 
-	if (ret) {
+	if (bytes < 0) {
+		ret = (bytes == -EINVAL) ? EINVAL : ENOMEM;
 		*bad = wr;
 		break;
 	}
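
For reviewers who want to exercise the new path, below is a minimal, illustrative
user-space sketch (not part of the patch) of how an application would post the two
atomic operations through libibverbs once this provider support is in place. It
assumes an RC queue pair already in RTS, an 8-byte registered buffer that receives
the original remote value, and a remote address/rkey exchanged out of band; the
helper names post_cmp_and_swap()/post_fetch_and_add() are invented for the example.

#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

/* Post an 8-byte compare-and-swap to remote_addr (must be 8-byte aligned). */
static int post_cmp_and_swap(struct ibv_qp *qp, struct ibv_mr *mr,
			     uint64_t *local_buf, uint64_t remote_addr,
			     uint32_t rkey, uint64_t expected, uint64_t desired)
{
	struct ibv_sge sge = {
		.addr   = (uintptr_t)local_buf,	/* receives the prior remote value */
		.length = sizeof(uint64_t),	/* atomics always transfer 8 bytes */
		.lkey   = mr->lkey,
	};
	struct ibv_send_wr wr, *bad_wr;

	memset(&wr, 0, sizeof(wr));
	wr.opcode     = IBV_WR_ATOMIC_CMP_AND_SWP;
	wr.send_flags = IBV_SEND_SIGNALED;
	wr.sg_list    = &sge;
	wr.num_sge    = 1;
	wr.wr.atomic.remote_addr = remote_addr;
	wr.wr.atomic.rkey        = rkey;
	wr.wr.atomic.compare_add = expected;	/* compare value */
	wr.wr.atomic.swap        = desired;	/* written if the compare matches */

	return ibv_post_send(qp, &wr, &bad_wr);
}

/* Post an 8-byte fetch-and-add of add_val to remote_addr. */
static int post_fetch_and_add(struct ibv_qp *qp, struct ibv_mr *mr,
			      uint64_t *local_buf, uint64_t remote_addr,
			      uint32_t rkey, uint64_t add_val)
{
	struct ibv_sge sge = {
		.addr   = (uintptr_t)local_buf,
		.length = sizeof(uint64_t),
		.lkey   = mr->lkey,
	};
	struct ibv_send_wr wr, *bad_wr;

	memset(&wr, 0, sizeof(wr));
	wr.opcode     = IBV_WR_ATOMIC_FETCH_AND_ADD;
	wr.send_flags = IBV_SEND_SIGNALED;
	wr.sg_list    = &sge;
	wr.num_sge    = 1;
	wr.wr.atomic.remote_addr = remote_addr;
	wr.wr.atomic.rkey        = rkey;
	wr.wr.atomic.compare_add = add_val;	/* value added to the remote word */

	return ibv_post_send(qp, &wr, &bad_wr);
}

Completions for these work requests are reported on the send CQ as
IBV_WC_COMP_SWAP and IBV_WC_FETCH_ADD respectively, and the pre-operation remote
value is written into the local buffer referenced by the SGE.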