From patchwork Tue Sep 27 05:53:27 2022
From: Li Zhijian <lizhijian@fujitsu.com>
To: Bob Pearson, Leon Romanovsky, Jason Gunthorpe, linux-rdma@vger.kernel.org
Cc: Zhu Yanjun, yangx.jy@fujitsu.com, y-goto@fujitsu.com, mbloch@nvidia.com,
    liangwenpeng@huawei.com, tom@talpey.com, tomasz.gromadzki@intel.com,
    dan.j.williams@intel.com, linux-kernel@vger.kernel.org
Subject: [for-next PATCH v5 01/11] RDMA/rxe: make sure requested access is a
 subset of {mr,mw}->access
Date: Tue, 27 Sep 2022 13:53:27 +0800
Message-Id: <20220927055337.22630-2-lizhijian@fujitsu.com>
In-Reply-To: <20220927055337.22630-1-lizhijian@fujitsu.com>
References: <20220927055337.22630-1-lizhijian@fujitsu.com>

We should reject requests whose access flags are not a subset of the
access flags with which the MR/MW was registered. For example,
lookup_mr() should return NULL when the requested access is 0x03 and
mr->access is 0x01.
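As a standalone illustration (not part of the patch), the difference
between the old and the new test:

  /* Sketch: old vs. new access check; requested = 0x03, registered = 0x01. */
  #include <stdio.h>

  int main(void)
  {
      int requested = 0x03;   /* two access bits requested */
      int registered = 0x01;  /* only one of them was registered */

      /* old test: accepts as long as any single bit overlaps */
      int old_ok = !(requested && !(requested & registered));
      /* new test: accepts only if every requested bit was registered */
      int new_ok = (requested & registered) == requested;

      printf("old: %d new: %d\n", old_ok, new_ok); /* prints "old: 1 new: 0" */
      return 0;
  }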
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 2 +-
 drivers/infiniband/sw/rxe/rxe_mw.c | 3 +--
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 502e9ada99b3..74a38d06332f 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -511,7 +511,7 @@ struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
     if (unlikely((type == RXE_LOOKUP_LOCAL && mr->lkey != key) ||
              (type == RXE_LOOKUP_REMOTE && mr->rkey != key) ||
-             mr_pd(mr) != pd || (access && !(access & mr->access)) ||
+             mr_pd(mr) != pd || ((access & mr->access) != access) ||
              mr->state != RXE_MR_STATE_VALID)) {
         rxe_put(mr);
         mr = NULL;
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 902b7df7aaed..8df1c9066ed8 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -293,8 +293,7 @@ struct rxe_mw *rxe_lookup_mw(struct rxe_qp *qp, int access, u32 rkey)
     if (unlikely((mw->rkey != rkey) ||
              rxe_mw_pd(mw) != pd ||
              (mw->ibmw.type == IB_MW_TYPE_2 && mw->qp != qp) ||
-             (mw->length == 0) ||
-             (access && !(access & mw->access)) ||
+             (mw->length == 0) || ((access & mw->access) != access) ||
              mw->state != RXE_MW_STATE_VALID)) {
         rxe_put(mw);
         return NULL;

From patchwork Tue Sep 27 05:53:28 2022
From: Li Zhijian <lizhijian@fujitsu.com>
To: Bob Pearson, Leon Romanovsky, Jason Gunthorpe, linux-rdma@vger.kernel.org
Cc: Zhu Yanjun, yangx.jy@fujitsu.com, y-goto@fujitsu.com, mbloch@nvidia.com,
    liangwenpeng@huawei.com, tom@talpey.com, tomasz.gromadzki@intel.com,
    dan.j.williams@intel.com, linux-kernel@vger.kernel.org
Subject: [for-next PATCH v5 02/11] RDMA: Extend RDMA user ABI to support flush
Date: Tue, 27 Sep 2022 13:53:28 +0800
Message-Id: <20220927055337.22630-3-lizhijian@fujitsu.com>
In-Reply-To: <20220927055337.22630-1-lizhijian@fujitsu.com>
References: <20220927055337.22630-1-lizhijian@fujitsu.com>

This commit extends the RDMA user ABI to support the flush operation
defined in IBA A19.4.1. These changes are backwards compatible with
the existing RDMA user ABI.

Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
---
V5: new names and new patch split scheme, suggested by Bob
---
 include/uapi/rdma/ib_user_ioctl_verbs.h |  2 ++
 include/uapi/rdma/ib_user_verbs.h       | 16 ++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/include/uapi/rdma/ib_user_ioctl_verbs.h b/include/uapi/rdma/ib_user_ioctl_verbs.h
index 7dd56210226f..07b105e22f6f 100644
--- a/include/uapi/rdma/ib_user_ioctl_verbs.h
+++ b/include/uapi/rdma/ib_user_ioctl_verbs.h
@@ -57,6 +57,8 @@ enum ib_uverbs_access_flags {
     IB_UVERBS_ACCESS_ZERO_BASED = 1 << 5,
     IB_UVERBS_ACCESS_ON_DEMAND = 1 << 6,
     IB_UVERBS_ACCESS_HUGETLB = 1 << 7,
+    IB_UVERBS_ACCESS_FLUSH_GLOBAL = 1 << 8,
+    IB_UVERBS_ACCESS_FLUSH_PERSISTENT = 1 << 9,

     IB_UVERBS_ACCESS_RELAXED_ORDERING = IB_UVERBS_ACCESS_OPTIONAL_FIRST,
     IB_UVERBS_ACCESS_OPTIONAL_RANGE =
diff --git a/include/uapi/rdma/ib_user_verbs.h b/include/uapi/rdma/ib_user_verbs.h
index 43672cb1fd57..2d5f32d9d0d9 100644
--- a/include/uapi/rdma/ib_user_verbs.h
+++ b/include/uapi/rdma/ib_user_verbs.h
@@ -105,6 +105,18 @@ enum {
     IB_USER_VERBS_EX_CMD_MODIFY_CQ
 };

+/* see IBA A19.4.1.1 Placement Types */
+enum ib_placement_type {
+    IB_FLUSH_GLOBAL = 1U << 0,
+    IB_FLUSH_PERSISTENT = 1U << 1,
+};
+
+/* see IBA A19.4.1.2 Selectivity Level */
+enum ib_selectivity_level {
+    IB_FLUSH_RANGE = 0,
+    IB_FLUSH_MR,
+};
+
 /*
  * Make sure that all structs defined in this file remain laid out so
  * that they pack the same way on 32-bit and 64-bit architectures (to
@@ -466,6 +478,7 @@ enum ib_uverbs_wc_opcode {
     IB_UVERBS_WC_BIND_MW = 5,
     IB_UVERBS_WC_LOCAL_INV = 6,
     IB_UVERBS_WC_TSO = 7,
+    IB_UVERBS_WC_FLUSH = 8,
 };

 struct ib_uverbs_wc {
@@ -784,6 +797,7 @@ enum ib_uverbs_wr_opcode {
     IB_UVERBS_WR_RDMA_READ_WITH_INV = 11,
     IB_UVERBS_WR_MASKED_ATOMIC_CMP_AND_SWP = 12,
     IB_UVERBS_WR_MASKED_ATOMIC_FETCH_AND_ADD = 13,
+    IB_UVERBS_WR_FLUSH = 14,
     /* Review enum ib_wr_opcode before modifying this */
 };

@@ -1331,6 +1345,8 @@ enum ib_uverbs_device_cap_flags {
     /* Deprecated. Please use IB_UVERBS_RAW_PACKET_CAP_SCATTER_FCS. */
     IB_UVERBS_DEVICE_RAW_SCATTER_FCS = 1ULL << 34,
     IB_UVERBS_DEVICE_PCI_WRITE_END_PADDING = 1ULL << 36,
+    IB_UVERBS_DEVICE_FLUSH_GLOBAL = 1ULL << 38,
+    IB_UVERBS_DEVICE_FLUSH_PERSISTENT = 1ULL << 39,
 };

 enum ib_uverbs_raw_packet_caps {
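As a quick illustration of how the two new enums compose (a sketch, not
part of this patch; the wire encoding is added later in this series):
the placement type is a bitmap, while the selectivity level is a single
value.

  /* Sketch: composing the new flush ABI constants. */
  unsigned int plt = IB_FLUSH_GLOBAL | IB_FLUSH_PERSISTENT; /* bitmap, == 0x3 */
  unsigned int sel = IB_FLUSH_RANGE; /* one value: flush just a va/length range */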

From patchwork Tue Sep 27 05:53:29 2022
From: Li Zhijian <lizhijian@fujitsu.com>
To: Bob Pearson, Leon Romanovsky, Jason Gunthorpe, linux-rdma@vger.kernel.org
Cc: Zhu Yanjun, yangx.jy@fujitsu.com, y-goto@fujitsu.com, mbloch@nvidia.com,
    liangwenpeng@huawei.com, tom@talpey.com, tomasz.gromadzki@intel.com,
    dan.j.williams@intel.com, linux-kernel@vger.kernel.org
Subject: [for-next PATCH v5 03/11] RDMA: Extend RDMA kernel verbs ABI to
 support flush
Date: Tue, 27 Sep 2022 13:53:29 +0800
Message-Id: <20220927055337.22630-4-lizhijian@fujitsu.com>
In-Reply-To: <20220927055337.22630-1-lizhijian@fujitsu.com>
References: <20220927055337.22630-1-lizhijian@fujitsu.com>

This commit extends the RDMA kernel verbs ABI to support the flush
operation defined in IBA A19.4.1. These changes are backwards
compatible with the existing RDMA kernel verbs ABI.

It lets a device/HCA advertise the new FLUSH attributes/capabilities,
and it lets a memory region be registered with the new FLUSH access
flags. Users can use ibv_reg_mr(3) to register the flush access flags;
only access flags that are also supported by the device's capabilities
can be registered successfully. Once registered successfully, the MR is
flushable. A flushable MR carries one or both of the GLOBAL_VISIBILITY
and PERSISTENT attributes/capabilities, just like the device/HCA.

Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
---
V5: new names and new patch split scheme, suggested by Bob
---
 include/rdma/ib_pack.h  |  3 +++
 include/rdma/ib_verbs.h | 20 +++++++++++++++++++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/include/rdma/ib_pack.h b/include/rdma/ib_pack.h
index a9162f25beaf..56211d1cc9f9 100644
--- a/include/rdma/ib_pack.h
+++ b/include/rdma/ib_pack.h
@@ -84,6 +84,7 @@ enum {
     /* opcode 0x15 is reserved */
     IB_OPCODE_SEND_LAST_WITH_INVALIDATE = 0x16,
     IB_OPCODE_SEND_ONLY_WITH_INVALIDATE = 0x17,
+    IB_OPCODE_FLUSH = 0x1C,

     /* real constants follow -- see comment about above
        IB_OPCODE() macro for more details */
@@ -112,6 +113,7 @@ enum {
     IB_OPCODE(RC, FETCH_ADD),
     IB_OPCODE(RC, SEND_LAST_WITH_INVALIDATE),
     IB_OPCODE(RC, SEND_ONLY_WITH_INVALIDATE),
+    IB_OPCODE(RC, FLUSH),

     /* UC */
     IB_OPCODE(UC, SEND_FIRST),
@@ -149,6 +151,7 @@ enum {
     IB_OPCODE(RD, ATOMIC_ACKNOWLEDGE),
     IB_OPCODE(RD, COMPARE_SWAP),
     IB_OPCODE(RD, FETCH_ADD),
+    IB_OPCODE(RD, FLUSH),

     /* UD */
     IB_OPCODE(UD, SEND_ONLY),
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 975d6e9efbcb..571838dd06eb 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -270,6 +270,9 @@ enum ib_device_cap_flags {
     /* The device supports padding incoming writes to cacheline. */
     IB_DEVICE_PCI_WRITE_END_PADDING =
             IB_UVERBS_DEVICE_PCI_WRITE_END_PADDING,
+    /* Placement type attributes */
+    IB_DEVICE_FLUSH_GLOBAL = IB_UVERBS_DEVICE_FLUSH_GLOBAL,
+    IB_DEVICE_FLUSH_PERSISTENT = IB_UVERBS_DEVICE_FLUSH_PERSISTENT,
 };

 enum ib_kernel_cap_flags {
@@ -985,6 +988,7 @@ enum ib_wc_opcode {
     IB_WC_REG_MR,
     IB_WC_MASKED_COMP_SWAP,
     IB_WC_MASKED_FETCH_ADD,
+    IB_WC_FLUSH = IB_UVERBS_WC_FLUSH,
     /*
      * Set value of IB_WC_RECV so consumers can test if a completion is a
      * receive by testing (opcode & IB_WC_RECV).
      */
@@ -1325,6 +1329,7 @@ enum ib_wr_opcode {
         IB_UVERBS_WR_MASKED_ATOMIC_CMP_AND_SWP,
     IB_WR_MASKED_ATOMIC_FETCH_AND_ADD =
         IB_UVERBS_WR_MASKED_ATOMIC_FETCH_AND_ADD,
+    IB_WR_FLUSH = IB_UVERBS_WR_FLUSH,

     /* These are kernel only and can not be issued by userspace */
     IB_WR_REG_MR = 0x20,
@@ -1458,10 +1463,14 @@ enum ib_access_flags {
     IB_ACCESS_ON_DEMAND = IB_UVERBS_ACCESS_ON_DEMAND,
     IB_ACCESS_HUGETLB = IB_UVERBS_ACCESS_HUGETLB,
     IB_ACCESS_RELAXED_ORDERING = IB_UVERBS_ACCESS_RELAXED_ORDERING,
+    IB_ACCESS_FLUSH_GLOBAL = IB_UVERBS_ACCESS_FLUSH_GLOBAL,
+    IB_ACCESS_FLUSH_PERSISTENT = IB_UVERBS_ACCESS_FLUSH_PERSISTENT,
+    IB_ACCESS_FLUSHABLE = IB_ACCESS_FLUSH_GLOBAL |
+                  IB_ACCESS_FLUSH_PERSISTENT,

     IB_ACCESS_OPTIONAL = IB_UVERBS_ACCESS_OPTIONAL_RANGE,
     IB_ACCESS_SUPPORTED =
-        ((IB_ACCESS_HUGETLB << 1) - 1) | IB_ACCESS_OPTIONAL,
+        ((IB_ACCESS_FLUSH_PERSISTENT << 1) - 1) | IB_ACCESS_OPTIONAL,
 };

 /*
@@ -4321,6 +4330,8 @@ int ib_dealloc_xrcd_user(struct ib_xrcd *xrcd, struct ib_udata *udata);
 static inline int ib_check_mr_access(struct ib_device *ib_dev,
                      unsigned int flags)
 {
+    u64 device_cap = ib_dev->attrs.device_cap_flags;
+
     /*
      * Local write permission is required if remote write or
      * remote atomic permission is also requested.
@@ -4335,6 +4346,13 @@ static inline int ib_check_mr_access(struct ib_device *ib_dev,
     if (flags & IB_ACCESS_ON_DEMAND &&
         !(ib_dev->attrs.kernel_cap_flags & IBK_ON_DEMAND_PAGING))
         return -EINVAL;
+
+    if ((flags & IB_ACCESS_FLUSH_GLOBAL &&
+        !(device_cap & IB_DEVICE_FLUSH_GLOBAL)) ||
+        (flags & IB_ACCESS_FLUSH_PERSISTENT &&
+        !(device_cap & IB_DEVICE_FLUSH_PERSISTENT)))
+        return -EINVAL;
+
     return 0;
 }
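A usage sketch of the new gating in ib_check_mr_access() (the caller
and the particular flag mix are hypothetical):

  /* Sketch: flush access flags are refused unless the device
   * advertises the matching capability.
   */
  static int try_register_flushable(struct ib_device *dev)
  {
      unsigned int flags = IB_ACCESS_LOCAL_WRITE |
                           IB_ACCESS_REMOTE_WRITE |
                           IB_ACCESS_FLUSH_PERSISTENT;

      /* returns -EINVAL unless dev->attrs.device_cap_flags
       * includes IB_DEVICE_FLUSH_PERSISTENT
       */
      return ib_check_mr_access(dev, flags);
  }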

From patchwork Tue Sep 27 05:53:30 2022
From: Li Zhijian <lizhijian@fujitsu.com>
To: Bob Pearson, Leon Romanovsky, Jason Gunthorpe, linux-rdma@vger.kernel.org
Cc: Zhu Yanjun, yangx.jy@fujitsu.com, y-goto@fujitsu.com, mbloch@nvidia.com,
    liangwenpeng@huawei.com, tom@talpey.com, tomasz.gromadzki@intel.com,
    dan.j.williams@intel.com, linux-kernel@vger.kernel.org
Subject: [for-next PATCH v5 04/11] RDMA/rxe: Extend rxe user ABI to support
 flush
Date: Tue, 27 Sep 2022 13:53:30 +0800
Message-Id: <20220927055337.22630-5-lizhijian@fujitsu.com>
In-Reply-To: <20220927055337.22630-1-lizhijian@fujitsu.com>
References: <20220927055337.22630-1-lizhijian@fujitsu.com>

This commit extends the rxe user ABI to support the flush operation
defined in IBA A19.4.1. These changes are backwards compatible with
the existing rxe user ABI.

The user API requests a flush by filling in this structure.

Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
---
V5: new patch split scheme, suggested by Bob
---
 include/uapi/rdma/rdma_user_rxe.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/include/uapi/rdma/rdma_user_rxe.h b/include/uapi/rdma/rdma_user_rxe.h
index 73f679dfd2df..e2b93df94590 100644
--- a/include/uapi/rdma/rdma_user_rxe.h
+++ b/include/uapi/rdma/rdma_user_rxe.h
@@ -82,6 +82,13 @@ struct rxe_send_wr {
             __u32 invalidate_rkey;
         } ex;
     union {
+        struct {
+            __aligned_u64 remote_addr;
+            __u32 length;
+            __u32 rkey;
+            __u8 type;
+            __u8 level;
+        } flush;
         struct {
             __aligned_u64 remote_addr;
             __u32 rkey;
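A sketch of how userspace might fill the new union member when building
a flush work request (remote_va, remote_len and rkey are placeholders):

  /* Sketch: populating the flush member of struct rxe_send_wr. */
  struct rxe_send_wr wr = {0};

  wr.opcode = IB_UVERBS_WR_FLUSH;          /* opcode added in patch 02 */
  wr.wr.flush.remote_addr = remote_va;     /* start of the region to flush */
  wr.wr.flush.length = remote_len;         /* unused when flushing the whole MR */
  wr.wr.flush.rkey = rkey;                 /* rkey of the target MR/MW */
  wr.wr.flush.type = IB_FLUSH_PERSISTENT;  /* placement type bitmap */
  wr.wr.flush.level = IB_FLUSH_RANGE;      /* selectivity level */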

From patchwork Tue Sep 27 05:53:31 2022
From: Li Zhijian <lizhijian@fujitsu.com>
To: Bob Pearson, Leon Romanovsky, Jason Gunthorpe, linux-rdma@vger.kernel.org
Cc: Zhu Yanjun, yangx.jy@fujitsu.com, y-goto@fujitsu.com, mbloch@nvidia.com,
    liangwenpeng@huawei.com, tom@talpey.com, tomasz.gromadzki@intel.com,
    dan.j.williams@intel.com, linux-kernel@vger.kernel.org
Subject: [for-next PATCH v5 05/11] RDMA/rxe: Allow registering persistent flag
 for pmem MR only
Date: Tue, 27 Sep 2022 13:53:31 +0800
Message-Id: <20220927055337.22630-6-lizhijian@fujitsu.com>
In-Reply-To: <20220927055337.22630-1-lizhijian@fujitsu.com>
References: <20220927055337.22630-1-lizhijian@fujitsu.com>

A memory region can support at most two flush access flags:
IB_ACCESS_FLUSH_PERSISTENT and IB_ACCESS_FLUSH_GLOBAL.

However, we only allow the user to register the persistent flush flag
on a pmem MR, which has the ability to persist data across power
cycles. Registering a persistent access flag on a non-pmem MR will
therefore be rejected.
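For example (a hypothetical userspace caller; the userspace flag name
is an assumption here), registering plain DRAM with the persistent
flush flag now fails:

  /* Sketch: expected behaviour after this patch. */
  void *dram = malloc(len);                /* ordinary, non-pmem memory */
  struct ibv_mr *mr = ibv_reg_mr(pd, dram, len,
                                 IBV_ACCESS_LOCAL_WRITE |
                                 IBV_ACCESS_FLUSH_PERSISTENT); /* assumed name */
  /* mr == NULL, errno == EINVAL: the MR is not entirely backed by pmem */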
CC: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
---
V5: make sure the whole MR is pmem
V4: set is_pmem more simply
V2: new scheme to check is_pmem # Dan
    update commit message, get rid of confusing ib_check_flush_access_flags() # Tom
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 74a38d06332f..1da3ad5eba64 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -112,6 +112,13 @@ void rxe_mr_init_dma(int access, struct rxe_mr *mr)
     mr->type = IB_MR_TYPE_DMA;
 }

+static bool vaddr_in_pmem(char *vaddr)
+{
+    return REGION_INTERSECTS ==
+           region_intersects(virt_to_phys(vaddr), 1, IORESOURCE_MEM,
+                             IORES_DESC_PERSISTENT_MEMORY);
+}
+
 int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
                      int access, struct rxe_mr *mr)
 {
@@ -122,6 +129,7 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
     int num_buf;
     void *vaddr;
     int err;
+    bool is_pmem = false;
     int i;

     umem = ib_umem_get(&rxe->ib_dev, start, length, access);
@@ -149,6 +157,7 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
     num_buf = 0;
     map = mr->map;
     if (length > 0) {
+        is_pmem = true;
         buf = map[0]->buf;

         for_each_sgtable_page (&umem->sgt_append.sgt, &sg_iter, 0) {
@@ -166,6 +175,10 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
                 goto err_cleanup_map;
             }

+            /* True only if the *whole* MR is pmem */
+            if (is_pmem)
+                is_pmem = vaddr_in_pmem(vaddr);
+
             buf->addr = (uintptr_t)vaddr;
             buf->size = PAGE_SIZE;
             num_buf++;
@@ -174,6 +187,12 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
         }
     }

+    if (!is_pmem && access & IB_ACCESS_FLUSH_PERSISTENT) {
+        pr_warn("Cannot register IB_ACCESS_FLUSH_PERSISTENT for non-pmem memory\n");
+        err = -EINVAL;
+        goto err_release_umem;
+    }
+
     mr->umem = umem;
     mr->access = access;
     mr->offset = ib_umem_offset(umem);

From patchwork Tue Sep 27 05:53:32 2022
From: Li Zhijian <lizhijian@fujitsu.com>
To: Bob Pearson, Leon Romanovsky, Jason Gunthorpe, linux-rdma@vger.kernel.org
Cc: Zhu Yanjun, yangx.jy@fujitsu.com, y-goto@fujitsu.com, mbloch@nvidia.com,
    liangwenpeng@huawei.com, tom@talpey.com, tomasz.gromadzki@intel.com,
    dan.j.williams@intel.com, linux-kernel@vger.kernel.org
Subject: [for-next PATCH v5 06/11] RDMA/rxe: Extend rxe packet format to
 support flush
Date: Tue, 27 Sep 2022 13:53:32 +0800
Message-Id: <20220927055337.22630-7-lizhijian@fujitsu.com>
In-Reply-To: <20220927055337.22630-1-lizhijian@fujitsu.com>
References: <20220927055337.22630-1-lizhijian@fujitsu.com>

Extend the rxe opcode tables, headers, helpers and constants to support
flush operations. Refer to IBA A19.4.1 for more details of the FETH
definition.

Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
---
V5: new FETH structure and simplified header helper
    new names and new patch split scheme, suggested by Bob
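For reference, the FETH is a single big-endian 32-bit word; the layout
implied by the masks in this patch is sketched below.

  /* Sketch: FETH bit layout.
   *   bits 31..6  reserved
   *   bits  5..4  SEL - selectivity level
   *   bits  3..0  PLT - placement type bitmap
   * e.g. type = IB_FLUSH_PERSISTENT (0x2), level = IB_FLUSH_RANGE (0):
   */
  u32 bits = ((IB_FLUSH_RANGE << 4) & 0x30) | (IB_FLUSH_PERSISTENT & 0xf);
  /* bits == 0x2, sent on the wire as the big-endian word 00 00 00 02 */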
---
 drivers/infiniband/sw/rxe/rxe_hdr.h    | 47 ++++++++++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_opcode.c | 17 ++++++++++
 drivers/infiniband/sw/rxe/rxe_opcode.h | 16 +++++----
 3 files changed, 74 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_hdr.h b/drivers/infiniband/sw/rxe/rxe_hdr.h
index e432f9e37795..e995a97c54fd 100644
--- a/drivers/infiniband/sw/rxe/rxe_hdr.h
+++ b/drivers/infiniband/sw/rxe/rxe_hdr.h
@@ -607,6 +607,52 @@ static inline void reth_set_len(struct rxe_pkt_info *pkt, u32 len)
         rxe_opcode[pkt->opcode].offset[RXE_RETH], len);
 }

+/******************************************************************************
+ * FLUSH Extended Transport Header
+ ******************************************************************************/
+
+struct rxe_feth {
+    __be32 bits;
+};
+
+#define FETH_PLT_MASK  (0x0000000f) /* bits 3-0 */
+#define FETH_SEL_MASK  (0x00000030) /* bits 5-4 */
+#define FETH_SEL_SHIFT (4U)
+
+static inline u32 __feth_plt(void *arg)
+{
+    struct rxe_feth *feth = arg;
+
+    return be32_to_cpu(feth->bits) & FETH_PLT_MASK;
+}
+
+static inline u32 __feth_sel(void *arg)
+{
+    struct rxe_feth *feth = arg;
+
+    return (be32_to_cpu(feth->bits) & FETH_SEL_MASK) >> FETH_SEL_SHIFT;
+}
+
+static inline u32 feth_plt(struct rxe_pkt_info *pkt)
+{
+    return __feth_plt(pkt->hdr + rxe_opcode[pkt->opcode].offset[RXE_FETH]);
+}
+
+static inline u32 feth_sel(struct rxe_pkt_info *pkt)
+{
+    return __feth_sel(pkt->hdr + rxe_opcode[pkt->opcode].offset[RXE_FETH]);
+}
+
+static inline void feth_init(struct rxe_pkt_info *pkt, u8 type, u8 level)
+{
+    struct rxe_feth *feth = (struct rxe_feth *)
+            (pkt->hdr + rxe_opcode[pkt->opcode].offset[RXE_FETH]);
+    u32 bits = ((level << FETH_SEL_SHIFT) & FETH_SEL_MASK) |
+               (type & FETH_PLT_MASK);
+
+    feth->bits = cpu_to_be32(bits);
+}
+
 /******************************************************************************
  * Atomic Extended Transport Header
  ******************************************************************************/
@@ -910,6 +956,7 @@ enum rxe_hdr_length {
     RXE_ATMETH_BYTES = sizeof(struct rxe_atmeth),
     RXE_IETH_BYTES   = sizeof(struct rxe_ieth),
     RXE_RDETH_BYTES  = sizeof(struct rxe_rdeth),
+    RXE_FETH_BYTES   = sizeof(struct rxe_feth),
 };

 static inline size_t header_size(struct rxe_pkt_info *pkt)
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
index d4ba4d506f17..55aad13e57bb 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
@@ -101,6 +101,12 @@ struct rxe_wr_opcode_info rxe_wr_opcode_info[] = {
             [IB_QPT_UC] = WR_LOCAL_OP_MASK,
         },
     },
+    [IB_WR_FLUSH] = {
+        .name = "IB_WR_FLUSH",
+        .mask = {
+            [IB_QPT_RC] = WR_FLUSH_MASK,
+        },
+    },
 };

 struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
@@ -378,6 +384,17 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
             RXE_IETH_BYTES,
         }
     },
+    [IB_OPCODE_RC_FLUSH] = {
+        .name = "IB_OPCODE_RC_FLUSH",
+        .mask = RXE_FETH_MASK | RXE_RETH_MASK | RXE_FLUSH_MASK |
+                RXE_START_MASK | RXE_END_MASK | RXE_REQ_MASK,
+        .length = RXE_BTH_BYTES + RXE_FETH_BYTES + RXE_RETH_BYTES,
+        .offset = {
+            [RXE_BTH]  = 0,
+            [RXE_FETH] = RXE_BTH_BYTES,
+            [RXE_RETH] = RXE_BTH_BYTES + RXE_FETH_BYTES,
+        }
+    },

     /* UC */
     [IB_OPCODE_UC_SEND_FIRST] = {
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.h b/drivers/infiniband/sw/rxe/rxe_opcode.h
index 8f9aaaf260f2..02d256745793 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.h
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.h
@@ -19,7 +19,8 @@ enum rxe_wr_mask {
     WR_SEND_MASK     = BIT(2),
     WR_READ_MASK     = BIT(3),
     WR_WRITE_MASK    = BIT(4),
-    WR_LOCAL_OP_MASK = BIT(5),
+    WR_FLUSH_MASK    = BIT(5),
+    WR_LOCAL_OP_MASK = BIT(6),

     WR_READ_OR_WRITE_MASK = WR_READ_MASK | WR_WRITE_MASK,
     WR_WRITE_OR_SEND_MASK = WR_WRITE_MASK | WR_SEND_MASK,
@@ -47,6 +48,7 @@ enum rxe_hdr_type {
     RXE_RDETH,
     RXE_DETH,
     RXE_IMMDT,
+    RXE_FETH,
     RXE_PAYLOAD,
     NUM_HDR_TYPES
 };
@@ -63,6 +65,7 @@ enum rxe_hdr_mask {
     RXE_IETH_MASK    = BIT(RXE_IETH),
     RXE_RDETH_MASK   = BIT(RXE_RDETH),
     RXE_DETH_MASK    = BIT(RXE_DETH),
+    RXE_FETH_MASK    = BIT(RXE_FETH),
     RXE_PAYLOAD_MASK = BIT(RXE_PAYLOAD),

     RXE_REQ_MASK     = BIT(NUM_HDR_TYPES + 0),
@@ -71,13 +74,14 @@ enum rxe_hdr_mask {
     RXE_WRITE_MASK   = BIT(NUM_HDR_TYPES + 3),
     RXE_READ_MASK    = BIT(NUM_HDR_TYPES + 4),
     RXE_ATOMIC_MASK  = BIT(NUM_HDR_TYPES + 5),
+    RXE_FLUSH_MASK   = BIT(NUM_HDR_TYPES + 6),

-    RXE_RWR_MASK     = BIT(NUM_HDR_TYPES + 6),
-    RXE_COMP_MASK    = BIT(NUM_HDR_TYPES + 7),
+    RXE_RWR_MASK     = BIT(NUM_HDR_TYPES + 7),
+    RXE_COMP_MASK    = BIT(NUM_HDR_TYPES + 8),

-    RXE_START_MASK   = BIT(NUM_HDR_TYPES + 8),
-    RXE_MIDDLE_MASK  = BIT(NUM_HDR_TYPES + 9),
-    RXE_END_MASK     = BIT(NUM_HDR_TYPES + 10),
+    RXE_START_MASK   = BIT(NUM_HDR_TYPES + 9),
+    RXE_MIDDLE_MASK  = BIT(NUM_HDR_TYPES + 10),
+    RXE_END_MASK     = BIT(NUM_HDR_TYPES + 11),

     RXE_LOOPBACK_MASK = BIT(NUM_HDR_TYPES + 12),

From patchwork Tue Sep 27 05:53:33 2022
From: Li Zhijian <lizhijian@fujitsu.com>
To: Bob Pearson, Leon Romanovsky, Jason Gunthorpe, linux-rdma@vger.kernel.org
Cc: Zhu Yanjun, yangx.jy@fujitsu.com, y-goto@fujitsu.com, mbloch@nvidia.com,
    liangwenpeng@huawei.com, tom@talpey.com, tomasz.gromadzki@intel.com,
    dan.j.williams@intel.com, linux-kernel@vger.kernel.org
Subject: [for-next PATCH v5 07/11] RDMA/rxe: Implement RC RDMA FLUSH service
 in requester side
Date: Tue, 27 Sep 2022 13:53:33 +0800
Message-Id: <20220927055337.22630-8-lizhijian@fujitsu.com>
In-Reply-To: <20220927055337.22630-1-lizhijian@fujitsu.com>
References: <20220927055337.22630-1-lizhijian@fujitsu.com>

Implement the FLUSH request operation in the requester.

Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
---
V4: remove the flush union for the legacy API, add WR_FLUSH_MASK
V3: fix sparse: incorrect type in assignment; Reported-by: kernel test robot
V2: extend flush to include a length field
---
 drivers/infiniband/sw/rxe/rxe_req.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index f63771207970..5996b0e3177a 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -241,6 +241,9 @@ static int next_opcode_rc(struct rxe_qp *qp, u32 opcode, int fits)
             IB_OPCODE_RC_SEND_ONLY_WITH_IMMEDIATE :
             IB_OPCODE_RC_SEND_FIRST;

+    case IB_WR_FLUSH:
+        return IB_OPCODE_RC_FLUSH;
+
     case IB_WR_RDMA_READ:
         return IB_OPCODE_RC_RDMA_READ_REQUEST;

@@ -421,11 +424,18 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,

     /* init optional headers */
     if (pkt->mask & RXE_RETH_MASK) {
-        reth_set_rkey(pkt, ibwr->wr.rdma.rkey);
+        if (pkt->mask & RXE_FETH_MASK)
+            reth_set_rkey(pkt, ibwr->wr.flush.rkey);
+        else
+            reth_set_rkey(pkt, ibwr->wr.rdma.rkey);
         reth_set_va(pkt, wqe->iova);
         reth_set_len(pkt, wqe->dma.resid);
     }

+    /* Fill Flush Extension Transport Header */
+    if (pkt->mask & RXE_FETH_MASK)
+        feth_init(pkt, ibwr->wr.flush.type, ibwr->wr.flush.level);
+
     if (pkt->mask & RXE_IMMDT_MASK)
         immdt_set_imm(pkt, ibwr->ex.imm_data);

@@ -484,6 +494,9 @@ static int finish_packet(struct rxe_qp *qp, struct rxe_av *av,
             memset(pad, 0, bth_pad(pkt));
         }
+    } else if (pkt->mask & RXE_FLUSH_MASK) {
+        /* oA19-2: shall have no payload. */
+        wqe->dma.resid = 0;
     }

     return 0;
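Putting the pieces together: per the opcode table added in the previous
patch, an RC FLUSH request carries BTH + FETH + RETH and, per oA19-2,
no payload. A size sketch (byte counts per the IBA header definitions):

  /* Sketch: transport header size of an RC FLUSH request. */
  size_t hdr_bytes = RXE_BTH_BYTES    /* 12-byte base transport header */
                   + RXE_FETH_BYTES   /*  4-byte FLUSH extended header */
                   + RXE_RETH_BYTES;  /* 16-byte RDMA extended header */
  /* hdr_bytes == 32; finish_packet() above forces the payload to zero */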

From patchwork Tue Sep 27 05:53:34 2022
From: Li Zhijian <lizhijian@fujitsu.com>
To: Bob Pearson, Leon Romanovsky, Jason Gunthorpe, linux-rdma@vger.kernel.org
Cc: Zhu Yanjun, yangx.jy@fujitsu.com, y-goto@fujitsu.com, mbloch@nvidia.com,
    liangwenpeng@huawei.com, tom@talpey.com, tomasz.gromadzki@intel.com,
    dan.j.williams@intel.com, linux-kernel@vger.kernel.org
Subject: [for-next PATCH v5 08/11] RDMA/rxe: Implement flush execution in
 responder side
Date: Tue, 27 Sep 2022 13:53:34 +0800
Message-Id: <20220927055337.22630-9-lizhijian@fujitsu.com>
In-Reply-To: <20220927055337.22630-1-lizhijian@fujitsu.com>
References: <20220927055337.22630-1-lizhijian@fujitsu.com>

Only requested placement types that are also registered in the
destination memory region are acceptable; otherwise the responder
replies with a NAK "Remote Access Error" when it finds a placement
type violation.

We persist data via arch_wb_cache_pmem(), which could be architecture
specific.

This commit also adds two helpers to update qp->resp from the incoming
packet.

Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
---
v5: add QP attr check for flush access
    rename flush_nvdimm_iova -> rxe_flush_pmem_iova()
v4: add send_read_response_ack and flush resource
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |   1 +
 drivers/infiniband/sw/rxe/rxe_mr.c    |  60 +++++++++
 drivers/infiniband/sw/rxe/rxe_resp.c  | 180 ++++++++++++++++++++++----
 drivers/infiniband/sw/rxe/rxe_verbs.h |   6 +
 4 files changed, 225 insertions(+), 22 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index c2a5c8814a48..944d564a11cd 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -68,6 +68,7 @@ void rxe_mr_init_dma(int access, struct rxe_mr *mr);
 int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
                      int access, struct rxe_mr *mr);
 int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr);
+int rxe_flush_pmem_iova(struct rxe_mr *mr, u64 iova, int length);
 int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
                 enum rxe_mr_copy_dir dir);
 int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 1da3ad5eba64..fa7e71074233 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -4,6 +4,8 @@
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
  */

+#include <linux/libnvdimm.h>
+
 #include "rxe.h"
 #include "rxe_loc.h"

@@ -305,6 +307,64 @@ void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
     return addr;
 }

+int rxe_flush_pmem_iova(struct rxe_mr *mr, u64 iova, int length)
+{
+    int err;
+    int bytes;
+    u8 *va;
+    struct rxe_map **map;
+    struct rxe_phys_buf *buf;
+    int m;
+    int i;
+    size_t offset;
+
+    if (length == 0)
+        return 0;
+
+    if (mr->type == IB_MR_TYPE_DMA) {
+        err = -EFAULT;
+        goto err1;
+    }
+
+    err = mr_check_range(mr, iova, length);
+    if (err) {
+        err = -EFAULT;
+        goto err1;
+    }
+
+    lookup_iova(mr, iova, &m, &i, &offset);
+
+    map = mr->map + m;
+    buf = map[0]->buf + i;
+
+    while (length > 0) {
+        va = (u8 *)(uintptr_t)buf->addr + offset;
+        bytes = buf->size - offset;
+
+        if (bytes > length)
+            bytes = length;
+
+        arch_wb_cache_pmem(va, bytes);
+
+        length -= bytes;
+
+        offset = 0;
+        buf++;
+        i++;
+
+        if (i == RXE_BUF_PER_MAP) {
+            i = 0;
+            map++;
+            buf = map[0]->buf;
+        }
+    }
+
+    return 0;
+
+err1:
+    return err;
+}
+
 /* copy data from a range (vaddr, vaddr+length-1) to or from
  * a mr object starting at iova.
  */
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index ed5a09e86417..0b68e5d8e1d2 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -22,6 +22,7 @@ enum resp_states {
     RESPST_EXECUTE,
     RESPST_READ_REPLY,
     RESPST_ATOMIC_REPLY,
+    RESPST_PROCESS_FLUSH,
     RESPST_COMPLETE,
     RESPST_ACKNOWLEDGE,
     RESPST_CLEANUP,
@@ -57,6 +58,7 @@ static char *resp_state_name[] = {
     [RESPST_EXECUTE]        = "EXECUTE",
     [RESPST_READ_REPLY]     = "READ_REPLY",
     [RESPST_ATOMIC_REPLY]   = "ATOMIC_REPLY",
+    [RESPST_PROCESS_FLUSH]  = "PROCESS_FLUSH",
     [RESPST_COMPLETE]       = "COMPLETE",
     [RESPST_ACKNOWLEDGE]    = "ACKNOWLEDGE",
     [RESPST_CLEANUP]        = "CLEANUP",
@@ -253,19 +255,38 @@ static enum resp_states check_op_seq(struct rxe_qp *qp,
     }
 }

+static bool check_qp_attr_access(struct rxe_qp *qp,
+                                 struct rxe_pkt_info *pkt)
+{
+    if (((pkt->mask & RXE_READ_MASK) &&
+         !(qp->attr.qp_access_flags & IB_ACCESS_REMOTE_READ)) ||
+        ((pkt->mask & RXE_WRITE_MASK) &&
+         !(qp->attr.qp_access_flags & IB_ACCESS_REMOTE_WRITE)) ||
+        ((pkt->mask & RXE_ATOMIC_MASK) &&
+         !(qp->attr.qp_access_flags & IB_ACCESS_REMOTE_ATOMIC))) {
+        return false;
+    }
+
+    if (pkt->mask & RXE_FLUSH_MASK) {
+        u32 flush_type = feth_plt(pkt);
+
+        if ((flush_type & IB_FLUSH_GLOBAL &&
+             !(qp->attr.qp_access_flags & IB_ACCESS_FLUSH_GLOBAL)) ||
+            (flush_type & IB_FLUSH_PERSISTENT &&
+             !(qp->attr.qp_access_flags & IB_ACCESS_FLUSH_PERSISTENT)))
+            return false;
+    }
+
+    return true;
+}
+
 static enum resp_states check_op_valid(struct rxe_qp *qp,
                                        struct rxe_pkt_info *pkt)
 {
     switch (qp_type(qp)) {
     case IB_QPT_RC:
-        if (((pkt->mask & RXE_READ_MASK) &&
-             !(qp->attr.qp_access_flags & IB_ACCESS_REMOTE_READ)) ||
-            ((pkt->mask & RXE_WRITE_MASK) &&
-             !(qp->attr.qp_access_flags & IB_ACCESS_REMOTE_WRITE)) ||
-            ((pkt->mask & RXE_ATOMIC_MASK) &&
-             !(qp->attr.qp_access_flags & IB_ACCESS_REMOTE_ATOMIC))) {
+        if (!check_qp_attr_access(qp, pkt))
             return RESPST_ERR_UNSUPPORTED_OPCODE;
-        }

         break;

@@ -402,6 +423,23 @@ static enum resp_states check_length(struct rxe_qp *qp,
     }
 }

+static void qp_resp_from_reth(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+{
+    qp->resp.va = reth_va(pkt);
+    qp->resp.offset = 0;
+    qp->resp.rkey = reth_rkey(pkt);
+    qp->resp.resid = reth_len(pkt);
+    qp->resp.length = reth_len(pkt);
+}
+
+static void qp_resp_from_atmeth(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+{
+    qp->resp.va = atmeth_va(pkt);
+    qp->resp.offset = 0;
+    qp->resp.rkey = atmeth_rkey(pkt);
+    qp->resp.resid = sizeof(u64);
+}
+
 static enum resp_states check_rkey(struct rxe_qp *qp,
                                    struct rxe_pkt_info *pkt)
 {
@@ -413,23 +451,26 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
     u32 pktlen;
     int mtu = qp->mtu;
     enum resp_states state;
-    int access;
+    int access = 0;

     if (pkt->mask & RXE_READ_OR_WRITE_MASK) {
-        if (pkt->mask & RXE_RETH_MASK) {
-            qp->resp.va = reth_va(pkt);
-            qp->resp.offset = 0;
-            qp->resp.rkey = reth_rkey(pkt);
-            qp->resp.resid = reth_len(pkt);
-            qp->resp.length = reth_len(pkt);
-        }
+        if (pkt->mask & RXE_RETH_MASK)
+            qp_resp_from_reth(qp, pkt);
+
+        access = (pkt->mask & RXE_READ_MASK) ? IB_ACCESS_REMOTE_READ
+                                             : IB_ACCESS_REMOTE_WRITE;
+    } else if (pkt->mask & RXE_FLUSH_MASK) {
+        u32 flush_type = feth_plt(pkt);
+
+        if (pkt->mask & RXE_RETH_MASK)
+            qp_resp_from_reth(qp, pkt);
+
+        if (flush_type & IB_FLUSH_GLOBAL)
+            access |= IB_ACCESS_FLUSH_GLOBAL;
+        if (flush_type & IB_FLUSH_PERSISTENT)
+            access |= IB_ACCESS_FLUSH_PERSISTENT;
     } else if (pkt->mask & RXE_ATOMIC_MASK) {
-        qp->resp.va = atmeth_va(pkt);
-        qp->resp.offset = 0;
-        qp->resp.rkey = atmeth_rkey(pkt);
-        qp->resp.resid = sizeof(u64);
+        qp_resp_from_atmeth(qp, pkt);
         access = IB_ACCESS_REMOTE_ATOMIC;
     } else {
         return RESPST_EXECUTE;
@@ -450,7 +491,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
     if (rkey_is_mw(rkey)) {
         mw = rxe_lookup_mw(qp, access, rkey);
         if (!mw) {
-            pr_debug("%s: no MW matches rkey %#x\n",
+            pr_err("%s: no MW matches rkey %#x\n",
                     __func__, rkey);
             state = RESPST_ERR_RKEY_VIOLATION;
             goto err;
@@ -458,7 +499,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp,

         mr = mw->mr;
         if (!mr) {
-            pr_err("%s: MW doesn't have an MR\n", __func__);
+            pr_debug("%s: MW doesn't have an MR\n", __func__);
             state = RESPST_ERR_RKEY_VIOLATION;
             goto err;
         }
@@ -478,12 +519,21 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
         }
     }

+    if (pkt->mask & RXE_FLUSH_MASK) {
+        /* FLUSH MR may not set va or resid
+         * no need to check range since we will flush whole mr
+         */
+        if (feth_sel(pkt) == IB_FLUSH_MR)
+            goto skip_check_range;
+    }
+
     if (mr_check_range(mr, va + qp->resp.offset, resid)) {
         state = RESPST_ERR_RKEY_VIOLATION;
         goto err;
     }

-    if (pkt->mask & RXE_WRITE_MASK) {
+skip_check_range:
+    if (pkt->mask & RXE_WRITE_MASK) {
         if (resid > mtu) {
             if (pktlen != mtu || bth_pad(pkt)) {
                 state = RESPST_ERR_LENGTH;
@@ -587,11 +637,61 @@ static struct resp_res *rxe_prepare_res(struct rxe_qp *qp,
         res->last_psn = pkt->psn;
         res->cur_psn = pkt->psn;
         break;
+    case RXE_FLUSH_MASK:
+        res->flush.va = qp->resp.va + qp->resp.offset;
+        res->flush.length = qp->resp.length;
+        res->flush.type = feth_plt(pkt);
+        res->flush.level = feth_sel(pkt);
     }

     return res;
 }

+static enum resp_states process_flush(struct rxe_qp *qp,
+                                      struct rxe_pkt_info *pkt)
+{
+    u64 length, start;
+    struct rxe_mr *mr = qp->resp.mr;
+    struct resp_res *res = qp->resp.res;
+
+    /* oA19-14, oA19-15 */
+    if (res && res->replay)
+        return RESPST_ACKNOWLEDGE;
+    else if (!res) {
+        res = rxe_prepare_res(qp, pkt, RXE_FLUSH_MASK);
+        qp->resp.res = res;
+    }
+
+    if (res->flush.level == IB_FLUSH_RANGE) {
+        start = res->flush.va;
+        length = res->flush.length;
+    } else { /* level == IB_FLUSH_MR */
+        start = mr->ibmr.iova;
+        length = mr->ibmr.length;
+    }
+
+    if (res->flush.type & IB_FLUSH_PERSISTENT) {
+        if (rxe_flush_pmem_iova(mr, start, length))
+            return RESPST_ERR_RKEY_VIOLATION;
+        /* Make data persistent. */
+        wmb();
+    } else if (res->flush.type & IB_FLUSH_GLOBAL) {
+        /* Make data globally visible. */
+        wmb();
+    }
+
+    qp->resp.msn++;
+
+    /* next expected psn, read handles this separately */
+    qp->resp.psn = (pkt->psn + 1) & BTH_PSN_MASK;
+    qp->resp.ack_psn = qp->resp.psn;
+
+    qp->resp.opcode = pkt->opcode;
+    qp->resp.status = IB_WC_SUCCESS;
+
+    return RESPST_ACKNOWLEDGE;
+}
+
 /* Guarantee atomicity of atomic operations at the machine level. */
 static DEFINE_SPINLOCK(atomic_ops_lock);

@@ -888,6 +988,8 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
         return RESPST_READ_REPLY;
     } else if (pkt->mask & RXE_ATOMIC_MASK) {
         return RESPST_ATOMIC_REPLY;
+    } else if (pkt->mask & RXE_FLUSH_MASK) {
+        return RESPST_PROCESS_FLUSH;
     } else {
         /* Unreachable */
         WARN_ON_ONCE(1);
@@ -1061,6 +1163,19 @@ static int send_atomic_ack(struct rxe_qp *qp, u8 syndrome, u32 psn)
     return ret;
 }

+static int send_read_response_ack(struct rxe_qp *qp, u8 syndrome, u32 psn)
+{
+    int ret = send_common_ack(qp, syndrome, psn,
+                              IB_OPCODE_RC_RDMA_READ_RESPONSE_ONLY,
+                              "RDMA READ response of length zero ACK");
+
+    /* have to clear this since it is used to trigger
+     * long read replies
+     */
+    qp->resp.res = NULL;
+    return ret;
+}
+
 static enum resp_states acknowledge(struct rxe_qp *qp,
                                     struct rxe_pkt_info *pkt)
 {
@@ -1071,6 +1186,8 @@ static enum resp_states acknowledge(struct rxe_qp *qp,
         send_ack(qp, qp->resp.aeth_syndrome, pkt->psn);
     else if (pkt->mask & RXE_ATOMIC_MASK)
         send_atomic_ack(qp, AETH_ACK_UNLIMITED, pkt->psn);
+    else if (pkt->mask & RXE_FLUSH_MASK)
+        send_read_response_ack(qp, AETH_ACK_UNLIMITED, pkt->psn);
     else if (bth_ack(pkt))
         send_ack(qp, AETH_ACK_UNLIMITED, pkt->psn);

@@ -1127,6 +1244,22 @@ static enum resp_states duplicate_request(struct rxe_qp *qp,
         /* SEND. Ack again and cleanup. C9-105. */
         send_ack(qp, AETH_ACK_UNLIMITED, prev_psn);
         return RESPST_CLEANUP;
+    } else if (pkt->mask & RXE_FLUSH_MASK) {
+        struct resp_res *res;
+
+        /* Find the operation in our list of responder resources. */
+        res = find_resource(qp, pkt->psn);
+        if (res) {
+            res->replay = 1;
+            res->cur_psn = pkt->psn;
+            qp->resp.res = res;
+            rc = RESPST_PROCESS_FLUSH;
+            goto out;
+        }
+
+        /* Resource not found. Class D error. Drop the request. */
+        rc = RESPST_CLEANUP;
+        goto out;
     } else if (pkt->mask & RXE_READ_MASK) {
         struct resp_res *res;

@@ -1320,6 +1453,9 @@ int rxe_responder(void *arg)
         case RESPST_ATOMIC_REPLY:
             state = atomic_reply(qp, pkt);
             break;
+        case RESPST_PROCESS_FLUSH:
+            state = process_flush(qp, pkt);
+            break;
         case RESPST_ACKNOWLEDGE:
             state = acknowledge(qp, pkt);
             break;
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 5f5cbfcb3569..4cfe4d8b0aaa 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -165,6 +165,12 @@ struct resp_res {
             u64 va;
             u32 resid;
         } read;
+        struct {
+            u32 length;
+            u64 va;
+            u8 type;
+            u8 level;
+        } flush;
     };
 };
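To summarize the access check added above, a condensed sketch of how the
responder maps FETH placement-type bits onto required MR access bits:

  /* Sketch: placement types -> required MR access (see check_rkey()). */
  u32 flush_type = feth_plt(pkt);
  int access = 0;

  if (flush_type & IB_FLUSH_GLOBAL)
      access |= IB_ACCESS_FLUSH_GLOBAL;
  if (flush_type & IB_FLUSH_PERSISTENT)
      access |= IB_ACCESS_FLUSH_PERSISTENT;

  /* lookup_mr() then requires (access & mr->access) == access (patch 01),
   * so a flush against an MR registered without the matching flush flags
   * fails with RESPST_ERR_RKEY_VIOLATION and a NAK to the requester.
   */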

From patchwork Tue Sep 27 05:53:35 2022
From: Li Zhijian <lizhijian@fujitsu.com>
To: Bob Pearson, Leon Romanovsky, Jason Gunthorpe, linux-rdma@vger.kernel.org
Cc: Zhu Yanjun, yangx.jy@fujitsu.com, y-goto@fujitsu.com, mbloch@nvidia.com,
    liangwenpeng@huawei.com, tom@talpey.com, tomasz.gromadzki@intel.com,
    dan.j.williams@intel.com, linux-kernel@vger.kernel.org
Subject: [for-next PATCH v5 09/11] RDMA/rxe: Implement flush completion
Date: Tue, 27 Sep 2022 13:53:35 +0800
Message-Id: <20220927055337.22630-10-lizhijian@fujitsu.com>
In-Reply-To: <20220927055337.22630-1-lizhijian@fujitsu.com>
References: <20220927055337.22630-1-lizhijian@fujitsu.com>

Per the IBA spec, a FLUSH is acknowledged with a zero-length RDMA READ
response. Use the IB_WC_FLUSH (aka IB_UVERBS_WC_FLUSH) opcode to report
a FLUSH completion to userspace.
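From the requester's point of view (a sketch of a hypothetical kernel
poller, for illustration), a completed flush then surfaces as:

  /* Sketch: observing a flush completion. */
  struct ib_wc wc;

  if (ib_poll_cq(cq, 1, &wc) == 1 &&
      wc.opcode == IB_WC_FLUSH &&   /* mapped from IB_WR_FLUSH below */
      wc.status == IB_WC_SUCCESS) {
      /* the requested placement guarantees now hold remotely;
       * the ACK arrived as a zero-length RDMA READ response
       */
  }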
Per the IBA specification, a FLUSH request is acknowledged with an RDMA READ
response of length zero. Use the IB_WC_FLUSH (aka IB_UVERBS_WC_FLUSH) opcode
to report a FLUSH completion to userspace.

Signed-off-by: Li Zhijian
---
 drivers/infiniband/sw/rxe/rxe_comp.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index fb0c008af78c..2dea786e20ad 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -104,6 +104,7 @@ static enum ib_wc_opcode wr_to_wc_opcode(enum ib_wr_opcode opcode)
 	case IB_WR_LOCAL_INV:			return IB_WC_LOCAL_INV;
 	case IB_WR_REG_MR:			return IB_WC_REG_MR;
 	case IB_WR_BIND_MW:			return IB_WC_BIND_MW;
+	case IB_WR_FLUSH:			return IB_WC_FLUSH;
 
 	default:
 		return 0xff;
@@ -263,7 +264,8 @@ static inline enum comp_state check_ack(struct rxe_qp *qp,
 	 */
 	case IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE:
 		if (wqe->wr.opcode != IB_WR_RDMA_READ &&
-		    wqe->wr.opcode != IB_WR_RDMA_READ_WITH_INV) {
+		    wqe->wr.opcode != IB_WR_RDMA_READ_WITH_INV &&
+		    wqe->wr.opcode != IB_WR_FLUSH) {
 			wqe->status = IB_WC_FATAL_ERR;
 			return COMPST_ERROR;
 		}
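To see what this buys a consumer: once an IB_WR_FLUSH work request completes, it surfaces as an ordinary send-side completion carrying the new opcode. A hedged sketch of a kernel ULP completion handler follows; the handler name and surrounding plumbing are illustrative, not part of this patch.

/* Illustrative ULP callback: a successful flush completion reports
 * IB_WC_FLUSH, meaning the data preceding the flush is now placed
 * per the requested placement type (global visibility and/or
 * persistence).
 */
static void flush_done(struct ib_cq *cq, struct ib_wc *wc)
{
	if (wc->status != IB_WC_SUCCESS) {
		pr_err("flush failed: %s\n", ib_wc_status_msg(wc->status));
		return;
	}

	WARN_ON_ONCE(wc->opcode != IB_WC_FLUSH);
}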
From patchwork Tue Sep 27 05:53:36 2022
From: Li Zhijian
To: Bob Pearson, Leon Romanovsky, Jason Gunthorpe, linux-rdma@vger.kernel.org
Subject: [for-next PATCH v5 10/11] RDMA/cm: Make QP FLUSHABLE
Date: Tue, 27 Sep 2022 13:53:36 +0800
Message-Id: <20220927055337.22630-11-lizhijian@fujitsu.com>
In-Reply-To: <20220927055337.22630-1-lizhijian@fujitsu.com>
References: <20220927055337.22630-1-lizhijian@fujitsu.com>

Enable the IB_ACCESS_FLUSHABLE access flag for a QP whenever the connection
manager grants it remote read/atomic access.

Signed-off-by: Li Zhijian
---
V5: new patch, inspired by Bob
---
 drivers/infiniband/core/cm.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 1f9938a2c475..58837aac980b 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -4096,7 +4096,8 @@ static int cm_init_qp_init_attr(struct cm_id_private *cm_id_priv,
 		qp_attr->qp_access_flags = IB_ACCESS_REMOTE_WRITE;
 		if (cm_id_priv->responder_resources)
 			qp_attr->qp_access_flags |= IB_ACCESS_REMOTE_READ |
-						    IB_ACCESS_REMOTE_ATOMIC;
+						    IB_ACCESS_REMOTE_ATOMIC |
+						    IB_ACCESS_FLUSHABLE;
 		qp_attr->pkey_index = cm_id_priv->av.pkey_index;
 		if (cm_id_priv->av.port)
 			qp_attr->port_num = cm_id_priv->av.port->port_num;
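For consumers that set up QPs by hand rather than through the CM, the equivalent step is to include the flag in the INIT-state attributes. A sketch under the assumption of an already-created RC QP; the helper name and the non-access fields are illustrative, not mandated by this patch.

/* Hypothetical helper: grant FLUSH permission explicitly when not
 * using the connection manager, by adding IB_ACCESS_FLUSHABLE to the
 * INIT-state access flags.
 */
static int grant_flush_access(struct ib_qp *qp, u32 port_num)
{
	struct ib_qp_attr attr = {
		.qp_state	 = IB_QPS_INIT,
		.pkey_index	 = 0,
		.port_num	 = port_num,
		.qp_access_flags = IB_ACCESS_REMOTE_WRITE |
				   IB_ACCESS_REMOTE_READ |
				   IB_ACCESS_FLUSHABLE,
	};

	return ib_modify_qp(qp, &attr,
			    IB_QP_STATE | IB_QP_PKEY_INDEX |
			    IB_QP_PORT | IB_QP_ACCESS_FLAGS);
}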
From patchwork Tue Sep 27 05:53:37 2022
From: Li Zhijian
To: Bob Pearson, Leon Romanovsky, Jason Gunthorpe, linux-rdma@vger.kernel.org
Subject: [for-next PATCH v5 11/11] RDMA/rxe: Enable RDMA FLUSH capability for rxe device
Date: Tue, 27 Sep 2022 13:53:37 +0800
Message-Id: <20220927055337.22630-12-lizhijian@fujitsu.com>
In-Reply-To: <20220927055337.22630-1-lizhijian@fujitsu.com>
References: <20220927055337.22630-1-lizhijian@fujitsu.com>

Now that all the pieces are in place, advertise the RDMA FLUSH capability
for the rxe device. It supports the Global Visibility and Persistence
placement types.

Signed-off-by: Li Zhijian
---
 drivers/infiniband/sw/rxe/rxe_param.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_param.h b/drivers/infiniband/sw/rxe/rxe_param.h
index 86c7a8bf3cbb..c7a82823a041 100644
--- a/drivers/infiniband/sw/rxe/rxe_param.h
+++ b/drivers/infiniband/sw/rxe/rxe_param.h
@@ -51,7 +51,9 @@ enum rxe_device_param {
 			| IB_DEVICE_SRQ_RESIZE
 			| IB_DEVICE_MEM_MGT_EXTENSIONS
 			| IB_DEVICE_MEM_WINDOW
-			| IB_DEVICE_MEM_WINDOW_TYPE_2B,
+			| IB_DEVICE_MEM_WINDOW_TYPE_2B
+			| IB_DEVICE_FLUSH_GLOBAL
+			| IB_DEVICE_FLUSH_PERSISTENT,
 	RXE_MAX_SGE			= 32,
 	RXE_MAX_WQE_SIZE		= sizeof(struct rxe_send_wqe) +
 					  sizeof(struct ib_sge) * RXE_MAX_SGE,
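A ULP should gate its use of FLUSH on these capability bits rather than on the device type. A minimal sketch reading the device attributes that this patch extends; the helper name is illustrative.

/* Illustrative check: only issue IB_WR_FLUSH with a persistence
 * placement type if the device advertises the capability bits this
 * patch enables for rxe.
 */
static bool supports_flush_persistent(struct ib_device *dev)
{
	u64 caps = dev->attrs.device_cap_flags;

	return (caps & IB_DEVICE_FLUSH_GLOBAL) &&
	       (caps & IB_DEVICE_FLUSH_PERSISTENT);
}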