From patchwork Fri Nov 27 20:55:40 2020
X-Patchwork-Submitter: "Xiong, Jianxin"
X-Patchwork-Id: 11937253
From: Jianxin Xiong
To: linux-rdma@vger.kernel.org, dri-devel@lists.freedesktop.org
Cc: Jianxin Xiong, Doug Ledford, Jason Gunthorpe, Leon Romanovsky,
    Sumit Semwal, Christian Koenig, Daniel Vetter
Subject: [PATCH rdma-core v3 3/6] mlx5: Support dma-buf based memory region
Date: Fri, 27 Nov 2020 12:55:40 -0800
Message-Id: <1606510543-45567-4-git-send-email-jianxin.xiong@intel.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1606510543-45567-1-git-send-email-jianxin.xiong@intel.com>
References: <1606510543-45567-1-git-send-email-jianxin.xiong@intel.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Implement the new provider method for registering dma-buf based memory
regions.

Signed-off-by: Jianxin Xiong
---
 providers/mlx5/mlx5.c  |  2 ++
 providers/mlx5/mlx5.h  |  3 +++
 providers/mlx5/verbs.c | 22 ++++++++++++++++++++++
 3 files changed, 27 insertions(+)

diff --git a/providers/mlx5/mlx5.c b/providers/mlx5/mlx5.c
index 1378acf..b3e2d57 100644
--- a/providers/mlx5/mlx5.c
+++ b/providers/mlx5/mlx5.c
@@ -1,5 +1,6 @@
 /*
  * Copyright (c) 2012 Mellanox Technologies, Inc. All rights reserved.
+ * Copyright (c) 2020 Intel Corporation. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
@@ -96,6 +97,7 @@ static const struct verbs_context_ops mlx5_ctx_common_ops = {
 	.async_event = mlx5_async_event,
 	.dealloc_pd = mlx5_free_pd,
 	.reg_mr = mlx5_reg_mr,
+	.reg_dmabuf_mr = mlx5_reg_dmabuf_mr,
 	.rereg_mr = mlx5_rereg_mr,
 	.dereg_mr = mlx5_dereg_mr,
 	.alloc_mw = mlx5_alloc_mw,
diff --git a/providers/mlx5/mlx5.h b/providers/mlx5/mlx5.h
index 8c94f72..17a2470 100644
--- a/providers/mlx5/mlx5.h
+++ b/providers/mlx5/mlx5.h
@@ -1,5 +1,6 @@
 /*
  * Copyright (c) 2012 Mellanox Technologies, Inc. All rights reserved.
+ * Copyright (c) 2020 Intel Corporation. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
@@ -903,6 +904,8 @@ void mlx5_async_event(struct ibv_context *context,
 struct ibv_mr *mlx5_alloc_null_mr(struct ibv_pd *pd);
 struct ibv_mr *mlx5_reg_mr(struct ibv_pd *pd, void *addr, size_t length,
 			   uint64_t hca_va, int access);
+struct ibv_mr *mlx5_reg_dmabuf_mr(struct ibv_pd *pd, uint64_t offset, size_t length,
+				  uint64_t iova, int fd, int access);
 int mlx5_rereg_mr(struct verbs_mr *mr, int flags, struct ibv_pd *pd, void *addr,
 		  size_t length, int access);
 int mlx5_dereg_mr(struct verbs_mr *mr);
diff --git a/providers/mlx5/verbs.c b/providers/mlx5/verbs.c
index b956156..a7fc3b0 100644
--- a/providers/mlx5/verbs.c
+++ b/providers/mlx5/verbs.c
@@ -1,5 +1,6 @@
 /*
  * Copyright (c) 2012 Mellanox Technologies, Inc. All rights reserved.
+ * Copyright (c) 2020 Intel Corporation. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
@@ -647,6 +648,27 @@ struct ibv_mr *mlx5_reg_mr(struct ibv_pd *pd, void *addr, size_t length,
 	return &mr->vmr.ibv_mr;
 }
 
+struct ibv_mr *mlx5_reg_dmabuf_mr(struct ibv_pd *pd, uint64_t offset, size_t length,
+				  uint64_t iova, int fd, int acc)
+{
+	struct mlx5_mr *mr;
+	int ret;
+
+	mr = calloc(1, sizeof(*mr));
+	if (!mr)
+		return NULL;
+
+	ret = ibv_cmd_reg_dmabuf_mr(pd, offset, length, iova, fd, acc,
+				    &mr->vmr);
+	if (ret) {
+		free(mr);
+		return NULL;
+	}
+	mr->alloc_flags = acc;
+
+	return &mr->vmr.ibv_mr;
+}
+
 struct ibv_mr *mlx5_alloc_null_mr(struct ibv_pd *pd)
 {
 	struct mlx5_mr *mr;
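
For context, a minimal usage sketch (not part of the patch) of how an
application might exercise this provider method. It assumes the
ibv_reg_dmabuf_mr() verbs API introduced earlier in this series, which
libibverbs dispatches to the provider's reg_dmabuf_mr method, i.e.
mlx5_reg_dmabuf_mr() above. The dmabuf_fd argument is a placeholder for a
file descriptor obtained from some dma-buf exporter (e.g. a GPU driver),
and error handling is abbreviated.

/*
 * Usage sketch: register a dma-buf backed memory region on an mlx5 PD.
 * "dmabuf_fd" is assumed to come from a dma-buf exporter; offset and
 * iova are both 0 here, i.e. the whole buffer is mapped.
 */
#include <stdio.h>
#include <infiniband/verbs.h>

static struct ibv_mr *register_dmabuf_mr(struct ibv_pd *pd, int dmabuf_fd,
					  size_t length)
{
	struct ibv_mr *mr;

	mr = ibv_reg_dmabuf_mr(pd, 0, length, 0, dmabuf_fd,
			       IBV_ACCESS_LOCAL_WRITE |
			       IBV_ACCESS_REMOTE_READ |
			       IBV_ACCESS_REMOTE_WRITE);
	if (!mr)
		perror("ibv_reg_dmabuf_mr");

	return mr;
}

As the diff shows, the provider side only allocates the mlx5_mr wrapper and
forwards the request to the kernel through ibv_cmd_reg_dmabuf_mr(); the
dma-buf specific work is done by the kernel side of the series.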