From patchwork Mon Jan 25 19:56:59 2021
X-Patchwork-Submitter: "Xiong, Jianxin"
X-Patchwork-Id: 12044943
From: Jianxin Xiong
To: linux-rdma@vger.kernel.org, dri-devel@lists.freedesktop.org
Cc: Jianxin Xiong, Doug Ledford, Jason Gunthorpe, Leon Romanovsky,
    Sumit Semwal, Christian Koenig, Daniel Vetter, Edward Srouji,
    Yishai Hadas
Subject: [PATCH rdma-core v7 3/6] mlx5: Support dma-buf based memory region
Date: Mon, 25 Jan 2021 11:56:59 -0800
Message-Id: <1611604622-86968-4-git-send-email-jianxin.xiong@intel.com>
In-Reply-To: <1611604622-86968-1-git-send-email-jianxin.xiong@intel.com>
References: <1611604622-86968-1-git-send-email-jianxin.xiong@intel.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Implement the new provider method for registering dma-buf based memory
regions.

Signed-off-by: Jianxin Xiong
---
 providers/mlx5/mlx5.c  |  2 ++
 providers/mlx5/mlx5.h  |  3 +++
 providers/mlx5/verbs.c | 22 ++++++++++++++++++++++
 3 files changed, 27 insertions(+)

diff --git a/providers/mlx5/mlx5.c b/providers/mlx5/mlx5.c
index a2a1696..b3c49af 100644
--- a/providers/mlx5/mlx5.c
+++ b/providers/mlx5/mlx5.c
@@ -1,5 +1,6 @@
 /*
  * Copyright (c) 2012 Mellanox Technologies, Inc. All rights reserved.
+ * Copyright (c) 2020 Intel Corporation. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses.  You may choose to be licensed under the terms of the GNU
@@ -95,6 +96,7 @@ static const struct verbs_context_ops mlx5_ctx_common_ops = {
 	.async_event = mlx5_async_event,
 	.dealloc_pd = mlx5_free_pd,
 	.reg_mr = mlx5_reg_mr,
+	.reg_dmabuf_mr = mlx5_reg_dmabuf_mr,
 	.rereg_mr = mlx5_rereg_mr,
 	.dereg_mr = mlx5_dereg_mr,
 	.alloc_mw = mlx5_alloc_mw,
diff --git a/providers/mlx5/mlx5.h b/providers/mlx5/mlx5.h
index bafe077..decc7f8 100644
--- a/providers/mlx5/mlx5.h
+++ b/providers/mlx5/mlx5.h
@@ -1,5 +1,6 @@
 /*
  * Copyright (c) 2012 Mellanox Technologies, Inc. All rights reserved.
+ * Copyright (c) 2020 Intel Corporation. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses.  You may choose to be licensed under the terms of the GNU
@@ -941,6 +942,8 @@ void mlx5_async_event(struct ibv_context *context,
 struct ibv_mr *mlx5_alloc_null_mr(struct ibv_pd *pd);
 struct ibv_mr *mlx5_reg_mr(struct ibv_pd *pd, void *addr, size_t length,
 			   uint64_t hca_va, int access);
+struct ibv_mr *mlx5_reg_dmabuf_mr(struct ibv_pd *pd, uint64_t offset, size_t length,
+				  uint64_t iova, int fd, int access);
 int mlx5_rereg_mr(struct verbs_mr *mr, int flags, struct ibv_pd *pd, void *addr,
 		  size_t length, int access);
 int mlx5_dereg_mr(struct verbs_mr *mr);
diff --git a/providers/mlx5/verbs.c b/providers/mlx5/verbs.c
index 30c7b6e..7790b50 100644
--- a/providers/mlx5/verbs.c
+++ b/providers/mlx5/verbs.c
@@ -1,5 +1,6 @@
 /*
  * Copyright (c) 2012 Mellanox Technologies, Inc. All rights reserved.
+ * Copyright (c) 2020 Intel Corporation. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses.  You may choose to be licensed under the terms of the GNU
@@ -628,6 +629,27 @@ struct ibv_mr *mlx5_reg_mr(struct ibv_pd *pd, void *addr, size_t length,
 	return &mr->vmr.ibv_mr;
 }
 
+struct ibv_mr *mlx5_reg_dmabuf_mr(struct ibv_pd *pd, uint64_t offset, size_t length,
+				  uint64_t iova, int fd, int acc)
+{
+	struct mlx5_mr *mr;
+	int ret;
+
+	mr = calloc(1, sizeof(*mr));
+	if (!mr)
+		return NULL;
+
+	ret = ibv_cmd_reg_dmabuf_mr(pd, offset, length, iova, fd, acc,
+				    &mr->vmr);
+	if (ret) {
+		free(mr);
+		return NULL;
+	}
+	mr->alloc_flags = acc;
+
+	return &mr->vmr.ibv_mr;
+}
+
 struct ibv_mr *mlx5_alloc_null_mr(struct ibv_pd *pd)
 {
 	struct mlx5_mr *mr;
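
Note (added for context, not part of the patch): a minimal usage sketch of the
new registration path. An application calls the generic ibv_reg_dmabuf_mr()
entry point introduced earlier in this series, and libibverbs dispatches it to
the provider's .reg_dmabuf_mr op, i.e. mlx5_reg_dmabuf_mr() above. The helper
name register_dmabuf_mr() and the dmabuf_fd argument are hypothetical; the fd
is assumed to come from a dma-buf exporter such as a GPU driver.

#include <stdio.h>
#include <infiniband/verbs.h>

/*
 * Usage sketch: register a dma-buf backed memory region against a PD.
 * register_dmabuf_mr() and dmabuf_fd are illustrative names only.
 */
static struct ibv_mr *register_dmabuf_mr(struct ibv_pd *pd, int dmabuf_fd,
					 size_t length)
{
	struct ibv_mr *mr;

	/* offset 0 into the dma-buf; iova 0 gives zero-based addressing */
	mr = ibv_reg_dmabuf_mr(pd, 0, length, 0, dmabuf_fd,
			       IBV_ACCESS_LOCAL_WRITE |
			       IBV_ACCESS_REMOTE_READ |
			       IBV_ACCESS_REMOTE_WRITE);
	if (!mr)
		perror("ibv_reg_dmabuf_mr");

	return mr;
}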