From patchwork Thu Jan 14 21:35:33 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Xiong, Jianxin"
X-Patchwork-Id: 12020933
From: Jianxin Xiong
To: linux-rdma@vger.kernel.org, dri-devel@lists.freedesktop.org
Cc: Jianxin Xiong, Doug Ledford, Jason Gunthorpe, Leon Romanovsky,
    Sumit Semwal, Christian Koenig, Daniel Vetter, Edward Srouji,
    Yishai Hadas
Subject: [PATCH rdma-core v5 3/6] mlx5: Support dma-buf based memory region
Date: Thu, 14 Jan 2021 13:35:33 -0800
Message-Id: <1610660136-126627-4-git-send-email-jianxin.xiong@intel.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1610660136-126627-1-git-send-email-jianxin.xiong@intel.com>
References: <1610660136-126627-1-git-send-email-jianxin.xiong@intel.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Implement the new provider method for registering dma-buf based memory
regions.

Signed-off-by: Jianxin Xiong
---
 providers/mlx5/mlx5.c  |  2 ++
 providers/mlx5/mlx5.h  |  3 +++
 providers/mlx5/verbs.c | 22 ++++++++++++++++++++++
 3 files changed, 27 insertions(+)

diff --git a/providers/mlx5/mlx5.c b/providers/mlx5/mlx5.c
index a2a1696..b3c49af 100644
--- a/providers/mlx5/mlx5.c
+++ b/providers/mlx5/mlx5.c
@@ -1,5 +1,6 @@
 /*
  * Copyright (c) 2012 Mellanox Technologies, Inc. All rights reserved.
+ * Copyright (c) 2020 Intel Corporation. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
@@ -95,6 +96,7 @@ static const struct verbs_context_ops mlx5_ctx_common_ops = {
 	.async_event = mlx5_async_event,
 	.dealloc_pd = mlx5_free_pd,
 	.reg_mr = mlx5_reg_mr,
+	.reg_dmabuf_mr = mlx5_reg_dmabuf_mr,
 	.rereg_mr = mlx5_rereg_mr,
 	.dereg_mr = mlx5_dereg_mr,
 	.alloc_mw = mlx5_alloc_mw,
diff --git a/providers/mlx5/mlx5.h b/providers/mlx5/mlx5.h
index bafe077..decc7f8 100644
--- a/providers/mlx5/mlx5.h
+++ b/providers/mlx5/mlx5.h
@@ -1,5 +1,6 @@
 /*
  * Copyright (c) 2012 Mellanox Technologies, Inc. All rights reserved.
+ * Copyright (c) 2020 Intel Corporation. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
@@ -941,6 +942,8 @@ void mlx5_async_event(struct ibv_context *context,
 struct ibv_mr *mlx5_alloc_null_mr(struct ibv_pd *pd);
 struct ibv_mr *mlx5_reg_mr(struct ibv_pd *pd, void *addr, size_t length,
 			   uint64_t hca_va, int access);
+struct ibv_mr *mlx5_reg_dmabuf_mr(struct ibv_pd *pd, uint64_t offset, size_t length,
+				  uint64_t iova, int fd, int access);
 int mlx5_rereg_mr(struct verbs_mr *mr, int flags, struct ibv_pd *pd, void *addr,
 		  size_t length, int access);
 int mlx5_dereg_mr(struct verbs_mr *mr);
diff --git a/providers/mlx5/verbs.c b/providers/mlx5/verbs.c
index b2391e8..cc0bc42 100644
--- a/providers/mlx5/verbs.c
+++ b/providers/mlx5/verbs.c
@@ -1,5 +1,6 @@
 /*
  * Copyright (c) 2012 Mellanox Technologies, Inc. All rights reserved.
+ * Copyright (c) 2020 Intel Corporation. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
@@ -626,6 +627,27 @@ struct ibv_mr *mlx5_reg_mr(struct ibv_pd *pd, void *addr, size_t length,
 	return &mr->vmr.ibv_mr;
 }
 
+struct ibv_mr *mlx5_reg_dmabuf_mr(struct ibv_pd *pd, uint64_t offset, size_t length,
+				  uint64_t iova, int fd, int acc)
+{
+	struct mlx5_mr *mr;
+	int ret;
+
+	mr = calloc(1, sizeof(*mr));
+	if (!mr)
+		return NULL;
+
+	ret = ibv_cmd_reg_dmabuf_mr(pd, offset, length, iova, fd, acc,
+				    &mr->vmr);
+	if (ret) {
+		free(mr);
+		return NULL;
+	}
+	mr->alloc_flags = acc;
+
+	return &mr->vmr.ibv_mr;
+}
+
 struct ibv_mr *mlx5_alloc_null_mr(struct ibv_pd *pd)
 {
 	struct mlx5_mr *mr;
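
For reviewers' convenience, below is a minimal usage sketch (not part of the
patch) of how an application could reach this code through the
ibv_reg_dmabuf_mr() verbs entry point introduced earlier in this series,
which dispatches to the provider's .reg_dmabuf_mr op (mlx5_reg_dmabuf_mr()
here). The dmabuf_fd shown is a hypothetical file descriptor assumed to come
from a dma-buf exporter such as a GPU driver, and the offset/iova/access
choices are illustrative only:

/* Illustrative sketch only -- not part of this patch.
 * Assumes a dma-buf fd exported by some device driver and an
 * already-allocated protection domain.
 */
#include <stdio.h>
#include <infiniband/verbs.h>

static struct ibv_mr *register_dmabuf_region(struct ibv_pd *pd,
					     int dmabuf_fd, size_t length)
{
	struct ibv_mr *mr;

	/* Register the whole buffer: offset 0 within the dma-buf, iova 0
	 * chosen for simplicity; allow local and remote access.
	 */
	mr = ibv_reg_dmabuf_mr(pd, 0, length, 0, dmabuf_fd,
			       IBV_ACCESS_LOCAL_WRITE |
			       IBV_ACCESS_REMOTE_READ |
			       IBV_ACCESS_REMOTE_WRITE);
	if (!mr)
		perror("ibv_reg_dmabuf_mr");

	return mr;
}

As the diff shows, the provider op itself only forwards the arguments to
ibv_cmd_reg_dmabuf_mr(); the actual dma-buf import and page-table setup are
handled by the kernel side of this series.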