From patchwork Wed Oct 14 16:15:16 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Xiong, Jianxin"
X-Patchwork-Id: 11838003
From: Jianxin Xiong
To: linux-rdma@vger.kernel.org, dri-devel@lists.freedesktop.org
Cc: Jianxin Xiong, Doug Ledford, Jason Gunthorpe, Leon Romanovsky,
    Sumit Semwal, Christian Koenig, Daniel Vetter
Subject: [PATCH v4 1/5] RDMA/umem: Support importing dma-buf as user memory region
Date: Wed, 14 Oct 2020 09:15:16 -0700
Message-Id: <1602692116-106937-1-git-send-email-jianxin.xiong@intel.com>
X-Mailer: git-send-email 1.8.3.1
X-Mailing-List: linux-rdma@vger.kernel.org

Dma-buf is a standard cross-driver buffer sharing mechanism that can be
used to support peer-to-peer access from RDMA devices. Device memory
exported via dma-buf is associated with a file descriptor. This is
passed to user space as a property associated with the buffer
allocation. When the buffer is registered as a memory region, the file
descriptor is passed to the RDMA driver along with other parameters.

Implement the common code for importing a dma-buf object and mapping
dma-buf pages.

Signed-off-by: Jianxin Xiong
Reviewed-by: Sean Hefty
Acked-by: Michael J. Ruhl
Acked-by: Christian Koenig
Reported-by: kernel test robot
Acked-by: Daniel Vetter
---
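[ Not part of the patch: a hedged sketch of how a device driver might
  consume this interface. Only ib_umem_dmabuf_get(),
  ib_umem_dmabuf_init_mapping() and struct ib_umem_dmabuf_ops come from
  this patch; every my_*-prefixed name is hypothetical. ]

static int my_init_cb(struct ib_umem *umem, void *context)
{
        /* Program the device's MR translation from umem->sg_head. */
        return 0;
}

static int my_update_cb(struct ib_umem *umem, void *context)
{
        /* Re-program the translation after the exporter moved the buffer. */
        return 0;
}

static int my_invalidate_cb(struct ib_umem *umem, void *context)
{
        /* Quiesce device access before the mapping is torn down. */
        return 0;
}

static const struct ib_umem_dmabuf_ops my_dmabuf_ops = {
        .init           = my_init_cb,
        .update         = my_update_cb,
        .invalidate     = my_invalidate_cb,
};

/* Called from the driver's reg_user_mr path: */
static struct ib_umem *my_reg_dmabuf_umem(struct ib_device *ibdev,
                                          unsigned long start, size_t length,
                                          int dmabuf_fd, int access_flags,
                                          void *my_dev_ctx)
{
        struct ib_umem *umem;
        int err;

        umem = ib_umem_dmabuf_get(ibdev, start, length, dmabuf_fd,
                                  access_flags, &my_dmabuf_ops);
        if (IS_ERR(umem))
                return umem;

        /* Once the driver has set up device context for the MR: */
        err = ib_umem_dmabuf_init_mapping(umem, my_dev_ctx);
        if (err) {
                /* ib_umem_release() dispatches to ib_umem_dmabuf_release() */
                ib_umem_release(umem);
                return ERR_PTR(err);
        }
        return umem;
}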
 drivers/infiniband/core/Makefile      |   2 +-
 drivers/infiniband/core/umem.c        |   4 +
 drivers/infiniband/core/umem_dmabuf.c | 200 ++++++++++++++++++++++++++++++++++
 drivers/infiniband/core/umem_dmabuf.h |  11 ++
 include/rdma/ib_umem.h                |  32 +++++-
 5 files changed, 247 insertions(+), 2 deletions(-)
 create mode 100644 drivers/infiniband/core/umem_dmabuf.c
 create mode 100644 drivers/infiniband/core/umem_dmabuf.h

diff --git a/drivers/infiniband/core/Makefile b/drivers/infiniband/core/Makefile
index ccf2670..8ab4eea 100644
--- a/drivers/infiniband/core/Makefile
+++ b/drivers/infiniband/core/Makefile
@@ -40,5 +40,5 @@ ib_uverbs-y := uverbs_main.o uverbs_cmd.o uverbs_marshall.o \
                                uverbs_std_types_srq.o \
                                uverbs_std_types_wq.o \
                                uverbs_std_types_qp.o
-ib_uverbs-$(CONFIG_INFINIBAND_USER_MEM) += umem.o
+ib_uverbs-$(CONFIG_INFINIBAND_USER_MEM) += umem.o umem_dmabuf.o
 ib_uverbs-$(CONFIG_INFINIBAND_ON_DEMAND_PAGING) += umem_odp.o

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index e9fecbd..8c608a5 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -2,6 +2,7 @@
  * Copyright (c) 2005 Topspin Communications. All rights reserved.
  * Copyright (c) 2005 Cisco Systems. All rights reserved.
  * Copyright (c) 2005 Mellanox Technologies. All rights reserved.
+ * Copyright (c) 2020 Intel Corporation. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
@@ -43,6 +44,7 @@
 #include <rdma/ib_umem_odp.h>
 
 #include "uverbs.h"
+#include "umem_dmabuf.h"
 
 static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int dirty)
 {
@@ -269,6 +271,8 @@ void ib_umem_release(struct ib_umem *umem)
 {
        if (!umem)
                return;
+       if (umem->is_dmabuf)
+               return ib_umem_dmabuf_release(umem);
        if (umem->is_odp)
                return ib_umem_odp_release(to_ib_umem_odp(umem));
diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c
new file mode 100644
index 0000000..4f2303e
--- /dev/null
+++ b/drivers/infiniband/core/umem_dmabuf.c
@@ -0,0 +1,200 @@
+// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
+/*
+ * Copyright (c) 2020 Intel Corporation. All rights reserved.
+ */
+
+#include <linux/dma-buf.h>
+#include <linux/dma-resv.h>
+#include <linux/dma-mapping.h>
+
+#include "uverbs.h"
+
+struct ib_umem_dmabuf {
+       struct ib_umem umem;
+       struct dma_buf_attachment *attach;
+       struct sg_table *sgt;
+       const struct ib_umem_dmabuf_ops *ops;
+       void *device_context;
+       struct work_struct work;
+};
+
+static inline struct ib_umem_dmabuf *to_ib_umem_dmabuf(struct ib_umem *umem)
+{
+       return container_of(umem, struct ib_umem_dmabuf, umem);
+}
+
+int ib_umem_dmabuf_map_pages(struct ib_umem *umem, bool first)
+{
+       struct ib_umem_dmabuf *umem_dmabuf = to_ib_umem_dmabuf(umem);
+       struct sg_table *sgt;
+       struct dma_fence *fence;
+       int err;
+
+       dma_resv_lock(umem_dmabuf->attach->dmabuf->resv, NULL);
+
+       sgt = dma_buf_map_attachment(umem_dmabuf->attach,
+                                    DMA_BIDIRECTIONAL);
+
+       if (IS_ERR(sgt)) {
+               dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv);
+               return PTR_ERR(sgt);
+       }
+
+       umem_dmabuf->umem.sg_head = *sgt;
+       umem_dmabuf->umem.nmap = sgt->nents;
+       umem_dmabuf->sgt = sgt;
+
+       fence = dma_resv_get_excl(umem_dmabuf->attach->dmabuf->resv);
+       if (fence)
+               dma_fence_wait(fence, false);
+
+       if (first)
+               err = umem_dmabuf->ops->init(umem,
+                                            umem_dmabuf->device_context);
+       else
+               err = umem_dmabuf->ops->update(umem,
+                                              umem_dmabuf->device_context);
+
+       dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv);
+       return err;
+}
+
+int ib_umem_dmabuf_init_mapping(struct ib_umem *umem, void *device_context)
+{
+       struct ib_umem_dmabuf *umem_dmabuf = to_ib_umem_dmabuf(umem);
+
+       umem_dmabuf->device_context = device_context;
+       return ib_umem_dmabuf_map_pages(umem, true);
+}
+EXPORT_SYMBOL(ib_umem_dmabuf_init_mapping);
+
+bool ib_umem_dmabuf_mapping_ready(struct ib_umem *umem)
+{
+       struct ib_umem_dmabuf *umem_dmabuf = to_ib_umem_dmabuf(umem);
+       bool ret;
+
+       dma_resv_lock(umem_dmabuf->attach->dmabuf->resv, NULL);
+       ret = !!umem_dmabuf->sgt;
+       dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv);
+       return ret;
+}
+EXPORT_SYMBOL(ib_umem_dmabuf_mapping_ready);
+
+static void ib_umem_dmabuf_unmap_pages(struct ib_umem *umem, bool do_invalidate)
+{
+       struct ib_umem_dmabuf *umem_dmabuf = to_ib_umem_dmabuf(umem);
+
+       dma_resv_assert_held(umem_dmabuf->attach->dmabuf->resv);
+
+       if (!umem_dmabuf->sgt)
+               return;
+
+       if (do_invalidate)
+               umem_dmabuf->ops->invalidate(umem, umem_dmabuf->device_context);
+
+       dma_buf_unmap_attachment(umem_dmabuf->attach, umem_dmabuf->sgt,
+                                DMA_BIDIRECTIONAL);
+       umem_dmabuf->sgt = NULL;
+}
+
+static void ib_umem_dmabuf_work(struct work_struct *work)
+{
+       struct ib_umem_dmabuf *umem_dmabuf;
+       int ret;
+
+       umem_dmabuf = container_of(work, struct ib_umem_dmabuf, work);
+       ret = ib_umem_dmabuf_map_pages(&umem_dmabuf->umem, false);
+       if (ret)
+               pr_debug("%s: failed to update dmabuf mapping, error %d\n",
+                        __func__, ret);
+}
+
+static void ib_umem_dmabuf_invalidate_cb(struct dma_buf_attachment *attach)
+{
+       struct ib_umem_dmabuf *umem_dmabuf = attach->importer_priv;
+
+       dma_resv_assert_held(umem_dmabuf->attach->dmabuf->resv);
+
+       ib_umem_dmabuf_unmap_pages(&umem_dmabuf->umem, true);
+       queue_work(ib_wq, &umem_dmabuf->work);
+}
+
+static struct dma_buf_attach_ops ib_umem_dmabuf_attach_ops = {
+       .allow_peer2peer = 1,
+       .move_notify = ib_umem_dmabuf_invalidate_cb,
+};
+
+struct ib_umem *ib_umem_dmabuf_get(struct ib_device *device,
+                                  unsigned long addr, size_t size,
+                                  int dmabuf_fd, int access,
+                                  const struct ib_umem_dmabuf_ops *ops)
+{
+       struct dma_buf *dmabuf;
+       struct ib_umem_dmabuf *umem_dmabuf;
+       struct ib_umem *umem;
+       unsigned long end;
+       long ret;
+
+       if (check_add_overflow(addr, size, &end))
+               return ERR_PTR(-EINVAL);
+
+       if (unlikely(PAGE_ALIGN(end) < PAGE_SIZE))
+               return ERR_PTR(-EINVAL);
+
+       if (unlikely(!ops || !ops->invalidate || !ops->update))
+               return ERR_PTR(-EINVAL);
+
+       umem_dmabuf = kzalloc(sizeof(*umem_dmabuf), GFP_KERNEL);
+       if (!umem_dmabuf)
+               return ERR_PTR(-ENOMEM);
+
+       umem_dmabuf->ops = ops;
+       INIT_WORK(&umem_dmabuf->work, ib_umem_dmabuf_work);
+
+       umem = &umem_dmabuf->umem;
+       umem->ibdev = device;
+       umem->length = size;
+       umem->address = addr;
+       umem->writable = ib_access_writable(access);
+       umem->is_dmabuf = 1;
+
+       dmabuf = dma_buf_get(dmabuf_fd);
+       if (IS_ERR(dmabuf)) {
+               ret = PTR_ERR(dmabuf);
+               goto out_free_umem;
+       }
+
+       umem_dmabuf->attach = dma_buf_dynamic_attach(
+                                       dmabuf,
+                                       device->dma_device,
+                                       &ib_umem_dmabuf_attach_ops,
+                                       umem_dmabuf);
+       if (IS_ERR(umem_dmabuf->attach)) {
+               ret = PTR_ERR(umem_dmabuf->attach);
+               goto out_release_dmabuf;
+       }
+
+       return umem;
+
+out_release_dmabuf:
+       dma_buf_put(dmabuf);
+
+out_free_umem:
+       kfree(umem_dmabuf);
+       return ERR_PTR(ret);
+}
+EXPORT_SYMBOL(ib_umem_dmabuf_get);
+
+void ib_umem_dmabuf_release(struct ib_umem *umem)
+{
+       struct ib_umem_dmabuf *umem_dmabuf = to_ib_umem_dmabuf(umem);
+       struct dma_buf *dmabuf = umem_dmabuf->attach->dmabuf;
+
+       dma_resv_lock(umem_dmabuf->attach->dmabuf->resv, NULL);
+       ib_umem_dmabuf_unmap_pages(umem, false);
+       dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv);
+
+       dma_buf_detach(dmabuf, umem_dmabuf->attach);
+       dma_buf_put(dmabuf);
+       kfree(umem_dmabuf);
+}
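[ Not part of the patch: a note on the move_notify flow above, with a
  hedged sketch. Both ops->invalidate and ops->update run under the
  dma-buf reservation lock: the invalidate callback is called from
  ib_umem_dmabuf_invalidate_cb(), which only unmaps and queues work,
  while the remap (ib_umem_dmabuf_map_pages() with first == false) runs
  later from ib_wq and retakes the lock. A driver fault path could poll
  readiness as below; my_fault_retry() is hypothetical, only
  ib_umem_dmabuf_mapping_ready() comes from this patch. ]

/* Returns 0 when the dma-buf pages are mapped and device access may
 * resume, or -EAGAIN while a move_notify -> remap cycle is in flight. */
static int my_fault_retry(struct ib_umem *umem)
{
        if (!ib_umem_dmabuf_mapping_ready(umem))
                return -EAGAIN;
        return 0;
}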
diff --git a/drivers/infiniband/core/umem_dmabuf.h b/drivers/infiniband/core/umem_dmabuf.h
new file mode 100644
index 0000000..485f653
--- /dev/null
+++ b/drivers/infiniband/core/umem_dmabuf.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */
+/*
+ * Copyright (c) 2020 Intel Corporation. All rights reserved.
+ */
+
+#ifndef UMEM_DMABUF_H
+#define UMEM_DMABUF_H
+
+void ib_umem_dmabuf_release(struct ib_umem *umem);
+
+#endif /* UMEM_DMABUF_H */

diff --git a/include/rdma/ib_umem.h b/include/rdma/ib_umem.h
index 7059750..fac8553 100644
--- a/include/rdma/ib_umem.h
+++ b/include/rdma/ib_umem.h
@@ -1,6 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
 /*
  * Copyright (c) 2007 Cisco Systems. All rights reserved.
+ * Copyright (c) 2020 Intel Corporation. All rights reserved.
  */
 
 #ifndef IB_UMEM_H
@@ -22,12 +23,19 @@ struct ib_umem {
        unsigned long address;
        u32 writable : 1;
        u32 is_odp : 1;
+       u32 is_dmabuf : 1;
        struct work_struct work;
        struct sg_table sg_head;
        int nmap;
        unsigned int sg_nents;
 };
 
+struct ib_umem_dmabuf_ops {
+       int (*init)(struct ib_umem *umem, void *context);
+       int (*update)(struct ib_umem *umem, void *context);
+       int (*invalidate)(struct ib_umem *umem, void *context);
+};
+
 /* Returns the offset of the umem start relative to the first page. */
 static inline int ib_umem_offset(struct ib_umem *umem)
 {
@@ -79,6 +87,12 @@ int ib_umem_copy_from(void *dst, struct ib_umem *umem, size_t offset,
 unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
                                     unsigned long pgsz_bitmap,
                                     unsigned long virt);
+struct ib_umem *ib_umem_dmabuf_get(struct ib_device *device,
+                                  unsigned long addr, size_t size,
+                                  int dmabuf_fd, int access,
+                                  const struct ib_umem_dmabuf_ops *ops);
+int ib_umem_dmabuf_init_mapping(struct ib_umem *umem, void *device_context);
+bool ib_umem_dmabuf_mapping_ready(struct ib_umem *umem);
 
 #else /* CONFIG_INFINIBAND_USER_MEM */
 
@@ -101,7 +115,23 @@ static inline unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
 {
        return 0;
 }
+static inline struct ib_umem *ib_umem_dmabuf_get(struct ib_device *device,
+                                                unsigned long addr,
+                                                size_t size, int dmabuf_fd,
+                                                int access,
+                                                struct ib_umem_dmabuf_ops *ops)
+{
+       return ERR_PTR(-EINVAL);
+}
+static inline int ib_umem_dmabuf_init_mapping(struct ib_umem *umem,
+                                             void *device_context)
+{
+       return -EINVAL;
+}
+static inline bool ib_umem_dmabuf_mapping_ready(struct ib_umem *umem)
+{
+       return false;
+}
 
 #endif /* CONFIG_INFINIBAND_USER_MEM */
-
 #endif /* IB_UMEM_H */