From patchwork Wed Oct 14 16:15:16 2020
X-Patchwork-Submitter: "Xiong, Jianxin"
X-Patchwork-Id: 11838003
From: Jianxin Xiong
To: linux-rdma@vger.kernel.org, dri-devel@lists.freedesktop.org
Cc: Jianxin Xiong, Doug Ledford, Jason Gunthorpe, Leon Romanovsky, Sumit Semwal, Christian Koenig, Daniel Vetter
Subject: [PATCH v4 1/5] RDMA/umem: Support importing dma-buf as user memory region
Date: Wed, 14 Oct 2020 09:15:16 -0700
Message-Id: <1602692116-106937-1-git-send-email-jianxin.xiong@intel.com>

Dma-buf is a standard cross-driver buffer sharing mechanism that can be used to support peer-to-peer access from RDMA devices. Device memory exported via dma-buf is associated with a file descriptor. This is passed to user space as a property associated with the buffer allocation. When the buffer is registered as a memory region, the file descriptor is passed to the RDMA driver along with other parameters.

Implement the common code for importing dma-buf objects and mapping dma-buf pages.

Signed-off-by: Jianxin Xiong Reviewed-by: Sean Hefty Acked-by: Michael J.
Ruhl Acked-by: Christian Koenig Reported-by: kernel test robot Acked-by: Daniel Vetter --- drivers/infiniband/core/Makefile | 2 +- drivers/infiniband/core/umem.c | 4 + drivers/infiniband/core/umem_dmabuf.c | 200 ++++++++++++++++++++++++++++++++++ drivers/infiniband/core/umem_dmabuf.h | 11 ++ include/rdma/ib_umem.h | 32 +++++- 5 files changed, 247 insertions(+), 2 deletions(-) create mode 100644 drivers/infiniband/core/umem_dmabuf.c create mode 100644 drivers/infiniband/core/umem_dmabuf.h diff --git a/drivers/infiniband/core/Makefile b/drivers/infiniband/core/Makefile index ccf2670..8ab4eea 100644 --- a/drivers/infiniband/core/Makefile +++ b/drivers/infiniband/core/Makefile @@ -40,5 +40,5 @@ ib_uverbs-y := uverbs_main.o uverbs_cmd.o uverbs_marshall.o \ uverbs_std_types_srq.o \ uverbs_std_types_wq.o \ uverbs_std_types_qp.o -ib_uverbs-$(CONFIG_INFINIBAND_USER_MEM) += umem.o +ib_uverbs-$(CONFIG_INFINIBAND_USER_MEM) += umem.o umem_dmabuf.o ib_uverbs-$(CONFIG_INFINIBAND_ON_DEMAND_PAGING) += umem_odp.o diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c index e9fecbd..8c608a5 100644 --- a/drivers/infiniband/core/umem.c +++ b/drivers/infiniband/core/umem.c @@ -2,6 +2,7 @@ * Copyright (c) 2005 Topspin Communications. All rights reserved. * Copyright (c) 2005 Cisco Systems. All rights reserved. * Copyright (c) 2005 Mellanox Technologies. All rights reserved. + * Copyright (c) 2020 Intel Corporation. All rights reserved. * * This software is available to you under a choice of one of two * licenses. You may choose to be licensed under the terms of the GNU @@ -43,6 +44,7 @@ #include #include "uverbs.h" +#include "umem_dmabuf.h" static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int dirty) { @@ -269,6 +271,8 @@ void ib_umem_release(struct ib_umem *umem) { if (!umem) return; + if (umem->is_dmabuf) + return ib_umem_dmabuf_release(umem); if (umem->is_odp) return ib_umem_odp_release(to_ib_umem_odp(umem)); diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c new file mode 100644 index 0000000..4f2303e --- /dev/null +++ b/drivers/infiniband/core/umem_dmabuf.c @@ -0,0 +1,200 @@ +// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) +/* + * Copyright (c) 2020 Intel Corporation. All rights reserved. 
+ */ + +#include +#include +#include + +#include "uverbs.h" + +struct ib_umem_dmabuf { + struct ib_umem umem; + struct dma_buf_attachment *attach; + struct sg_table *sgt; + const struct ib_umem_dmabuf_ops *ops; + void *device_context; + struct work_struct work; +}; + +static inline struct ib_umem_dmabuf *to_ib_umem_dmabuf(struct ib_umem *umem) +{ + return container_of(umem, struct ib_umem_dmabuf, umem); +} + +int ib_umem_dmabuf_map_pages(struct ib_umem *umem, bool first) +{ + struct ib_umem_dmabuf *umem_dmabuf = to_ib_umem_dmabuf(umem); + struct sg_table *sgt; + struct dma_fence *fence; + int err; + + dma_resv_lock(umem_dmabuf->attach->dmabuf->resv, NULL); + + sgt = dma_buf_map_attachment(umem_dmabuf->attach, + DMA_BIDIRECTIONAL); + + if (IS_ERR(sgt)) { + dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv); + return PTR_ERR(sgt); + } + + umem_dmabuf->umem.sg_head = *sgt; + umem_dmabuf->umem.nmap = sgt->nents; + umem_dmabuf->sgt = sgt; + + fence = dma_resv_get_excl(umem_dmabuf->attach->dmabuf->resv); + if (fence) + dma_fence_wait(fence, false); + + if (first) + err = umem_dmabuf->ops->init(umem, + umem_dmabuf->device_context); + else + err = umem_dmabuf->ops->update(umem, + umem_dmabuf->device_context); + + dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv); + return err; +} + +int ib_umem_dmabuf_init_mapping(struct ib_umem *umem, void *device_context) +{ + struct ib_umem_dmabuf *umem_dmabuf = to_ib_umem_dmabuf(umem); + + umem_dmabuf->device_context = device_context; + return ib_umem_dmabuf_map_pages(umem, true); +} +EXPORT_SYMBOL(ib_umem_dmabuf_init_mapping); + +bool ib_umem_dmabuf_mapping_ready(struct ib_umem *umem) +{ + struct ib_umem_dmabuf *umem_dmabuf = to_ib_umem_dmabuf(umem); + bool ret; + + dma_resv_lock(umem_dmabuf->attach->dmabuf->resv, NULL); + ret = !!umem_dmabuf->sgt; + dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv); + return ret; +} +EXPORT_SYMBOL(ib_umem_dmabuf_mapping_ready); + +static void ib_umem_dmabuf_unmap_pages(struct ib_umem *umem, bool do_invalidate) +{ + struct ib_umem_dmabuf *umem_dmabuf = to_ib_umem_dmabuf(umem); + + dma_resv_assert_held(umem_dmabuf->attach->dmabuf->resv); + + if (!umem_dmabuf->sgt) + return; + + if (do_invalidate) + umem_dmabuf->ops->invalidate(umem, umem_dmabuf->device_context); + + dma_buf_unmap_attachment(umem_dmabuf->attach, umem_dmabuf->sgt, + DMA_BIDIRECTIONAL); + umem_dmabuf->sgt = NULL; +} + +static void ib_umem_dmabuf_work(struct work_struct *work) +{ + struct ib_umem_dmabuf *umem_dmabuf; + int ret; + + umem_dmabuf = container_of(work, struct ib_umem_dmabuf, work); + ret = ib_umem_dmabuf_map_pages(&umem_dmabuf->umem, false); + if (ret) + pr_debug("%s: failed to update dmabuf mapping, error %d\n", + __func__, ret); +} + +static void ib_umem_dmabuf_invalidate_cb(struct dma_buf_attachment *attach) +{ + struct ib_umem_dmabuf *umem_dmabuf = attach->importer_priv; + + dma_resv_assert_held(umem_dmabuf->attach->dmabuf->resv); + + ib_umem_dmabuf_unmap_pages(&umem_dmabuf->umem, true); + queue_work(ib_wq, &umem_dmabuf->work); +} + +static struct dma_buf_attach_ops ib_umem_dmabuf_attach_ops = { + .allow_peer2peer = 1, + .move_notify = ib_umem_dmabuf_invalidate_cb, +}; + +struct ib_umem *ib_umem_dmabuf_get(struct ib_device *device, + unsigned long addr, size_t size, + int dmabuf_fd, int access, + const struct ib_umem_dmabuf_ops *ops) +{ + struct dma_buf *dmabuf; + struct ib_umem_dmabuf *umem_dmabuf; + struct ib_umem *umem; + unsigned long end; + long ret; + + if (check_add_overflow(addr, size, &end)) + return ERR_PTR(-EINVAL); + + if 
(unlikely(PAGE_ALIGN(end) < PAGE_SIZE)) + return ERR_PTR(-EINVAL); + + if (unlikely(!ops || !ops->invalidate || !ops->update)) + return ERR_PTR(-EINVAL); + + umem_dmabuf = kzalloc(sizeof(*umem_dmabuf), GFP_KERNEL); + if (!umem_dmabuf) + return ERR_PTR(-ENOMEM); + + umem_dmabuf->ops = ops; + INIT_WORK(&umem_dmabuf->work, ib_umem_dmabuf_work); + + umem = &umem_dmabuf->umem; + umem->ibdev = device; + umem->length = size; + umem->address = addr; + umem->writable = ib_access_writable(access); + umem->is_dmabuf = 1; + + dmabuf = dma_buf_get(dmabuf_fd); + if (IS_ERR(dmabuf)) { + ret = PTR_ERR(dmabuf); + goto out_free_umem; + } + + umem_dmabuf->attach = dma_buf_dynamic_attach( + dmabuf, + device->dma_device, + &ib_umem_dmabuf_attach_ops, + umem_dmabuf); + if (IS_ERR(umem_dmabuf->attach)) { + ret = PTR_ERR(umem_dmabuf->attach); + goto out_release_dmabuf; + } + + return umem; + +out_release_dmabuf: + dma_buf_put(dmabuf); + +out_free_umem: + kfree(umem_dmabuf); + return ERR_PTR(ret); +} +EXPORT_SYMBOL(ib_umem_dmabuf_get); + +void ib_umem_dmabuf_release(struct ib_umem *umem) +{ + struct ib_umem_dmabuf *umem_dmabuf = to_ib_umem_dmabuf(umem); + struct dma_buf *dmabuf = umem_dmabuf->attach->dmabuf; + + dma_resv_lock(umem_dmabuf->attach->dmabuf->resv, NULL); + ib_umem_dmabuf_unmap_pages(umem, false); + dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv); + + dma_buf_detach(dmabuf, umem_dmabuf->attach); + dma_buf_put(dmabuf); + kfree(umem_dmabuf); +} diff --git a/drivers/infiniband/core/umem_dmabuf.h b/drivers/infiniband/core/umem_dmabuf.h new file mode 100644 index 0000000..485f653 --- /dev/null +++ b/drivers/infiniband/core/umem_dmabuf.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */ +/* + * Copyright (c) 2020 Intel Corporation. All rights reserved. + */ + +#ifndef UMEM_DMABUF_H +#define UMEM_DMABUF_H + +void ib_umem_dmabuf_release(struct ib_umem *umem); + +#endif /* UMEM_DMABUF_H */ diff --git a/include/rdma/ib_umem.h b/include/rdma/ib_umem.h index 7059750..fac8553 100644 --- a/include/rdma/ib_umem.h +++ b/include/rdma/ib_umem.h @@ -1,6 +1,7 @@ /* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ /* * Copyright (c) 2007 Cisco Systems. All rights reserved. + * Copyright (c) 2020 Intel Corporation. All rights reserved. */ #ifndef IB_UMEM_H @@ -22,12 +23,19 @@ struct ib_umem { unsigned long address; u32 writable : 1; u32 is_odp : 1; + u32 is_dmabuf : 1; struct work_struct work; struct sg_table sg_head; int nmap; unsigned int sg_nents; }; +struct ib_umem_dmabuf_ops { + int (*init)(struct ib_umem *umem, void *context); + int (*update)(struct ib_umem *umem, void *context); + int (*invalidate)(struct ib_umem *umem, void *context); +}; + /* Returns the offset of the umem start relative to the first page. 
*/ static inline int ib_umem_offset(struct ib_umem *umem) { @@ -79,6 +87,12 @@ int ib_umem_copy_from(void *dst, struct ib_umem *umem, size_t offset, unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem, unsigned long pgsz_bitmap, unsigned long virt); +struct ib_umem *ib_umem_dmabuf_get(struct ib_device *device, + unsigned long addr, size_t size, + int dmabuf_fd, int access, + const struct ib_umem_dmabuf_ops *ops); +int ib_umem_dmabuf_init_mapping(struct ib_umem *umem, void *device_context); +bool ib_umem_dmabuf_mapping_ready(struct ib_umem *umem); #else /* CONFIG_INFINIBAND_USER_MEM */ @@ -101,7 +115,23 @@ static inline unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem, { return 0; } +static inline struct ib_umem *ib_umem_dmabuf_get(struct ib_device *device, + unsigned long addr, + size_t size, int dmabuf_fd, + int access, + struct ib_umem_dmabuf_ops *ops) +{ + return ERR_PTR(-EINVAL); +} +static inline int ib_umem_dmabuf_init_mapping(struct ib_umem *umem, + void *device_context) +{ + return -EINVAL; +} +static inline bool ib_umem_dmabuf_mapping_ready(struct ib_umem *umem) +{ + return false; +} #endif /* CONFIG_INFINIBAND_USER_MEM */ - #endif /* IB_UMEM_H */
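To make the intended call flow concrete, here is a minimal sketch of how a provider driver could consume the helpers declared above. All mydrv_* names and struct mydrv_mr are hypothetical placeholders; patch 4 of this series shows the real usage in the mlx5 driver.

/*
 * Hypothetical provider-side usage of the new core helpers.
 * The three callbacks run with the dma-buf reservation lock held.
 */
static const struct ib_umem_dmabuf_ops mydrv_dmabuf_ops = {
	.init		= mydrv_xlt_init,	/* program the NIC page table after the first mapping */
	.update		= mydrv_xlt_update,	/* re-program it after a move_notify remapping */
	.invalidate	= mydrv_xlt_zap,	/* fence DMA before the sg list is unmapped */
};

static struct ib_mr *mydrv_reg_user_mr_dmabuf(struct ib_pd *pd, u64 start,
					      u64 length, u64 virt_addr,
					      int dmabuf_fd, int access_flags,
					      struct ib_udata *udata)
{
	struct ib_umem *umem;
	struct mydrv_mr *mr;
	int err;

	/* Attaches to the dma-buf as a dynamic importer; nothing is mapped yet. */
	umem = ib_umem_dmabuf_get(pd->device, start, length, dmabuf_fd,
				  access_flags, &mydrv_dmabuf_ops);
	if (IS_ERR(umem))
		return ERR_CAST(umem);

	mr = mydrv_alloc_mr(pd, umem);	/* device-specific MR object */
	if (IS_ERR(mr)) {
		ib_umem_release(umem);	/* detaches and drops the dma-buf reference */
		return ERR_CAST(mr);
	}

	/* Maps the attachment and calls ops->init() under the reservation lock. */
	err = ib_umem_dmabuf_init_mapping(umem, mr);
	if (err) {
		mydrv_free_mr(mr);
		ib_umem_release(umem);
		return ERR_PTR(err);
	}
	return &mr->ibmr;
}

Deferred paths such as page-fault handlers can then poll ib_umem_dmabuf_mapping_ready() and back off while a move_notify-triggered remapping is still in flight, which is exactly what the mlx5 page-fault code in patch 4 does.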
From patchwork Wed Oct 14 16:15:33 2020
X-Patchwork-Submitter: "Xiong, Jianxin"
X-Patchwork-Id: 11838011
From: Jianxin Xiong
To: linux-rdma@vger.kernel.org, dri-devel@lists.freedesktop.org
Cc: Jianxin Xiong, Doug Ledford, Jason Gunthorpe, Leon Romanovsky, Sumit Semwal, Christian Koenig, Daniel Vetter
Subject: [PATCH v4 2/5] RDMA/core: Add device method for registering dma-buf based memory region
Date: Wed, 14 Oct 2020 09:15:33 -0700
Message-Id: <1602692133-106979-1-git-send-email-jianxin.xiong@intel.com>

A dma-buf based memory region requires one extra parameter and is processed quite differently. Adding a separate method allows clean separation from regular memory regions.

Signed-off-by: Jianxin Xiong Reviewed-by: Sean Hefty Acked-by: Michael J. Ruhl Acked-by: Christian Koenig --- drivers/infiniband/core/device.c | 1 + include/rdma/ib_verbs.h | 6 +++++- 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c index feaec8d..d6cd0ac 100644 --- a/drivers/infiniband/core/device.c +++ b/drivers/infiniband/core/device.c @@ -2653,6 +2653,7 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops) SET_DEVICE_OP(dev_ops, read_counters); SET_DEVICE_OP(dev_ops, reg_dm_mr); SET_DEVICE_OP(dev_ops, reg_user_mr); + SET_DEVICE_OP(dev_ops, reg_user_mr_dmabuf); SET_DEVICE_OP(dev_ops, req_ncomp_notif); SET_DEVICE_OP(dev_ops, req_notify_cq); SET_DEVICE_OP(dev_ops, rereg_user_mr); diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 9bf6c31..48bab74 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -2,7 +2,7 @@ /* * Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved. * Copyright (c) 2004 Infinicon Corporation. All rights reserved. - * Copyright (c) 2004 Intel Corporation. All rights reserved. + * Copyright (c) 2004, 2020 Intel Corporation. All rights reserved. * Copyright (c) 2004 Topspin Corporation. All rights reserved. * Copyright (c) 2004 Voltaire Corporation. All rights reserved. * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved. @@ -2429,6 +2429,10 @@ struct ib_device_ops { struct ib_mr *(*reg_user_mr)(struct ib_pd *pd, u64 start, u64 length, u64 virt_addr, int mr_access_flags, struct ib_udata *udata); + struct ib_mr *(*reg_user_mr_dmabuf)(struct ib_pd *pd, u64 start, + u64 length, u64 virt_addr, int dmabuf_fd, + int mr_access_flags, + struct ib_udata *udata); int (*rereg_user_mr)(struct ib_mr *mr, int flags, u64 start, u64 length, u64 virt_addr, int mr_access_flags, struct ib_pd *pd, struct ib_udata *udata);
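On the provider side, opting in is just one new entry in the driver's ib_device_ops table. A minimal sketch with hypothetical mydrv_* handlers (patch 4 adds the real entry for mlx5):

static const struct ib_device_ops mydrv_dev_ops = {
	/* ... existing methods elided ... */
	.reg_user_mr		= mydrv_reg_user_mr,
	.reg_user_mr_dmabuf	= mydrv_reg_user_mr_dmabuf,
};

Drivers that leave .reg_user_mr_dmabuf unset are unaffected; the uverbs handler added in the next patch returns -EOPNOTSUPP when the callback is missing.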
From patchwork Wed Oct 14 16:15:40 2020
X-Patchwork-Submitter: "Xiong, Jianxin"
X-Patchwork-Id: 11838009
From: Jianxin Xiong
To: linux-rdma@vger.kernel.org, dri-devel@lists.freedesktop.org
Cc: Jianxin Xiong, Doug Ledford, Jason Gunthorpe, Leon Romanovsky, Sumit Semwal, Christian Koenig, Daniel Vetter
Subject: [PATCH v4 3/5] RDMA/uverbs: Add uverbs command for dma-buf based MR registration
Date: Wed, 14 Oct 2020 09:15:40 -0700
Message-Id: <1602692140-107018-1-git-send-email-jianxin.xiong@intel.com>

Implement a new uverbs ioctl method for memory registration with a file descriptor as an extra parameter.

Signed-off-by: Jianxin Xiong Reviewed-by: Sean Hefty Acked-by: Michael J. Ruhl Acked-by: Christian Koenig --- drivers/infiniband/core/uverbs_std_types_mr.c | 112 ++++++++++++++++++++++++++ include/uapi/rdma/ib_user_ioctl_cmds.h | 14 ++++ 2 files changed, 126 insertions(+) diff --git a/drivers/infiniband/core/uverbs_std_types_mr.c b/drivers/infiniband/core/uverbs_std_types_mr.c index 9b22bb5..e54459f 100644 --- a/drivers/infiniband/core/uverbs_std_types_mr.c +++ b/drivers/infiniband/core/uverbs_std_types_mr.c @@ -1,5 +1,6 @@ /* * Copyright (c) 2018, Mellanox Technologies inc. All rights reserved. + * Copyright (c) 2020, Intel Corporation. All rights reserved. * * This software is available to you under a choice of one of two * licenses. You may choose to be licensed under the terms of the GNU @@ -178,6 +179,85 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QUERY_MR)( return IS_UVERBS_COPY_ERR(ret) ? ret : 0; } +static int UVERBS_HANDLER(UVERBS_METHOD_REG_DMABUF_MR)( + struct uverbs_attr_bundle *attrs) +{ + struct ib_uobject *uobj = + uverbs_attr_get_uobject(attrs, UVERBS_ATTR_REG_DMABUF_MR_HANDLE); + struct ib_pd *pd = + uverbs_attr_get_obj(attrs, UVERBS_ATTR_REG_DMABUF_MR_PD_HANDLE); + struct ib_device *ib_dev = pd->device; + + u64 start, length, virt_addr; + u32 fd, access_flags; + struct ib_mr *mr; + int ret; + + if (!ib_dev->ops.reg_user_mr_dmabuf) + return -EOPNOTSUPP; + + ret = uverbs_copy_from(&start, attrs, + UVERBS_ATTR_REG_DMABUF_MR_ADDR); + if (ret) + return ret; + + ret = uverbs_copy_from(&length, attrs, + UVERBS_ATTR_REG_DMABUF_MR_LENGTH); + if (ret) + return ret; + + ret = uverbs_copy_from(&virt_addr, attrs, + UVERBS_ATTR_REG_DMABUF_MR_HCA_VA); + if (ret) + return ret; + + ret = uverbs_copy_from(&fd, attrs, + UVERBS_ATTR_REG_DMABUF_MR_FD); + if (ret) + return ret; + + ret = uverbs_get_flags32(&access_flags, attrs, + UVERBS_ATTR_REG_DMABUF_MR_ACCESS_FLAGS, + IB_ACCESS_SUPPORTED); + if (ret) + return ret; + + ret = ib_check_mr_access(access_flags); + if (ret) + return ret; + + mr = pd->device->ops.reg_user_mr_dmabuf(pd, start, length, virt_addr, + fd, access_flags, + &attrs->driver_udata); + if (IS_ERR(mr)) + return PTR_ERR(mr); + + mr->device = pd->device; + mr->pd = pd; + mr->type = IB_MR_TYPE_USER; + mr->uobject = uobj; + atomic_inc(&pd->usecnt); + + uobj->object = mr; + + ret = uverbs_copy_to(attrs, UVERBS_ATTR_REG_DMABUF_MR_RESP_LKEY, + &mr->lkey, sizeof(mr->lkey)); + if (ret) + goto err_dereg; + + ret = uverbs_copy_to(attrs, UVERBS_ATTR_REG_DMABUF_MR_RESP_RKEY, + &mr->rkey, sizeof(mr->rkey)); + if (ret) + goto err_dereg; + + return 0; + +err_dereg: + ib_dereg_mr_user(mr, uverbs_get_cleared_udata(attrs)); + + return ret; +} + DECLARE_UVERBS_NAMED_METHOD( UVERBS_METHOD_ADVISE_MR, UVERBS_ATTR_IDR(UVERBS_ATTR_ADVISE_MR_PD_HANDLE, @@ -243,6
+323,37 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QUERY_MR)( UVERBS_ATTR_TYPE(u32), UA_MANDATORY)); +DECLARE_UVERBS_NAMED_METHOD( + UVERBS_METHOD_REG_DMABUF_MR, + UVERBS_ATTR_IDR(UVERBS_ATTR_REG_DMABUF_MR_HANDLE, + UVERBS_OBJECT_MR, + UVERBS_ACCESS_NEW, + UA_MANDATORY), + UVERBS_ATTR_IDR(UVERBS_ATTR_REG_DMABUF_MR_PD_HANDLE, + UVERBS_OBJECT_PD, + UVERBS_ACCESS_READ, + UA_MANDATORY), + UVERBS_ATTR_PTR_IN(UVERBS_ATTR_REG_DMABUF_MR_ADDR, + UVERBS_ATTR_TYPE(u64), + UA_MANDATORY), + UVERBS_ATTR_PTR_IN(UVERBS_ATTR_REG_DMABUF_MR_LENGTH, + UVERBS_ATTR_TYPE(u64), + UA_MANDATORY), + UVERBS_ATTR_PTR_IN(UVERBS_ATTR_REG_DMABUF_MR_HCA_VA, + UVERBS_ATTR_TYPE(u64), + UA_MANDATORY), + UVERBS_ATTR_PTR_IN(UVERBS_ATTR_REG_DMABUF_MR_FD, + UVERBS_ATTR_TYPE(u32), + UA_MANDATORY), + UVERBS_ATTR_FLAGS_IN(UVERBS_ATTR_REG_DMABUF_MR_ACCESS_FLAGS, + enum ib_access_flags), + UVERBS_ATTR_PTR_OUT(UVERBS_ATTR_REG_DMABUF_MR_RESP_LKEY, + UVERBS_ATTR_TYPE(u32), + UA_MANDATORY), + UVERBS_ATTR_PTR_OUT(UVERBS_ATTR_REG_DMABUF_MR_RESP_RKEY, + UVERBS_ATTR_TYPE(u32), + UA_MANDATORY)); + DECLARE_UVERBS_NAMED_METHOD_DESTROY( UVERBS_METHOD_MR_DESTROY, UVERBS_ATTR_IDR(UVERBS_ATTR_DESTROY_MR_HANDLE, @@ -253,6 +364,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QUERY_MR)( DECLARE_UVERBS_NAMED_OBJECT( UVERBS_OBJECT_MR, UVERBS_TYPE_ALLOC_IDR(uverbs_free_mr), + &UVERBS_METHOD(UVERBS_METHOD_REG_DMABUF_MR), &UVERBS_METHOD(UVERBS_METHOD_DM_MR_REG), &UVERBS_METHOD(UVERBS_METHOD_MR_DESTROY), &UVERBS_METHOD(UVERBS_METHOD_ADVISE_MR), diff --git a/include/uapi/rdma/ib_user_ioctl_cmds.h b/include/uapi/rdma/ib_user_ioctl_cmds.h index 7968a18..7e07f52 100644 --- a/include/uapi/rdma/ib_user_ioctl_cmds.h +++ b/include/uapi/rdma/ib_user_ioctl_cmds.h @@ -1,5 +1,6 @@ /* * Copyright (c) 2018, Mellanox Technologies inc. All rights reserved. + * Copyright (c) 2020, Intel Corporation. All rights reserved. * * This software is available to you under a choice of one of two * licenses. 
You may choose to be licensed under the terms of the GNU @@ -251,6 +252,7 @@ enum uverbs_methods_mr { UVERBS_METHOD_MR_DESTROY, UVERBS_METHOD_ADVISE_MR, UVERBS_METHOD_QUERY_MR, + UVERBS_METHOD_REG_DMABUF_MR, }; enum uverbs_attrs_mr_destroy_ids { @@ -272,6 +274,18 @@ enum uverbs_attrs_query_mr_cmd_attr_ids { UVERBS_ATTR_QUERY_MR_RESP_IOVA, }; +enum uverbs_attrs_reg_dmabuf_mr_cmd_attr_ids { + UVERBS_ATTR_REG_DMABUF_MR_HANDLE, + UVERBS_ATTR_REG_DMABUF_MR_PD_HANDLE, + UVERBS_ATTR_REG_DMABUF_MR_ADDR, + UVERBS_ATTR_REG_DMABUF_MR_LENGTH, + UVERBS_ATTR_REG_DMABUF_MR_HCA_VA, + UVERBS_ATTR_REG_DMABUF_MR_FD, + UVERBS_ATTR_REG_DMABUF_MR_ACCESS_FLAGS, + UVERBS_ATTR_REG_DMABUF_MR_RESP_LKEY, + UVERBS_ATTR_REG_DMABUF_MR_RESP_RKEY, +}; + enum uverbs_attrs_create_counters_cmd_attr_ids { UVERBS_ATTR_CREATE_COUNTERS_HANDLE, };

From patchwork Wed Oct 14 16:15:47 2020
X-Patchwork-Submitter: "Xiong, Jianxin"
X-Patchwork-Id: 11838013
From: Jianxin Xiong
To: linux-rdma@vger.kernel.org, dri-devel@lists.freedesktop.org
Cc: Jianxin Xiong, Doug Ledford, Jason Gunthorpe, Leon Romanovsky, Sumit Semwal, Christian Koenig, Daniel Vetter
Subject: [PATCH v4 4/5] RDMA/mlx5: Support dma-buf based userspace memory region
Date: Wed, 14 Oct 2020 09:15:47 -0700
Message-Id: <1602692147-107057-1-git-send-email-jianxin.xiong@intel.com>

Implement the new driver method 'reg_user_mr_dmabuf'. Utilize the core functions to import dma-buf based memory regions and update the mappings. Add code to handle dma-buf related page faults.

Signed-off-by: Jianxin Xiong Reviewed-by: Sean Hefty Acked-by: Michael J.
Ruhl Acked-by: Christian Koenig --- drivers/infiniband/hw/mlx5/main.c | 2 + drivers/infiniband/hw/mlx5/mlx5_ib.h | 5 ++ drivers/infiniband/hw/mlx5/mr.c | 119 +++++++++++++++++++++++++++++++++++ drivers/infiniband/hw/mlx5/odp.c | 42 +++++++++++++ 4 files changed, 168 insertions(+) diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c index 89e04ca..ec4ad2f 100644 --- a/drivers/infiniband/hw/mlx5/main.c +++ b/drivers/infiniband/hw/mlx5/main.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB /* * Copyright (c) 2013-2020, Mellanox Technologies inc. All rights reserved. + * Copyright (c) 2020, Intel Corporation. All rights reserved. */ #include @@ -4060,6 +4061,7 @@ static int mlx5_ib_enable_driver(struct ib_device *dev) .query_srq = mlx5_ib_query_srq, .query_ucontext = mlx5_ib_query_ucontext, .reg_user_mr = mlx5_ib_reg_user_mr, + .reg_user_mr_dmabuf = mlx5_ib_reg_user_mr_dmabuf, .req_notify_cq = mlx5_ib_arm_cq, .rereg_user_mr = mlx5_ib_rereg_user_mr, .resize_cq = mlx5_ib_resize_cq, diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index b1f2b34..65fcc18 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -1,6 +1,7 @@ /* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ /* * Copyright (c) 2013-2020, Mellanox Technologies inc. All rights reserved. + * Copyright (c) 2020, Intel Corporation. All rights reserved. */ #ifndef MLX5_IB_H @@ -1174,6 +1175,10 @@ int mlx5_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr, struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, u64 virt_addr, int access_flags, struct ib_udata *udata); +struct ib_mr *mlx5_ib_reg_user_mr_dmabuf(struct ib_pd *pd, u64 start, + u64 length, u64 virt_addr, + int dmabuf_fd, int access_flags, + struct ib_udata *udata); int mlx5_ib_advise_mr(struct ib_pd *pd, enum ib_uverbs_advise_mr_advice advice, u32 flags, diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index b261797..24750f1 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -1,5 +1,6 @@ /* * Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved. + * Copyright (c) 2020, Intel Corporation. All rights reserved. * * This software is available to you under a choice of one of two * licenses. 
You may choose to be licensed under the terms of the GNU @@ -1462,6 +1463,124 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, return ERR_PTR(err); } +static int mlx5_ib_umem_dmabuf_xlt_init(struct ib_umem *umem, void *context) +{ + struct mlx5_ib_mr *mr = context; + int flags = MLX5_IB_UPD_XLT_ENABLE; + + if (!mr) + return -EINVAL; + + return mlx5_ib_update_xlt(mr, 0, mr->npages, PAGE_SHIFT, flags); +} + +static int mlx5_ib_umem_dmabuf_xlt_update(struct ib_umem *umem, void *context) +{ + struct mlx5_ib_mr *mr = context; + int flags = MLX5_IB_UPD_XLT_ATOMIC; + + if (!mr) + return -EINVAL; + + return mlx5_ib_update_xlt(mr, 0, mr->npages, PAGE_SHIFT, flags); +} + +static int mlx5_ib_umem_dmabuf_xlt_invalidate(struct ib_umem *umem, void *context) +{ + struct mlx5_ib_mr *mr = context; + int flags = MLX5_IB_UPD_XLT_ZAP | MLX5_IB_UPD_XLT_ATOMIC; + + if (!mr) + return -EINVAL; + + return mlx5_ib_update_xlt(mr, 0, mr->npages, PAGE_SHIFT, flags); +} + +static struct ib_umem_dmabuf_ops mlx5_ib_umem_dmabuf_ops = { + .init = mlx5_ib_umem_dmabuf_xlt_init, + .update = mlx5_ib_umem_dmabuf_xlt_update, + .invalidate = mlx5_ib_umem_dmabuf_xlt_invalidate, +}; + +struct ib_mr *mlx5_ib_reg_user_mr_dmabuf(struct ib_pd *pd, u64 start, + u64 length, u64 virt_addr, + int dmabuf_fd, int access_flags, + struct ib_udata *udata) +{ + struct mlx5_ib_dev *dev = to_mdev(pd->device); + struct mlx5_ib_mr *mr = NULL; + struct ib_umem *umem; + int page_shift; + int npages; + int ncont; + int order; + int err; + + if (!IS_ENABLED(CONFIG_INFINIBAND_USER_MEM)) + return ERR_PTR(-EOPNOTSUPP); + + mlx5_ib_dbg(dev, + "start 0x%llx, virt_addr 0x%llx, length 0x%llx, fd %d, access_flags 0x%x\n", + start, virt_addr, length, dmabuf_fd, access_flags); + + if (!mlx5_ib_can_load_pas_with_umr(dev, length)) + return ERR_PTR(-EINVAL); + + umem = ib_umem_dmabuf_get(&dev->ib_dev, start, length, dmabuf_fd, + access_flags, &mlx5_ib_umem_dmabuf_ops); + if (IS_ERR(umem)) { + mlx5_ib_dbg(dev, "umem get failed (%ld)\n", PTR_ERR(umem)); + return ERR_PTR(PTR_ERR(umem)); + } + + npages = ib_umem_num_pages(umem); + if (!npages) { + mlx5_ib_warn(dev, "avoid zero region\n"); + ib_umem_release(umem); + return ERR_PTR(-EINVAL); + } + + page_shift = PAGE_SHIFT; + ncont = npages; + order = ilog2(roundup_pow_of_two(ncont)); + + mlx5_ib_dbg(dev, "npages %d, ncont %d, order %d, page_shift %d\n", + npages, ncont, order, page_shift); + + mr = alloc_mr_from_cache(pd, umem, virt_addr, length, ncont, + page_shift, order, access_flags); + if (IS_ERR(mr)) + mr = NULL; + + if (!mr) { + mutex_lock(&dev->slow_path_mutex); + mr = reg_create(NULL, pd, virt_addr, length, umem, ncont, + page_shift, access_flags, false); + mutex_unlock(&dev->slow_path_mutex); + } + + if (IS_ERR(mr)) { + err = PTR_ERR(mr); + goto error; + } + + mlx5_ib_dbg(dev, "mkey 0x%x\n", mr->mmkey.key); + + mr->umem = umem; + set_mr_fields(dev, mr, npages, length, access_flags); + + err = ib_umem_dmabuf_init_mapping(umem, mr); + if (err) { + dereg_mr(dev, mr); + return ERR_PTR(err); + } + + return &mr->ibmr; +error: + ib_umem_release(umem); + return ERR_PTR(err); +} + /** * mlx5_mr_cache_invalidate - Fence all DMA on the MR * @mr: The MR to fence diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c index 5c853ec..16e2e51 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -801,6 +801,44 @@ static int pagefault_implicit_mr(struct mlx5_ib_mr *imr, * Returns: * -EFAULT: The io_virt->bcnt is not within the MR, it covers 
pages that are * not accessible, or the MR is no longer valid. + * -EAGAIN: The operation should be retried + * + * >0: Number of pages mapped + */ +static int pagefault_dmabuf_mr(struct mlx5_ib_mr *mr, struct ib_umem *umem, + u64 io_virt, size_t bcnt, u32 *bytes_mapped, + u32 flags) +{ + u64 user_va; + u64 end; + int npages; + + if (unlikely(io_virt < mr->mmkey.iova)) + return -EFAULT; + if (check_add_overflow(io_virt - mr->mmkey.iova, + (u64)umem->address, &user_va)) + return -EFAULT; + + /* Overflow has already been checked at the umem creation time */ + end = umem->address + umem->length; + if (unlikely(user_va >= end || end - user_va < bcnt)) + return -EFAULT; + + if (!ib_umem_dmabuf_mapping_ready(umem)) + return -EAGAIN; + + if (bytes_mapped) + *bytes_mapped += bcnt; + + npages = (ALIGN(user_va + bcnt, PAGE_SIZE) - + ALIGN_DOWN(user_va, PAGE_SIZE)) >> PAGE_SHIFT; + return npages; +} + +/* + * Returns: + * -EFAULT: The io_virt->bcnt is not within the MR, it covers pages that are + * not accessible, or the MR is no longer valid. + * -EAGAIN/-ENOMEM: The operation should be retried + * + * -EINVAL/others: General internal malfunction @@ -811,6 +849,10 @@ static int pagefault_mr(struct mlx5_ib_mr *mr, u64 io_virt, size_t bcnt, { struct ib_umem_odp *odp = to_ib_umem_odp(mr->umem); + if (mr->umem->is_dmabuf) + return pagefault_dmabuf_mr(mr, mr->umem, io_virt, bcnt, + bytes_mapped, flags); + lockdep_assert_held(&mr->dev->odp_srcu); if (unlikely(io_virt < mr->mmkey.iova)) return -EFAULT;
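The mlx5 code above programs its translation table with a fixed PAGE_SHIFT, which is only safe because the sg list handed back by dma_buf_map_attachment() is page aligned — exactly the guarantee the next patch writes down. A minimal, illustrative helper (not part of the series; fill_page_array is a made-up name, and the caller is assumed to size pas[] to ib_umem_num_pages() entries) shows how an importer can rely on that guarantee:

static void fill_page_array(struct sg_table *sgt, dma_addr_t *pas)
{
	struct scatterlist *sg;
	unsigned int i, n = 0;

	for_each_sgtable_dma_sg(sgt, sg, i) {
		dma_addr_t addr = sg_dma_address(sg);	/* PAGE_SIZE aligned */
		unsigned int len = sg_dma_len(sg);	/* multiple of PAGE_SIZE */

		while (len) {
			pas[n++] = addr;	/* one entry per page */
			addr += PAGE_SIZE;
			len -= PAGE_SIZE;
		}
	}
}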
From patchwork Wed Oct 14 16:16:01 2020
X-Patchwork-Submitter: "Xiong, Jianxin"
X-Patchwork-Id: 11838021
From: Jianxin Xiong
To: linux-rdma@vger.kernel.org, dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Cc: Jianxin Xiong, Doug Ledford, Jason Gunthorpe, Leon Romanovsky, Sumit Semwal, Christian Koenig, Daniel Vetter
Subject: [PATCH v4 5/5] dma-buf: Clarify that dma-buf sg lists are page aligned
Date: Wed, 14 Oct 2020 09:16:01 -0700
Message-Id: <1602692161-107096-1-git-send-email-jianxin.xiong@intel.com>

The dma-buf API has been used under the assumption that the sg lists returned from dma_buf_map_attachment() are fully page aligned. Lots of stuff can break otherwise all over the place. Clarify this in the documentation and add a check when DMA API debug is enabled.

Signed-off-by: Jianxin Xiong Reviewed-by: Christian König --- drivers/dma-buf/dma-buf.c | 21 +++++++++++++++++++++ include/linux/dma-buf.h | 3 ++- 2 files changed, 23 insertions(+), 1 deletion(-) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 844967f..7309c83 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -851,6 +851,9 @@ void dma_buf_unpin(struct dma_buf_attachment *attach) * Returns sg_table containing the scatterlist to be returned; returns ERR_PTR * on error. May return -EINTR if it is interrupted by a signal. * + * On success, the DMA addresses and lengths in the returned scatterlist are + * PAGE_SIZE aligned. + * * A mapping must be unmapped by using dma_buf_unmap_attachment(). Note that * the underlying backing storage is pinned for as long as a mapping exists, * therefore users/importers should not hold onto a mapping for undue amounts of @@ -904,6 +907,24 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, attach->dir = direction; } +#ifdef CONFIG_DMA_API_DEBUG + { + struct scatterlist *sg; + u64 addr; + int len; + int i; + + for_each_sgtable_dma_sg(sg_table, sg, i) { + addr = sg_dma_address(sg); + len = sg_dma_len(sg); + if (!PAGE_ALIGNED(addr) || !PAGE_ALIGNED(len)) { + pr_debug("%s: addr %llx or len %x is not page aligned!\n", + __func__, addr, len); + } + } + } +#endif /* CONFIG_DMA_API_DEBUG */ + return sg_table; } EXPORT_SYMBOL_GPL(dma_buf_map_attachment); diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index a2ca294e..4a5fa70 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -145,7 +145,8 @@ struct dma_buf_ops { * * A &sg_table scatter list of or the backing storage of the DMA buffer, * already mapped into the device address space of the &device attached - * with the provided &dma_buf_attachment. + * with the provided &dma_buf_attachment. The addresses and lengths in + * the scatter list are PAGE_SIZE aligned. * * On failure, returns a negative error value wrapped into a pointer. * May also return -EINTR when a signal was received while being