
[RFC,3/5] IB/core: P2P DMA for device private pages

Message ID 20241201103659.420677-4-ymaman@nvidia.com (mailing list archive)
State New
Series GPU Direct RDMA (P2P DMA) for Device Private Pages

Commit Message

Yonatan Maman Dec. 1, 2024, 10:36 a.m. UTC
From: Yonatan Maman <Ymaman@Nvidia.com>

Request Peer-to-Peer (P2P) DMA mappings in the hmm_range_fault call,
utilizing the capabilities introduced in mm/hmm. When the caller passes
HMM_PFN_ALLOW_P2P in access_mask, OR it into range.default_flags
(alongside HMM_PFN_REQ_FAULT), so that hmm_range_fault attempts to set
up P2P DMA mappings for device private pages instead of faulting them
back to system memory.

This uses P2P DMA to avoid migrating data between the device (e.g., a
GPU) and system memory, which benefits GPU-centric applications that
use RDMA on device private pages.
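For illustration, a minimal sketch (not part of the patch) of the
hmm_range_fault calling pattern this enables. HMM_PFN_ALLOW_P2P is the
flag added by the mm/hmm patch in this series; the helper name is
hypothetical, and the notifier-sequence retry loop, locking, and error
handling that real callers need are elided:

static int fault_range_allow_p2p(struct ib_umem_odp *odp, u64 start,
				 u64 length, unsigned long *pfns)
{
	struct hmm_range range = {
		.notifier	= &odp->notifier,
		.start		= start,
		.end		= start + length,
		.hmm_pfns	= pfns,
		/* fault pages in; allow P2P for device private pages */
		.default_flags	= HMM_PFN_REQ_FAULT | HMM_PFN_ALLOW_P2P,
		.pfn_flags_mask	= HMM_PFN_ALLOW_P2P,
	};

	/*
	 * Real callers take the mmap read lock, and retry on -EBUSY
	 * when the notifier sequence changes underneath them.
	 */
	range.notifier_seq = mmu_interval_read_begin(&odp->notifier);
	return hmm_range_fault(&range);
}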

Signed-off-by: Yonatan Maman <Ymaman@Nvidia.com>
Signed-off-by: Gal Shalom <GalShalom@Nvidia.com>
---
 drivers/infiniband/core/umem_odp.c | 4 ++++
 1 file changed, 4 insertions(+)

Patch

diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 51d518989914..4c2465b9bdda 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -332,6 +332,10 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt,
 			range.default_flags |= HMM_PFN_REQ_WRITE;
 	}
 
+	if (access_mask & HMM_PFN_ALLOW_P2P)
+		range.default_flags |= HMM_PFN_ALLOW_P2P;
+
+	range.pfn_flags_mask = HMM_PFN_ALLOW_P2P;
 	range.hmm_pfns = &(umem_odp->map.pfn_list[pfn_start_idx]);
 	timeout = jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
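On the caller side, a hypothetical sketch of how an ODP page-fault
handler could opt in. ib_umem_odp_map_dma_and_lock() and the HMM flag
names come from the tree and this series; write_fault, allow_p2p, and
the surrounding variables are illustrative only:

	/*
	 * Sketch: request P2P mappings by setting HMM_PFN_ALLOW_P2P in
	 * access_mask; allow_p2p would typically be derived from device
	 * capabilities.
	 */
	u64 access_mask = 0;

	if (write_fault)
		access_mask |= HMM_PFN_WRITE;
	if (allow_p2p)
		access_mask |= HMM_PFN_ALLOW_P2P;

	npages = ib_umem_odp_map_dma_and_lock(umem_odp, user_virt, bcnt,
					      access_mask, fault);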