
[RFC,12/12] RDMA/rdmavt: Adapt to handle non-uniform sizes on umem SGEs

Message ID 20190126165913.18272-13-shiraz.saleem@intel.com (mailing list archive)
State Superseded
Series Adapt drivers to handle page combining on umem SGEs

Commit Message

Shiraz Saleem Jan. 26, 2019, 4:59 p.m. UTC
From: "Shiraz, Saleem" <shiraz.saleem@intel.com>

rdmavt expects a uniform size on all umem SGEs, which is
currently PAGE_SIZE.

Adapt to a umem API change that can return non-uniformly
sized SGEs when contiguous PAGE_SIZE regions are combined
into a single SGE. Unfold the larger SGEs into a list of
PAGE_SIZE elements.

Additionally, purge umem->page_shift usage in the driver,
as it is only relevant for ODP MRs. Use the system page size
and shift instead.

Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Shiraz, Saleem <shiraz.saleem@intel.com>
---
 drivers/infiniband/sw/rdmavt/mr.c | 34 +++++++++++++++++++---------------
 1 file changed, 19 insertions(+), 15 deletions(-)
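
Stated plainly, the new loop walks each umem SGE, derives how many
PAGE_SIZE pages it spans from its DMA length, and emits one rvt segment
per page. A minimal sketch of that approach (mirroring the hunk below;
the helper name and bare error return are illustrative, not the driver's
actual structure):

static int rvt_unfold_umem_sges(struct ib_umem *umem, struct rvt_mr *mr)
{
	struct scatterlist *sg;
	int entry, m = 0, n = 0;

	for_each_sg(umem->sg_head.sgl, sg, umem->nmap, entry) {
		struct page *page = sg_page(sg);
		/* A combined SGE covers a whole number of PAGE_SIZE pages. */
		int i, chunk_pages = sg_dma_len(sg) >> PAGE_SHIFT;

		for (i = 0; i < chunk_pages; i++) {
			void *vaddr = page_address(nth_page(page, i));

			if (!vaddr)
				return -EINVAL;
			mr->mr.map[m]->segs[n].vaddr = vaddr;
			mr->mr.map[m]->segs[n].length = PAGE_SIZE;
			if (++n == RVT_SEGSZ) { /* current map block is full */
				m++;
				n = 0;
			}
		}
	}
	return 0;
}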

Comments

Dennis Dalessandro Jan. 28, 2019, 12:07 p.m. UTC | #1
On 1/26/2019 11:59 AM, Shiraz Saleem wrote:
> From: "Shiraz, Saleem" <shiraz.saleem@intel.com>
> 
> rdmavt expects a uniform size on all umem SGEs, which is
> currently PAGE_SIZE.
> 
> Adapt to a umem API change that can return non-uniformly
> sized SGEs when contiguous PAGE_SIZE regions are combined
> into a single SGE. Unfold the larger SGEs into a list of
> PAGE_SIZE elements.
> 
> Additionally, purge umem->page_shift usage in the driver,
> as it is only relevant for ODP MRs. Use the system page size
> and shift instead.
> 
> Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
> Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
> Signed-off-by: Shiraz, Saleem <shiraz.saleem@intel.com>
> ---

Acked-by: Dennis Dalessandro <dennis.dalessandro@intel.com>

Patch

diff --git a/drivers/infiniband/sw/rdmavt/mr.c b/drivers/infiniband/sw/rdmavt/mr.c
index 8b1c1e8..43a6e71 100644
--- a/drivers/infiniband/sw/rdmavt/mr.c
+++ b/drivers/infiniband/sw/rdmavt/mr.c
@@ -407,25 +407,29 @@  struct ib_mr *rvt_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	mr->mr.access_flags = mr_access_flags;
 	mr->umem = umem;
 
-	mr->mr.page_shift = umem->page_shift;
+	mr->mr.page_shift = PAGE_SHIFT;
 	m = 0;
 	n = 0;
 	for_each_sg(umem->sg_head.sgl, sg, umem->nmap, entry) {
-		void *vaddr;
+		int i, chunk_pages;
+		struct page *page = sg_page(sg);
 
-		vaddr = page_address(sg_page(sg));
-		if (!vaddr) {
-			ret = ERR_PTR(-EINVAL);
-			goto bail_inval;
-		}
-		mr->mr.map[m]->segs[n].vaddr = vaddr;
-		mr->mr.map[m]->segs[n].length = BIT(umem->page_shift);
-		trace_rvt_mr_user_seg(&mr->mr, m, n, vaddr,
-				      BIT(umem->page_shift));
-		n++;
-		if (n == RVT_SEGSZ) {
-			m++;
-			n = 0;
+		chunk_pages = sg_dma_len(sg) >> PAGE_SHIFT;
+		for (i = 0; i < chunk_pages; i++) {
+			void *vaddr;
+
+			vaddr = page_address(nth_page(page, i));
+			if (!vaddr) {
+				ret = ERR_PTR(-EINVAL);
+				goto bail_inval;
+			}
+			mr->mr.map[m]->segs[n].vaddr = vaddr;
+			mr->mr.map[m]->segs[n].length = PAGE_SIZE;
+			trace_rvt_mr_user_seg(&mr->mr, m, n, vaddr, PAGE_SIZE);
+			if (++n == RVT_SEGSZ) {
+				m++;
+				n = 0;
+			}
 		}
 	}
 	return &mr->ibmr;
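
As a concrete illustration (the numbers are assumed for the example, not
taken from the patch): on a system with 4 KiB pages, a single combined SGE
covering a physically contiguous 2 MiB region has sg_dma_len(sg) == 2 MiB,
so chunk_pages = 2 MiB >> PAGE_SHIFT = 512 and the inner loop emits 512
PAGE_SIZE segments, rolling over from mr->mr.map[m] to mr->mr.map[m + 1]
each time n reaches RVT_SEGSZ. The old code assumed exactly one page per
SGE, so on such an entry it would have recorded only the first page.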