
[RFC,06/12] RDMA/cxgb3: Use for_each_sg_dma_page iterator on umem SGL

Message ID 20190126165913.18272-7-shiraz.saleem@intel.com (mailing list archive)
State Superseded
Series Adapt drivers to handle page combining on umem SGEs

Commit Message

Shiraz Saleem Jan. 26, 2019, 4:59 p.m. UTC
From: "Shiraz, Saleem" <shiraz.saleem@intel.com>

Use the for_each_sg_dma_page iterator variant to walk the umem
DMA-mapped SGL and get the page DMA address. This avoids the extra
inner loop needed to iterate over the pages within each SGE when the
for_each_sg iterator is used.

Additionally, purge umem->page_shift usage in the driver,
as it is only relevant for ODP MRs. Use the system page size
and shift instead.

Cc: Steve Wise <swise@chelsio.com>
Signed-off-by: Shiraz, Saleem <shiraz.saleem@intel.com>
---
 drivers/infiniband/hw/cxgb3/iwch_provider.c | 29 ++++++++++++-----------------
 1 file changed, 12 insertions(+), 17 deletions(-)
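
For context, a minimal sketch of the conversion pattern this patch applies
(use_page() is a hypothetical stand-in for the driver's per-page work; the
scatterlist types and helpers are the kernel API this series builds on):

	struct scatterlist *sg;
	struct sg_dma_page_iter sg_iter;
	int entry, k, len;

	/* Old pattern: walk each SGE, then loop over the pages inside it. */
	for_each_sg(umem->sg_head.sgl, sg, umem->nmap, entry) {
		len = sg_dma_len(sg) >> PAGE_SHIFT;
		for (k = 0; k < len; k++)
			use_page(sg_dma_address(sg) + (k << PAGE_SHIFT));
	}

	/* New pattern: the iterator yields one DMA page address per step. */
	for_each_sg_dma_page(umem->sg_head.sgl, &sg_iter, umem->nmap, 0)
		use_page(sg_page_iter_dma_address(&sg_iter));

The new form also explains the shift change: for_each_sg_dma_page always
steps in system PAGE_SIZE units, so the driver no longer needs the per-MR
umem->page_shift and can use PAGE_SHIFT directly.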

Comments

Steve Wise Jan. 26, 2019, 10:12 p.m. UTC | #1
> -----Original Message-----
> From: linux-rdma-owner@vger.kernel.org <linux-rdma-
> owner@vger.kernel.org> On Behalf Of Shiraz Saleem
> Sent: Saturday, January 26, 2019 10:59 AM
> To: dledford@redhat.com; jgg@ziepe.ca; linux-rdma@vger.kernel.org
> Cc: Shiraz, Saleem <shiraz.saleem@intel.com>; Steve Wise
> <swise@chelsio.com>
> Subject: [PATCH RFC 06/12] RDMA/cxgb3: Use for_each_sg_dma_page
> iterator on umem SGL
> 
> From: "Shiraz, Saleem" <shiraz.saleem@intel.com>
> 
> Use the for_each_sg_dma_page iterator variant to walk the umem
> DMA-mapped SGL and get the page DMA address. This avoids the extra
> inner loop needed to iterate over the pages within each SGE when the
> for_each_sg iterator is used.
> 
> Additionally, purge umem->page_shift usage in the driver,
> as it is only relevant for ODP MRs. Use the system page size
> and shift instead.
> 
> Cc: Steve Wise <swise@chelsio.com>
> Signed-off-by: Shiraz, Saleem <shiraz.saleem@intel.com>

Acked-by: Steve Wise <swise@opengridcomputing.com>

Patch

diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c
index 07c20cd..d153395 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c
@@ -522,14 +522,13 @@  static struct ib_mr *iwch_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 				      u64 virt, int acc, struct ib_udata *udata)
 {
 	__be64 *pages;
-	int shift, n, len;
-	int i, k, entry;
+	int shift, n, i;
 	int err = 0;
 	struct iwch_dev *rhp;
 	struct iwch_pd *php;
 	struct iwch_mr *mhp;
 	struct iwch_reg_user_mr_resp uresp;
-	struct scatterlist *sg;
+	struct sg_dma_page_iter sg_iter;
 	pr_debug("%s ib_pd %p\n", __func__, pd);
 
 	php = to_iwch_pd(pd);
@@ -547,7 +546,7 @@  static struct ib_mr *iwch_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 		return ERR_PTR(err);
 	}
 
-	shift = mhp->umem->page_shift;
+	shift = PAGE_SHIFT;
 
 	n = mhp->umem->nmap;
 
@@ -563,19 +562,15 @@  static struct ib_mr *iwch_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 
 	i = n = 0;
 
-	for_each_sg(mhp->umem->sg_head.sgl, sg, mhp->umem->nmap, entry) {
-			len = sg_dma_len(sg) >> shift;
-			for (k = 0; k < len; ++k) {
-				pages[i++] = cpu_to_be64(sg_dma_address(sg) +
-							 (k << shift));
-				if (i == PAGE_SIZE / sizeof *pages) {
-					err = iwch_write_pbl(mhp, pages, i, n);
-					if (err)
-						goto pbl_done;
-					n += i;
-					i = 0;
-				}
-			}
+	for_each_sg_dma_page(mhp->umem->sg_head.sgl, &sg_iter, mhp->umem->nmap, 0) {
+		pages[i++] = cpu_to_be64(sg_page_iter_dma_address(&sg_iter));
+		if (i == PAGE_SIZE / sizeof *pages) {
+			err = iwch_write_pbl(mhp, pages, i, n);
+			if (err)
+				goto pbl_done;
+			n += i;
+			i = 0;
+		}
 	}
 
 	if (i)
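
Usage note: pages[] holds one system page worth of big-endian PBL entries,
so the loop flushes them to hardware via iwch_write_pbl() in batches of
PAGE_SIZE / sizeof(*pages); the trailing if (i) check covers any partial
batch left over once the iterator is exhausted.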