[rdma-next,00/12] Adapt drivers to handle page combining on umem SGEs

Message ID 20190211152508.25040-1-shiraz.saleem@intel.com (mailing list archive)

Message

Shiraz Saleem Feb. 11, 2019, 3:24 p.m. UTC
From: "Saleem, Shiraz" <shiraz.saleem@intel.com>

This patch set serves as a precursor series to updating ib_umem_get
to combine contiguous PAGE_SIZE pages into larger umem SGEs.

Drivers are updated to unfold larger SGEs into PAGE_SIZE elements
when walking the umem DMA-mapped SGL. The for_each_sg_dma_page
variant is used where applicable to iterate the pages of the
SGL and get the page DMA address.

Additionally, umem->page_shift usage is purged from drivers,
as it is only relevant for ODP MRs. The system page size and
shift are used instead.

This series depends on the new scatterlist API for_each_sg_dma_page:
https://www.spinics.net/lists/linux-rdma/msg75195.html

RFC-->v0:
* drop RFC tag.

Shiraz, Saleem (12):
  RDMA/bnxt_re: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/mthca: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/i40iw: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/hns: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/cxgb4: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/cxgb3: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/vmw_pvrdma: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/qedr: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/ocrdma: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/nes: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/rxe: Use for_each_sg_page iterator on umem SGL
  RDMA/rdmavt: Adapt to handle non-uniform sizes on umem SGEs

 drivers/infiniband/hw/bnxt_re/ib_verbs.c       |  23 ++-
 drivers/infiniband/hw/bnxt_re/qplib_res.c      |   9 +-
 drivers/infiniband/hw/cxgb3/iwch_provider.c    |  29 ++--
 drivers/infiniband/hw/cxgb4/mem.c              |  33 ++--
 drivers/infiniband/hw/hns/hns_roce_hw_v1.c     |   7 +-
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c     |  25 ++-
 drivers/infiniband/hw/hns/hns_roce_mr.c        |  88 +++++------
 drivers/infiniband/hw/hns/hns_roce_qp.c        |   5 +-
 drivers/infiniband/hw/i40iw/i40iw_verbs.c      |  35 ++---
 drivers/infiniband/hw/mthca/mthca_provider.c   |  36 ++---
 drivers/infiniband/hw/nes/nes_verbs.c          | 205 +++++++++++--------------
 drivers/infiniband/hw/ocrdma/ocrdma_verbs.c    |  56 +++----
 drivers/infiniband/hw/qedr/verbs.c             |  68 ++++----
 drivers/infiniband/hw/vmw_pvrdma/pvrdma_misc.c |  21 +--
 drivers/infiniband/sw/rdmavt/mr.c              |  34 ++--
 drivers/infiniband/sw/rxe/rxe_mr.c             |  13 +-
 16 files changed, 306 insertions(+), 381 deletions(-)

Comments

Jason Gunthorpe Feb. 11, 2019, 10:27 p.m. UTC | #1
On Mon, Feb 11, 2019 at 09:24:56AM -0600, Shiraz Saleem wrote:
> From: "Saleem, Shiraz" <shiraz.saleem@intel.com>
> 
> This patch set serves as precursor series to updating ib_umem_get
> to combine contiguous PAGE_SIZE pages in umem SGEs.
> 
> Drivers are updated to unfold larger SGEs into PAGE_SIZE elements
> when walking the umem DMA-mapped SGL. The for_each_sg_dma_page
> variant is used where applicable to iterate the pages of the
> SGL and get the page DMA address.
> 
> Additionally, umem->page_shift usage is purged from drivers,
> as it is only relevant for ODP MRs. The system page size and
> shift are used instead.
> 
> This series is dependent on the new scatterlist API for_each_sg_dma_page
> https://www.spinics.net/lists/linux-rdma/msg75195.html
> 
> RFC-->v0:
> * drop RFC tag.
> 
> Shiraz, Saleem (12):
>   RDMA/bnxt_re: Use for_each_sg_dma_page iterator on umem SGL
>   RDMA/mthca: Use for_each_sg_dma_page iterator on umem SGL
>   RDMA/i40iw: Use for_each_sg_dma_page iterator on umem SGL
>   RDMA/hns: Use for_each_sg_dma_page iterator on umem SGL
>   RDMA/cxgb4: Use for_each_sg_dma_page iterator on umem SGL
>   RDMA/cxgb3: Use for_each_sg_dma_page iterator on umem SGL
>   RDMA/vmw_pvrdma: Use for_each_sg_dma_page iterator on umem SGL
>   RDMA/qedr: Use for_each_sg_dma_page iterator on umem SGL
>   RDMA/ocrdma: Use for_each_sg_dma_page iterator on umem SGL
>   RDMA/rxe: Use for_each_sg_page iterator on umem SGL

I took the above to for-next

>   RDMA/nes: Use for_each_sg_dma_page iterator on umem SGL
>   RDMA/rdmavt: Adapt to handle non-uniform sizes on umem SGEs

These two need comments addressed

Jason