[RFC,00/12] Adapt drivers to handle page combining on umem SGEs

Message ID 20190126165913.18272-1-shiraz.saleem@intel.com (mailing list archive)

Message

Shiraz Saleem Jan. 26, 2019, 4:59 p.m. UTC
From: "Saleem, Shiraz" <shiraz.saleem@intel.com>

This patch set serves as a precursor series to updating ib_umem_get
to combine contiguous PAGE_SIZE pages into umem SGEs.

Drivers are updated to unfold larger SGEs into PAGE_SIZE elements
when walking the umem DMA-mapped SGL. The for_each_sg_dma_page
variant is used, where applicable, to iterate over the pages of the
SGL and obtain each page's DMA address.

Additionally, umem->page_shift usage is purged from drivers,
as it's only relevant for ODP MRs. The system page size and
shift are used instead.

This series depends on the new scatterlist API for_each_sg_dma_page,
which is pending acceptance:

https://patchwork.kernel.org/patch/10748901/

Shiraz, Saleem (12):
  RDMA/bnxt_re: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/mthca: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/i40iw: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/hns: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/cxgb4: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/cxgb3: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/vmw_pvrdma: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/qedr: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/ocrdma: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/nes: Use for_each_sg_dma_page iterator on umem SGL
  RDMA/rxe: Use for_each_sg_page iterator on umem SGL
  RDMA/rdmavt: Adapt to handle non-uniform sizes on umem SGEs

 drivers/infiniband/hw/bnxt_re/ib_verbs.c       |  23 ++-
 drivers/infiniband/hw/bnxt_re/qplib_res.c      |   9 +-
 drivers/infiniband/hw/cxgb3/iwch_provider.c    |  29 ++--
 drivers/infiniband/hw/cxgb4/mem.c              |  33 ++--
 drivers/infiniband/hw/hns/hns_roce_hw_v1.c     |   7 +-
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c     |  25 ++-
 drivers/infiniband/hw/hns/hns_roce_mr.c        |  88 +++++------
 drivers/infiniband/hw/hns/hns_roce_qp.c        |   5 +-
 drivers/infiniband/hw/i40iw/i40iw_verbs.c      |  35 ++---
 drivers/infiniband/hw/mthca/mthca_provider.c   |  36 ++---
 drivers/infiniband/hw/nes/nes_verbs.c          | 205 +++++++++++--------------
 drivers/infiniband/hw/ocrdma/ocrdma_verbs.c    |  56 +++----
 drivers/infiniband/hw/qedr/verbs.c             |  68 ++++----
 drivers/infiniband/hw/vmw_pvrdma/pvrdma_misc.c |  21 +--
 drivers/infiniband/sw/rdmavt/mr.c              |  34 ++--
 drivers/infiniband/sw/rxe/rxe_mr.c             |  13 +-
 16 files changed, 306 insertions(+), 381 deletions(-)

Comments

Jason Gunthorpe Feb. 8, 2019, 4:20 p.m. UTC | #1
On Sat, Jan 26, 2019 at 10:59:01AM -0600, Shiraz Saleem wrote:
> [...]

I think we are good to go on this now.

Is there any reason this is an RFC at this point?

Jason
Shiraz Saleem Feb. 11, 2019, 3:12 p.m. UTC | #2
On Fri, Feb 08, 2019 at 09:20:04AM -0700, Jason Gunthorpe wrote:
> [...]
> 
> I think we are good to go on this now.
> 
> Is there any reason this is an RFC at this point?
> 
No. I'll resend with the RFC tag dropped. Thanks for taking care of
the new scatterlist API.

Shiraz