[rdma-next,v2,0/5] MR cache enhancements

Message ID: cover.1644947594.git.leonro@nvidia.com

Message

Leon Romanovsky Feb. 15, 2022, 5:55 p.m. UTC
From: Leon Romanovsky <leonro@nvidia.com>

Changelog:
v2:
 * Subset of previously sent patches
v1: https://lore.kernel.org/all/cover.1640862842.git.leonro@nvidia.com
 * Based on DM revert https://lore.kernel.org/all/20211222101312.1358616-1-maorg@nvidia.com
v0: https://lore.kernel.org/all/cover.1638781506.git.leonro@nvidia.com

---------------------------------------------------------
Hi,

This series from Aharon cleans up the MR logic a little bit.

Thanks

Aharon Landau (5):
  RDMA/mlx5: Remove redundant work in struct mlx5_cache_ent
  RDMA/mlx5: Fix the flow of a miss in the allocation of a cache ODP MR
  RDMA/mlx5: Merge similar flows of allocating MR from the cache
  RDMA/mlx5: Store ndescs instead of the translation table size
  RDMA/mlx5: Reorder calls to pcie_relaxed_ordering_enabled()

 drivers/infiniband/hw/mlx5/mlx5_ib.h |   6 +-
 drivers/infiniband/hw/mlx5/mr.c      | 104 ++++++++++-----------------
 drivers/infiniband/hw/mlx5/odp.c     |  19 ++---
 3 files changed, 52 insertions(+), 77 deletions(-)
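As a side note on patch 5: reordering the calls to pcie_relaxed_ordering_enabled() lets the cheap access-flag check short-circuit the PCI config-space query, so the device is only queried when relaxed ordering was actually requested. A minimal userspace sketch of that short-circuit idea (the helper and struct names here are hypothetical stand-ins, not the mlx5 code):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for pcie_relaxed_ordering_enabled(): in the
 * kernel this inspects the PCI device; here we model it with a flag
 * and count how often it is invoked. */
static int ro_query_count;
static bool pcie_relaxed_ordering_enabled_model(bool device_ro)
{
	ro_query_count++;
	return device_ro;
}

/* Illustrative MR attribute pair derived from relaxed ordering. */
struct mkey_attrs {
	bool ro_write;
	bool ro_read;
};

/* Check the caller-supplied access flag first; only if relaxed
 * ordering was requested do we pay for the device query. */
static struct mkey_attrs build_mkey_attrs(bool device_ro, int access_flag_ro)
{
	struct mkey_attrs a = { false, false };

	if (access_flag_ro && pcie_relaxed_ordering_enabled_model(device_ro)) {
		a.ro_write = true;
		a.ro_read = true;
	}
	return a;
}
```

With the flag check first, a request without relaxed ordering never touches the (modelled) device query at all.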

Comments

Jason Gunthorpe Feb. 23, 2022, 7:02 p.m. UTC | #1
On Tue, Feb 15, 2022 at 07:55:28PM +0200, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
> 
> Changelog:
> v2:
>  * Subset of previously sent patches
> v1: https://lore.kernel.org/all/cover.1640862842.git.leonro@nvidia.com
>  * Based on DM revert https://lore.kernel.org/all/20211222101312.1358616-1-maorg@nvidia.com
> v0: https://lore.kernel.org/all/cover.1638781506.git.leonro@nvidia.com
> 
> ---------------------------------------------------------
> Hi,
> 
> This series from Aharon cleans up the MR logic a little bit.
> 
> Thanks
> 
> Aharon Landau (5):
>   RDMA/mlx5: Remove redundant work in struct mlx5_cache_ent
>   RDMA/mlx5: Fix the flow of a miss in the allocation of a cache ODP MR
>   RDMA/mlx5: Merge similar flows of allocating MR from the cache
>   RDMA/mlx5: Store ndescs instead of the translation table size
>   RDMA/mlx5: Reorder calls to pcie_relaxed_ordering_enabled()

Applied to for-next, thanks

Jason