Message ID | 1589892216-39283-1-git-send-email-yaminf@mellanox.com (mailing list archive)
---|---
Series | Introducing RDMA shared CQ pool
> This is the fourth re-incarnation of the CQ pool patches proposed
> by Sagi and Christoph. I have started from the patches that Sagi last
> submitted and built the CQ pool as a new API for acquiring shared CQs.
>
> The main change from Sagi's last proposal is that I have simplified the
> way ULP drivers interact with the CQ pool. Instead of calling
> ib_alloc_cq they now call ib_cq_pool_get, but use the CQ in the same
> manner as before. This allows a much easier transition to shared CQs
> for the ULPs and makes it easier to deal with IB_POLL_DIRECT contexts.
> Certain types of actions have been disallowed on shared CQs in order
> to prevent one user from harming another.
>
> Our ULPs often want to make smart decisions about completion vector
> affinitization when using multiple completion queues spread over
> multiple CPU cores. We can see examples of this in iser, srp, and nvme-rdma.

Yamin, didn't you promise to adjust other ULPs as well in the next post?
On 5/20/2020 10:03 AM, Sagi Grimberg wrote:
>> [...]
>>
>> Our ULPs often want to make smart decisions about completion vector
>> affinitization when using multiple completion queues spread over
>> multiple CPU cores. We can see examples of this in iser, srp, and nvme-rdma.
>
> Yamin, didn't you promise to adjust other ULPs as well in the next post?

I was looking to get it accepted first; I did not want to tie acceptance of
the feature to its use in all the different ULPs. I can prepare another patch
set converting the other ULPs, either now or later.
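For readers following the conversion the cover letter describes, a minimal sketch of what a ULP-side change might look like is shown below. The ib_cq_pool_get()/ib_cq_pool_put() prototypes, the comp_vector_hint parameter, and the IB_POLL_SOFTIRQ context used here are assumptions based on the description above, not necessarily the exact signatures in this version of the series.

```c
/*
 * Illustrative sketch only: converting a ULP from a private CQ to the
 * shared CQ pool. The ib_cq_pool_get()/ib_cq_pool_put() prototypes are
 * assumed from the cover letter; the merged series may differ.
 */
#include <linux/err.h>
#include <rdma/ib_verbs.h>

static int ulp_create_queue(struct ib_device *dev, struct ib_cq **cq,
			    unsigned int nr_cqe, int comp_vector_hint)
{
	/* Before: each queue allocated its own CQ. */
	/* *cq = ib_alloc_cq(dev, NULL, nr_cqe, comp_vector_hint, IB_POLL_SOFTIRQ); */

	/* After: borrow a shared CQ from the per-device pool. */
	*cq = ib_cq_pool_get(dev, nr_cqe, comp_vector_hint, IB_POLL_SOFTIRQ);
	if (IS_ERR(*cq))
		return PTR_ERR(*cq);

	/* The CQ is then used exactly as before (posting work requests, etc.). */
	return 0;
}

static void ulp_destroy_queue(struct ib_cq *cq, unsigned int nr_cqe)
{
	/* Return the CQEs to the pool instead of calling ib_free_cq(). */
	ib_cq_pool_put(cq, nr_cqe);
}
```

As the cover letter notes, the point of keeping the call site this close to ib_alloc_cq() is that ULPs such as iser, srp, and nvme-rdma can be converted with a small, mechanical patch rather than a redesign of their completion handling.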