[for-next,00/23] IB/hfi1: Add TID RDMA Write

Message ID 20190124054519.10736.29756.stgit@scvm10.sc.intel.com (mailing list archive)
Series IB/hfi1: Add TID RDMA Write

Message

Dennis Dalessandro Jan. 24, 2019, 5:48 a.m. UTC
Here is the final set of patches for TID RDMA. Again, this is code which was
previously submitted but re-organized so as to be easier to review.

Similar to how the READ series was organized, the patches to build, receive,
allocate resources, etc. are broken out. For details on TID RDMA as a whole,
again refer to the original cover letter.

https://www.spinics.net/lists/linux-rdma/msg66611.html

---

Kaike Wan (23):
      IB/hfi1: Build TID RDMA WRITE request
      IB/hfi1: Allow for extra entries in QP's s_ack_queue
      IB/hfi1: Add an s_acked_ack_queue pointer
      IB/hfi1: Add functions to receive TID RDMA WRITE request
      IB/hfi1: Add a function to build TID RDMA WRITE response
      IB/hfi1: Add TID resource timer
      IB/hfi1: Add a function to receive TID RDMA WRITE response
      IB/hfi1: Add a function to build TID RDMA WRITE DATA packet
      IB/hfi1: Add a function to receive TID RDMA WRITE DATA packet
      IB/hfi1: Add a function to build TID RDMA ACK packet
      IB/hfi1: Add a function to receive TID RDMA ACK packet
      IB/hfi1: Add TID RDMA retry timer
      IB/hfi1: Add a function to build TID RDMA RESYNC packet
      IB/hfi1: Add a function to receive TID RDMA RESYNC packet
      IB/hfi1: Resend the TID RDMA WRITE DATA packets
      IB/hfi1: Add the TID second leg send packet builder
      IB/hfi1: Add the TID second leg ACK packet builder
      IB/hfi1: Add the dual leg code
      IB/hfi1: Add TID RDMA WRITE functionality into RDMA verbs
      IB/hfi1: Add interlock between TID RDMA WRITE and other requests
      IB/hfi1: Enable TID RDMA WRITE protocol
      IB/hfi1: Add static trace for TID RDMA WRITE protocol
      IB/hfi1: Prioritize the sending of ACK packets


 drivers/infiniband/hw/hfi1/init.c         |    1 
 drivers/infiniband/hw/hfi1/iowait.c       |   34 
 drivers/infiniband/hw/hfi1/iowait.h       |   99 +
 drivers/infiniband/hw/hfi1/opfn.c         |    5 
 drivers/infiniband/hw/hfi1/pio.c          |   18 
 drivers/infiniband/hw/hfi1/qp.c           |   57 +
 drivers/infiniband/hw/hfi1/qp.h           |    5 
 drivers/infiniband/hw/hfi1/rc.c           |  542 ++++++
 drivers/infiniband/hw/hfi1/rc.h           |    1 
 drivers/infiniband/hw/hfi1/ruc.c          |   32 
 drivers/infiniband/hw/hfi1/sdma.c         |   24 
 drivers/infiniband/hw/hfi1/sdma_txreq.h   |    1 
 drivers/infiniband/hw/hfi1/tid_rdma.c     | 2504 +++++++++++++++++++++++++++++
 drivers/infiniband/hw/hfi1/tid_rdma.h     |   88 +
 drivers/infiniband/hw/hfi1/trace.c        |   66 +
 drivers/infiniband/hw/hfi1/trace_ibhdrs.h |    6 
 drivers/infiniband/hw/hfi1/trace_tid.h    |  532 ++++++
 drivers/infiniband/hw/hfi1/trace_tx.h     |    6 
 drivers/infiniband/hw/hfi1/user_sdma.c    |    9 
 drivers/infiniband/hw/hfi1/verbs.c        |   20 
 drivers/infiniband/hw/hfi1/verbs.h        |   35 
 drivers/infiniband/hw/hfi1/verbs_txreq.h  |    1 
 drivers/infiniband/hw/hfi1/vnic_sdma.c    |    6 
 drivers/infiniband/sw/rdmavt/qp.c         |    1 
 include/rdma/ib_hdrs.h                    |    5 
 include/rdma/rdmavt_qp.h                  |    2 
 include/rdma/tid_rdma_defs.h              |   56 +
 27 files changed, 4025 insertions(+), 131 deletions(-)

--
-Denny

Comments

Doug Ledford Jan. 30, 2019, 5:21 p.m. UTC | #1
On Wed, 2019-01-23 at 21:48 -0800, Dennis Dalessandro wrote:
> Here is the final set of patches for TID RDMA. Again this is code which was
> previously submitted but re-organized so as to be easier to review. 
> 
> Similar to how the READ series was organized the patches to build, receive, 
> allocate resources etc are broken out. For details on TID RDMA as a whole
> again refer to the original cover letter.
> 
> https://www.spinics.net/lists/linux-rdma/msg66611.html

Help me out here Denny.  This appears to be a *monster* submission.  Are
you saying we need all of these patch series to make this work:

Subject:	[PATCH for-next 00/17] IB/hfi1: Add TID RDMA Read
Subject:	[PATCH for-next 0/6] IB/hfi1: Add OPFN
Subject:	[PATCH for-next 00/23] IB/hfi1: Add TID RDMA Write

So a total of 46 patches to add this support?  And do the series need to
be in that order?

> ---
> 
> Kaike Wan (23):
>       IB/hfi1: Build TID RDMA WRITE request
>       IB/hfi1: Allow for extra entries in QP's s_ack_queue
>       IB/hfi1: Add an s_acked_ack_queue pointer
>       IB/hfi1: Add functions to receive TID RDMA WRITE request
>       IB/hfi1: Add a function to build TID RDMA WRITE response
>       IB/hfi1: Add TID resource timer
>       IB/hfi1: Add a function to receive TID RDMA WRITE response
>       IB/hfi1: Add a function to build TID RDMA WRITE DATA packet
>       IB/hfi1: Add a function to receive TID RDMA WRITE DATA packet
>       IB/hfi1: Add a function to build TID RDMA ACK packet
>       IB/hfi1: Add a function to receive TID RDMA ACK packet
>       IB/hfi1: Add TID RDMA retry timer
>       IB/hfi1: Add a function to build TID RDMA RESYNC packet
>       IB/hfi1: Add a function to receive TID RDMA RESYNC packet
>       IB/hfi1: Resend the TID RDMA WRITE DATA packets
>       IB/hfi1: Add the TID second leg send packet builder
>       IB/hfi1: Add the TID second leg ACK packet builder
>       IB/hfi1: Add the dual leg code
>       IB/hfi1: Add TID RDMA WRITE functionality into RDMA verbs
>       IB/hfi1: Add interlock between TID RDMA WRITE and other requests
>       IB/hfi1: Enable TID RDMA WRITE protocol
>       IB/hfi1: Add static trace for TID RDMA WRITE protocol
>       IB/hfi1: Prioritize the sending of ACK packets
> 
> 
>  drivers/infiniband/hw/hfi1/init.c         |    1 
>  drivers/infiniband/hw/hfi1/iowait.c       |   34 
>  drivers/infiniband/hw/hfi1/iowait.h       |   99 +
>  drivers/infiniband/hw/hfi1/opfn.c         |    5 
>  drivers/infiniband/hw/hfi1/pio.c          |   18 
>  drivers/infiniband/hw/hfi1/qp.c           |   57 +
>  drivers/infiniband/hw/hfi1/qp.h           |    5 
>  drivers/infiniband/hw/hfi1/rc.c           |  542 ++++++
>  drivers/infiniband/hw/hfi1/rc.h           |    1 
>  drivers/infiniband/hw/hfi1/ruc.c          |   32 
>  drivers/infiniband/hw/hfi1/sdma.c         |   24 
>  drivers/infiniband/hw/hfi1/sdma_txreq.h   |    1 
>  drivers/infiniband/hw/hfi1/tid_rdma.c     | 2504 +++++++++++++++++++++++++++++
>  drivers/infiniband/hw/hfi1/tid_rdma.h     |   88 +
>  drivers/infiniband/hw/hfi1/trace.c        |   66 +
>  drivers/infiniband/hw/hfi1/trace_ibhdrs.h |    6 
>  drivers/infiniband/hw/hfi1/trace_tid.h    |  532 ++++++
>  drivers/infiniband/hw/hfi1/trace_tx.h     |    6 
>  drivers/infiniband/hw/hfi1/user_sdma.c    |    9 
>  drivers/infiniband/hw/hfi1/verbs.c        |   20 
>  drivers/infiniband/hw/hfi1/verbs.h        |   35 
>  drivers/infiniband/hw/hfi1/verbs_txreq.h  |    1 
>  drivers/infiniband/hw/hfi1/vnic_sdma.c    |    6 
>  drivers/infiniband/sw/rdmavt/qp.c         |    1 
>  include/rdma/ib_hdrs.h                    |    5 
>  include/rdma/rdmavt_qp.h                  |    2 
>  include/rdma/tid_rdma_defs.h              |   56 +
>  27 files changed, 4025 insertions(+), 131 deletions(-)
> 
> --
> -Denny
Dennis Dalessandro Jan. 30, 2019, 5:54 p.m. UTC | #2
On 1/30/2019 12:21 PM, Doug Ledford wrote:
> On Wed, 2019-01-23 at 21:48 -0800, Dennis Dalessandro wrote:
>> Here is the final set of patches for TID RDMA. Again this is code which was
>> previously submitted but re-organized so as to be easier to review.
>>
>> Similar to how the READ series was organized the patches to build, receive,
>> allocate resources etc are broken out. For details on TID RDMA as a whole
>> again refer to the original cover letter.
>>
>> https://www.spinics.net/lists/linux-rdma/msg66611.html
> 
> Help me out here Denny.  This appears to be a *monster* submission.  Are
> you saying we need all of these patch series to make this work:
> 
> Subject:	[PATCH for-next 00/17] IB/hfi1: Add TID RDMA Read
> Subject:	[PATCH for-next 0/6] IB/hfi1: Add OPFN
> Subject:	[PATCH for-next 00/23] IB/hfi1: Add TID RDMA Write
> 
> So a total of 46 patches to add this support?  And do the series need to
> be in that order?

Yes it is certainly a monster series. Same code in the end as we 
submitted before, Kaike just did a big re-org job on it all so it flows 
better for review.

I haven't tested without all 3 series. It will all compile, but I'm not 
sure it will work right. Better they all go in as a unit when ready.

The order should be: OPFN, TID Read, TID Write.

-Denny
Jason Gunthorpe Jan. 30, 2019, 9:19 p.m. UTC | #3
On Wed, Jan 30, 2019 at 12:54:04PM -0500, Dennis Dalessandro wrote:
> On 1/30/2019 12:21 PM, Doug Ledford wrote:
> > On Wed, 2019-01-23 at 21:48 -0800, Dennis Dalessandro wrote:
> > > Here is the final set of patches for TID RDMA. Again this is code which was
> > > previously submitted but re-organized so as to be easier to review.
> > > 
> > > Similar to how the READ series was organized the patches to build, receive,
> > > allocate resources etc are broken out. For details on TID RDMA as a whole
> > > again refer to the original cover letter.
> > > 
> > > https://www.spinics.net/lists/linux-rdma/msg66611.html
> > 
> > Help me out here Denny.  This appears to be a *monster* submission.  Are
> > you saying we need all of these patch series to make this work:
> > 
> > Subject:	[PATCH for-next 00/17] IB/hfi1: Add TID RDMA Read
> > Subject:	[PATCH for-next 0/6] IB/hfi1: Add OPFN
> > Subject:	[PATCH for-next 00/23] IB/hfi1: Add TID RDMA Write
> > 
> > So a total of 46 patches to add this support?  And do the series need to
> > be in that order?
> 
> Yes it is certainly a monster series. Same code in the end as we submitted
> before, Kaike just did a big re-org job on it all so it flows better for
> review.

The point of the re-org was to get into the standard accepted flow of
~14 patches sent and applicable on their own, not to just re-organize
the 50 patches into a different set of 50 patches.

Jason
Dennis Dalessandro Jan. 30, 2019, 9:45 p.m. UTC | #4
[Drop Mitko's dead email address]

On 1/30/2019 4:19 PM, Jason Gunthorpe wrote:
> On Wed, Jan 30, 2019 at 12:54:04PM -0500, Dennis Dalessandro wrote:
>> On 1/30/2019 12:21 PM, Doug Ledford wrote:
>>> On Wed, 2019-01-23 at 21:48 -0800, Dennis Dalessandro wrote:
>>>> Here is the final set of patches for TID RDMA. Again this is code which was
>>>> previously submitted but re-organized so as to be easier to review.
>>>>
>>>> Similar to how the READ series was organized the patches to build, receive,
>>>> allocate resources etc are broken out. For details on TID RDMA as a whole
>>>> again refer to the original cover letter.
>>>>
>>>> https://www.spinics.net/lists/linux-rdma/msg66611.html
>>>
>>> Help me out here Denny.  This appears to be a *monster* submission.  Are
>>> you saying we need all of these patch series to make this work:
>>>
>>> Subject:	[PATCH for-next 00/17] IB/hfi1: Add TID RDMA Read
>>> Subject:	[PATCH for-next 0/6] IB/hfi1: Add OPFN
>>> Subject:	[PATCH for-next 00/23] IB/hfi1: Add TID RDMA Write
>>>
>>> So a total of 46 patches to add this support?  And do the series need to
>>> be in that order?
>>
>> Yes it is certainly a monster series. Same code in the end as we submitted
>> before, Kaike just did a big re-org job on it all so it flows better for
>> review.
> 
> The point of the re-org was to get into the standard accepted flow of
> ~14 patches sent and applicable on their own, not to just re-organize
> the 50 patches into a different set of 50 patches.

The code was all re-organized specifically for the "flow". Yeah it's a 
large number of patches but it's logically all arranged and not a set of 
50 hodge-podge patches. Logically these do stand on their own. We have 
the negotiation, we have the read, then we have the write. I don't see 
how breaking things up even more is going to help much. Open to 
suggestions though.

Just taking a quick look through my mailbox I see more than a couple 
series that are larger than 14, including a 30-patch series that didn't 
seem to generate any complaints.

-Denny
Jason Gunthorpe Jan. 30, 2019, 10:01 p.m. UTC | #5
On Wed, Jan 30, 2019 at 04:45:45PM -0500, Dennis Dalessandro wrote:
> [Drop Mitko's dead email address]
> 
> On 1/30/2019 4:19 PM, Jason Gunthorpe wrote:
> > On Wed, Jan 30, 2019 at 12:54:04PM -0500, Dennis Dalessandro wrote:
> > > On 1/30/2019 12:21 PM, Doug Ledford wrote:
> > > > On Wed, 2019-01-23 at 21:48 -0800, Dennis Dalessandro wrote:
> > > > > Here is the final set of patches for TID RDMA. Again this is code which was
> > > > > previously submitted but re-organized so as to be easier to review.
> > > > > 
> > > > > Similar to how the READ series was organized the patches to build, receive,
> > > > > allocate resources etc are broken out. For details on TID RDMA as a whole
> > > > > again refer to the original cover letter.
> > > > > 
> > > > > https://www.spinics.net/lists/linux-rdma/msg66611.html
> > > > 
> > > > Help me out here Denny.  This appears to be a *monster* submission.  Are
> > > > you saying we need all of these patch series to make this work:
> > > > 
> > > > Subject:	[PATCH for-next 00/17] IB/hfi1: Add TID RDMA Read
> > > > Subject:	[PATCH for-next 0/6] IB/hfi1: Add OPFN
> > > > Subject:	[PATCH for-next 00/23] IB/hfi1: Add TID RDMA Write
> > > > 
> > > > So a total of 46 patches to add this support?  And do the series need to
> > > > be in that order?
> > > 
> > > Yes it is certainly a monster series. Same code in the end as we submitted
> > > before, Kaike just did a big re-org job on it all so it flows better for
> > > review.
> > 
> > The point of the re-org was to get into the standard accepted flow of
> > ~14 patches sent and applicable on their own, not to just re-organize
> > the 50 patches into a different set of 50 patches.
> 
> The code was all re-organized specifically for the "flow". Yeah it's a large
> number of patches but it's logically all arranged and not a set of 50
> hodge-podge patches. Logically these do stand on their own. We have the
> negotiation, we have the read, then we have the right. I don't see how
> breaking things up even more is going to help much. Open to suggestions
> though.

You said that if they get applied they might not work, so the
individual series are untested?

> Just taking a quick look through my mail box I see more than a couple series
> that are larger than 14. Including a 30 patch series that didn't seem to
> generate any complaints.

NFS seems to have different rules..

Jason
Dennis Dalessandro Jan. 30, 2019, 10:37 p.m. UTC | #6
On 1/30/2019 5:01 PM, Jason Gunthorpe wrote:
> On Wed, Jan 30, 2019 at 04:45:45PM -0500, Dennis Dalessandro wrote:
>> [Drop Mitko's dead email address]
>>
>> On 1/30/2019 4:19 PM, Jason Gunthorpe wrote:
>>> On Wed, Jan 30, 2019 at 12:54:04PM -0500, Dennis Dalessandro wrote:
>>>> On 1/30/2019 12:21 PM, Doug Ledford wrote:
>>>>> On Wed, 2019-01-23 at 21:48 -0800, Dennis Dalessandro wrote:
>>>>>> Here is the final set of patches for TID RDMA. Again this is code which was
>>>>>> previously submitted but re-organized so as to be easier to review.
>>>>>>
>>>>>> Similar to how the READ series was organized the patches to build, receive,
>>>>>> allocate resources etc are broken out. For details on TID RDMA as a whole
>>>>>> again refer to the original cover letter.
>>>>>>
>>>>>> https://www.spinics.net/lists/linux-rdma/msg66611.html
>>>>>
>>>>> Help me out here Denny.  This appears to be a *monster* submission.  Are
>>>>> you saying we need all of these patch series to make this work:
>>>>>
>>>>> Subject:	[PATCH for-next 00/17] IB/hfi1: Add TID RDMA Read
>>>>> Subject:	[PATCH for-next 0/6] IB/hfi1: Add OPFN
>>>>> Subject:	[PATCH for-next 00/23] IB/hfi1: Add TID RDMA Write
>>>>>
>>>>> So a total of 46 patches to add this support?  And do the series need to
>>>>> be in that order?
>>>>
>>>> Yes it is certainly a monster series. Same code in the end as we submitted
>>>> before, Kaike just did a big re-org job on it all so it flows better for
>>>> review.
>>>
>>> The point of the re-org was to get into the standard accepted flow of
>>> ~14 patches sent and applicable on their own, not to just re-organize
>>> the 50 patches into a different set of 50 patches.
>>
>> The code was all re-organized specifically for the "flow". Yeah it's a large
>> number of patches but it's logically all arranged and not a set of 50
>> hodge-podge patches. Logically these do stand on their own. We have the
>> negotiation, we have the read, then we have the right. I don't see how
>> breaking things up even more is going to help much. Open to suggestions
>> though.
> 
> You said that if they get applied they might not work, so the
> individual series are untested?

Kinda misspoke there. If you apply OPFN and stop, things are fine. If 
you apply OPFN, skip the read, and go straight to the write, that I'm 
not sure about. But adding each series in succession, things will work 
at each stop along the way.

>> Just taking a quick look through my mail box I see more than a couple series
>> that are larger than 14. Including a 30 patch series that didn't seem to
>> generate any complaints.
> 
> NFS seems to have different rules..

So are you setting a hard rule in stone here? No series over 14 patches?

-Denny
Jason Gunthorpe Jan. 30, 2019, 10:45 p.m. UTC | #7
On Wed, Jan 30, 2019 at 05:37:30PM -0500, Dennis Dalessandro wrote:
> On 1/30/2019 5:01 PM, Jason Gunthorpe wrote:
> > On Wed, Jan 30, 2019 at 04:45:45PM -0500, Dennis Dalessandro wrote:
> > > [Drop Mitko's dead email address]
> > > 
> > > On 1/30/2019 4:19 PM, Jason Gunthorpe wrote:
> > > > On Wed, Jan 30, 2019 at 12:54:04PM -0500, Dennis Dalessandro wrote:
> > > > > On 1/30/2019 12:21 PM, Doug Ledford wrote:
> > > > > > On Wed, 2019-01-23 at 21:48 -0800, Dennis Dalessandro wrote:
> > > > > > > Here is the final set of patches for TID RDMA. Again this is code which was
> > > > > > > previously submitted but re-organized so as to be easier to review.
> > > > > > > 
> > > > > > > Similar to how the READ series was organized the patches to build, receive,
> > > > > > > allocate resources etc are broken out. For details on TID RDMA as a whole
> > > > > > > again refer to the original cover letter.
> > > > > > > 
> > > > > > > https://www.spinics.net/lists/linux-rdma/msg66611.html
> > > > > > 
> > > > > > Help me out here Denny.  This appears to be a *monster* submission.  Are
> > > > > > you saying we need all of these patch series to make this work:
> > > > > > 
> > > > > > Subject:	[PATCH for-next 00/17] IB/hfi1: Add TID RDMA Read
> > > > > > Subject:	[PATCH for-next 0/6] IB/hfi1: Add OPFN
> > > > > > Subject:	[PATCH for-next 00/23] IB/hfi1: Add TID RDMA Write
> > > > > > 
> > > > > > So a total of 46 patches to add this support?  And do the series need to
> > > > > > be in that order?
> > > > > 
> > > > > Yes it is certainly a monster series. Same code in the end as we submitted
> > > > > before, Kaike just did a big re-org job on it all so it flows better for
> > > > > review.
> > > > 
> > > > The point of the re-org was to get into the standard accepted flow of
> > > > ~14 patches sent and applicable on their own, not to just re-organize
> > > > the 50 patches into a different set of 50 patches.
> > > 
> > > The code was all re-organized specifically for the "flow". Yeah it's a large
> > > number of patches but it's logically all arranged and not a set of 50
> > > hodge-podge patches. Logically these do stand on their own. We have the
> > > negotiation, we have the read, then we have the right. I don't see how
> > > breaking things up even more is going to help much. Open to suggestions
> > > though.
> > 
> > You said that if they get applied they might not work, so the
> > individual series are untested?
> 
> Kinda misspoke there. If you apply OPFN and stop, things are fine. If you
> apply OPFN and skip the read and go to the write. That I'm not sure about.
> But adding each series in succession things will work at each stop along the
> way.
> 
> > > Just taking a quick look through my mail box I see more than a couple series
> > > that are larger than 14. Including a 30 patch series that didn't seem to
> > > generate any complaints.
> > 
> > NFS seems to have different rules..
> 
> So are you setting a hard rule in stone here? No series over 14 patches?

No.. approx 14 is the general guideline netdev uses, with random
exceptions it seems.

Other trees do different things, but rdma is somewhat modeled on
netdev practices.

Jason
Dennis Dalessandro Jan. 31, 2019, 6:05 a.m. UTC | #8
On 1/30/2019 5:45 PM, Jason Gunthorpe wrote:
> On Wed, Jan 30, 2019 at 05:37:30PM -0500, Dennis Dalessandro wrote:
>> On 1/30/2019 5:01 PM, Jason Gunthorpe wrote:
>>> On Wed, Jan 30, 2019 at 04:45:45PM -0500, Dennis Dalessandro wrote:
>>>> [Drop Mitko's dead email address]
>>>>
>>>> On 1/30/2019 4:19 PM, Jason Gunthorpe wrote:
>>>>> On Wed, Jan 30, 2019 at 12:54:04PM -0500, Dennis Dalessandro wrote:
>>>>>> On 1/30/2019 12:21 PM, Doug Ledford wrote:
>>>>>>> On Wed, 2019-01-23 at 21:48 -0800, Dennis Dalessandro wrote:
>>>>>>>> Here is the final set of patches for TID RDMA. Again this is code which was
>>>>>>>> previously submitted but re-organized so as to be easier to review.
>>>>>>>>
>>>>>>>> Similar to how the READ series was organized the patches to build, receive,
>>>>>>>> allocate resources etc are broken out. For details on TID RDMA as a whole
>>>>>>>> again refer to the original cover letter.
>>>>>>>>
>>>>>>>> https://www.spinics.net/lists/linux-rdma/msg66611.html
>>>>>>>
>>>>>>> Help me out here Denny.  This appears to be a *monster* submission.  Are
>>>>>>> you saying we need all of these patch series to make this work:
>>>>>>>
>>>>>>> Subject:	[PATCH for-next 00/17] IB/hfi1: Add TID RDMA Read
>>>>>>> Subject:	[PATCH for-next 0/6] IB/hfi1: Add OPFN
>>>>>>> Subject:	[PATCH for-next 00/23] IB/hfi1: Add TID RDMA Write
>>>>>>>
>>>>>>> So a total of 46 patches to add this support?  And do the series need to
>>>>>>> be in that order?
>>>>>>
>>>>>> Yes it is certainly a monster series. Same code in the end as we submitted
>>>>>> before, Kaike just did a big re-org job on it all so it flows better for
>>>>>> review.
>>>>>
>>>>> The point of the re-org was to get into the standard accepted flow of
>>>>> ~14 patches sent and applicable on their own, not to just re-organize
>>>>> the 50 patches into a different set of 50 patches.
>>>>
>>>> The code was all re-organized specifically for the "flow". Yeah it's a large
>>>> number of patches but it's logically all arranged and not a set of 50
>>>> hodge-podge patches. Logically these do stand on their own. We have the
>>>> negotiation, we have the read, then we have the right. I don't see how
>>>> breaking things up even more is going to help much. Open to suggestions
>>>> though.
>>>
>>> You said that if they get applied they might not work, so the
>>> individual series are untested?
>>
>> Kinda misspoke there. If you apply OPFN and stop, things are fine. If you
>> apply OPFN and skip the read and go to the write. That I'm not sure about.
>> But adding each series in succession things will work at each stop along the
>> way.
>>
>>>> Just taking a quick look through my mail box I see more than a couple series
>>>> that are larger than 14. Including a 30 patch series that didn't seem to
>>>> generate any complaints.
>>>
>>> NFS seems to have different rules..
>>
>> So are you setting a hard rule in stone here? No series over 14 patches?
> 
> No.. approx 14 is the general guideline netdev uses, with random
> exceptions it seems.
> 
> Other trees do different things, but rdma is somewhat modeled on
> netdev practices.
>

Ok, so what does this mean then for this series? Is it going to be a 
random exception, or do you really want to see something else here?

-Denny
Wan, Kaike Jan. 31, 2019, 12:29 p.m. UTC | #9
> -----Original Message-----
> From: Dalessandro, Dennis
> Sent: Wednesday, January 30, 2019 5:38 PM
> To: Jason Gunthorpe <jgg@ziepe.ca>
> Cc: Doug Ledford <dledford@redhat.com>; Dixit, Ashutosh
> <ashutosh.dixit@intel.com>; linux-rdma@vger.kernel.org; Marciniszyn, Mike
> <mike.marciniszyn@intel.com>; Wan, Kaike <kaike.wan@intel.com>
> Subject: Re: [PATCH for-next 00/23] IB/hfi1: Add TID RDMA Write
> 
> On 1/30/2019 5:01 PM, Jason Gunthorpe wrote:
> > On Wed, Jan 30, 2019 at 04:45:45PM -0500, Dennis Dalessandro wrote:
> >> [Drop Mitko's dead email address]
> >>
> >> On 1/30/2019 4:19 PM, Jason Gunthorpe wrote:
> >>> On Wed, Jan 30, 2019 at 12:54:04PM -0500, Dennis Dalessandro wrote:
> >>>> On 1/30/2019 12:21 PM, Doug Ledford wrote:
> >>>>> On Wed, 2019-01-23 at 21:48 -0800, Dennis Dalessandro wrote:
> >>>>>> Here is the final set of patches for TID RDMA. Again this is code
> >>>>>> which was previously submitted but re-organized so as to be easier
> to review.
> >>>>>>
> >>>>>> Similar to how the READ series was organized the patches to
> >>>>>> build, receive, allocate resources etc are broken out. For
> >>>>>> details on TID RDMA as a whole again refer to the original cover
> letter.
> >>>>>>
> >>>>>> https://www.spinics.net/lists/linux-rdma/msg66611.html
> >>>>>
> >>>>> Help me out here Denny.  This appears to be a *monster*
> >>>>> submission.  Are you saying we need all of these patch series to make
> this work:
> >>>>>
> >>>>> Subject:	[PATCH for-next 00/17] IB/hfi1: Add TID RDMA Read
> >>>>> Subject:	[PATCH for-next 0/6] IB/hfi1: Add OPFN
> >>>>> Subject:	[PATCH for-next 00/23] IB/hfi1: Add TID RDMA Write
> >>>>>
> >>>>> So a total of 46 patches to add this support?  And do the series
> >>>>> need to be in that order?
> >>>>
> >>>> Yes it is certainly a monster series. Same code in the end as we
> >>>> submitted before, Kaike just did a big re-org job on it all so it
> >>>> flows better for review.
> >>>
> >>> The point of the re-org was to get into the standard accepted flow
> >>> of
> >>> ~14 patches sent and applicable on their own, not to just
> >>> re-organize the 50 patches into a different set of 50 patches.
> >>
> >> The code was all re-organized specifically for the "flow". Yeah it's
> >> a large number of patches but it's logically all arranged and not a
> >> set of 50 hodge-podge patches. Logically these do stand on their own.
> >> We have the negotiation, we have the read, then we have the right. I
> >> don't see how breaking things up even more is going to help much.
> >> Open to suggestions though.
> >
> > You said that if they get applied they might not work, so the
> > individual series are untested?
> 
> Kinda misspoke there. If you apply OPFN and stop, things are fine. If you
> apply OPFN and skip the read and go to the write. That I'm not sure about.
> But adding each series in succession things will work at each stop along the
> way.
> 
The three series are incrementally built, i.e., the TID RDMA READ series depends on the OPFN series, and the TID RDMA WRITE series depends on both the OPFN and TID RDMA READ series. Therefore, you should apply OPFN first, then TID RDMA READ, and lastly TID RDMA WRITE. Each stop (OPFN, OPFN + READ, OPFN + READ + WRITE) has been fully tested and is functional.
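
For illustration, applying them in that order with git am would look roughly like this; the branch name and mbox file names below are only placeholders for wherever the postings are saved locally:

    # Start a review branch from the current for-next (names are illustrative).
    git checkout -b tid-rdma-review origin/for-next
    git am opfn.mbox               # 1) OPFN negotiation series
    git am tid-rdma-read.mbox      # 2) TID RDMA READ series (depends on OPFN)
    git am tid-rdma-write.mbox     # 3) TID RDMA WRITE series (depends on OPFN + READ)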

Kaike
Doug Ledford Jan. 31, 2019, 4:33 p.m. UTC | #10
On Thu, 2019-01-31 at 12:29 +0000, Wan, Kaike wrote:
> > -----Original Message-----
> > From: Dalessandro, Dennis
> > Sent: Wednesday, January 30, 2019 5:38 PM
> > To: Jason Gunthorpe <jgg@ziepe.ca>
> > Cc: Doug Ledford <dledford@redhat.com>; Dixit, Ashutosh
> > <ashutosh.dixit@intel.com>; linux-rdma@vger.kernel.org; Marciniszyn, Mike
> > <mike.marciniszyn@intel.com>; Wan, Kaike <kaike.wan@intel.com>
> > Subject: Re: [PATCH for-next 00/23] IB/hfi1: Add TID RDMA Write
> > 
> > On 1/30/2019 5:01 PM, Jason Gunthorpe wrote:
> > > On Wed, Jan 30, 2019 at 04:45:45PM -0500, Dennis Dalessandro wrote:
> > > > [Drop Mitko's dead email address]
> > > > 
> > > > On 1/30/2019 4:19 PM, Jason Gunthorpe wrote:
> > > > > On Wed, Jan 30, 2019 at 12:54:04PM -0500, Dennis Dalessandro wrote:
> > > > > > On 1/30/2019 12:21 PM, Doug Ledford wrote:
> > > > > > > On Wed, 2019-01-23 at 21:48 -0800, Dennis Dalessandro wrote:
> > > > > > > > Here is the final set of patches for TID RDMA. Again this is code
> > > > > > > > which was previously submitted but re-organized so as to be easier
> > to review.
> > > > > > > > Similar to how the READ series was organized the patches to
> > > > > > > > build, receive, allocate resources etc are broken out. For
> > > > > > > > details on TID RDMA as a whole again refer to the original cover
> > letter.
> > > > > > > > https://www.spinics.net/lists/linux-rdma/msg66611.html
> > > > > > > 
> > > > > > > Help me out here Denny.  This appears to be a *monster*
> > > > > > > submission.  Are you saying we need all of these patch series to make
> > this work:
> > > > > > > Subject:	[PATCH for-next 00/17] IB/hfi1: Add TID RDMA Read
> > > > > > > Subject:	[PATCH for-next 0/6] IB/hfi1: Add OPFN
> > > > > > > Subject:	[PATCH for-next 00/23] IB/hfi1: Add TID RDMA Write
> > > > > > > 
> > > > > > > So a total of 46 patches to add this support?  And do the series
> > > > > > > need to be in that order?
> > > > > > 
> > > > > > Yes it is certainly a monster series. Same code in the end as we
> > > > > > submitted before, Kaike just did a big re-org job on it all so it
> > > > > > flows better for review.
> > > > > 
> > > > > The point of the re-org was to get into the standard accepted flow
> > > > > of
> > > > > ~14 patches sent and applicable on their own, not to just
> > > > > re-organize the 50 patches into a different set of 50 patches.
> > > > 
> > > > The code was all re-organized specifically for the "flow". Yeah it's
> > > > a large number of patches but it's logically all arranged and not a
> > > > set of 50 hodge-podge patches. Logically these do stand on their own.
> > > > We have the negotiation, we have the read, then we have the right. I
> > > > don't see how breaking things up even more is going to help much.
> > > > Open to suggestions though.
> > > 
> > > You said that if they get applied they might not work, so the
> > > individual series are untested?
> > 
> > Kinda misspoke there. If you apply OPFN and stop, things are fine. If you
> > apply OPFN and skip the read and go to the write. That I'm not sure about.
> > But adding each series in succession things will work at each stop along the
> > way.
> > 
> The three series are incrementally built, i.e, the TID RDMA READ series depends on OPFN series, and the TID RDMA WRITE series depends on both OPFN and TID RDMA READ series. Therefore, you should apply OPFN first, and TID RDMA READ, and lastly TID RDMA WRITE. Each stop (OPFN, OPFN + READ, OPFN + READ +WRITE) have been fully tested and functional.

For the sake of "completeness of code" when it gets merged, I'll do the
three individually, but in their own branch, and then finally merge that
branch back to for-next.
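
In git terms that is roughly the following, with the mbox names being placeholders for the three postings:

    # Apply the three series, in dependency order, onto their own topic branch.
    git checkout -b wip/dl-hfi1-tid for-next
    git am opfn.mbox tid-rdma-read.mbox tid-rdma-write.mbox
    # Once the branch passes, merge the whole bundle back in one go.
    git checkout for-next
    git merge --no-ff wip/dl-hfi1-tid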
Doug Ledford Feb. 5, 2019, 5:03 p.m. UTC | #11
On Wed, 2019-01-23 at 21:48 -0800, Dennis Dalessandro wrote:
> Here is the final set of patches for TID RDMA. Again this is code which was
> previously submitted but re-organized so as to be easier to review. 
> 
> Similar to how the READ series was organized the patches to build, receive, 
> allocate resources etc are broken out. For details on TID RDMA as a whole
> again refer to the original cover letter.
> 
> https://www.spinics.net/lists/linux-rdma/msg66611.html
> 
> ---
> 
> Kaike Wan (23):
>       IB/hfi1: Build TID RDMA WRITE request
>       IB/hfi1: Allow for extra entries in QP's s_ack_queue
>       IB/hfi1: Add an s_acked_ack_queue pointer
>       IB/hfi1: Add functions to receive TID RDMA WRITE request
>       IB/hfi1: Add a function to build TID RDMA WRITE response
>       IB/hfi1: Add TID resource timer
>       IB/hfi1: Add a function to receive TID RDMA WRITE response
>       IB/hfi1: Add a function to build TID RDMA WRITE DATA packet
>       IB/hfi1: Add a function to receive TID RDMA WRITE DATA packet
>       IB/hfi1: Add a function to build TID RDMA ACK packet
>       IB/hfi1: Add a function to receive TID RDMA ACK packet
>       IB/hfi1: Add TID RDMA retry timer
>       IB/hfi1: Add a function to build TID RDMA RESYNC packet
>       IB/hfi1: Add a function to receive TID RDMA RESYNC packet
>       IB/hfi1: Resend the TID RDMA WRITE DATA packets
>       IB/hfi1: Add the TID second leg send packet builder
>       IB/hfi1: Add the TID second leg ACK packet builder
>       IB/hfi1: Add the dual leg code
>       IB/hfi1: Add TID RDMA WRITE functionality into RDMA verbs
>       IB/hfi1: Add interlock between TID RDMA WRITE and other requests
>       IB/hfi1: Enable TID RDMA WRITE protocol
>       IB/hfi1: Add static trace for TID RDMA WRITE protocol
>       IB/hfi1: Prioritize the sending of ACK packets
> 
> 
>  drivers/infiniband/hw/hfi1/init.c         |    1 
>  drivers/infiniband/hw/hfi1/iowait.c       |   34 
>  drivers/infiniband/hw/hfi1/iowait.h       |   99 +
>  drivers/infiniband/hw/hfi1/opfn.c         |    5 
>  drivers/infiniband/hw/hfi1/pio.c          |   18 
>  drivers/infiniband/hw/hfi1/qp.c           |   57 +
>  drivers/infiniband/hw/hfi1/qp.h           |    5 
>  drivers/infiniband/hw/hfi1/rc.c           |  542 ++++++
>  drivers/infiniband/hw/hfi1/rc.h           |    1 
>  drivers/infiniband/hw/hfi1/ruc.c          |   32 
>  drivers/infiniband/hw/hfi1/sdma.c         |   24 
>  drivers/infiniband/hw/hfi1/sdma_txreq.h   |    1 
>  drivers/infiniband/hw/hfi1/tid_rdma.c     | 2504 +++++++++++++++++++++++++++++
>  drivers/infiniband/hw/hfi1/tid_rdma.h     |   88 +
>  drivers/infiniband/hw/hfi1/trace.c        |   66 +
>  drivers/infiniband/hw/hfi1/trace_ibhdrs.h |    6 
>  drivers/infiniband/hw/hfi1/trace_tid.h    |  532 ++++++
>  drivers/infiniband/hw/hfi1/trace_tx.h     |    6 
>  drivers/infiniband/hw/hfi1/user_sdma.c    |    9 
>  drivers/infiniband/hw/hfi1/verbs.c        |   20 
>  drivers/infiniband/hw/hfi1/verbs.h        |   35 
>  drivers/infiniband/hw/hfi1/verbs_txreq.h  |    1 
>  drivers/infiniband/hw/hfi1/vnic_sdma.c    |    6 
>  drivers/infiniband/sw/rdmavt/qp.c         |    1 
>  include/rdma/ib_hdrs.h                    |    5 
>  include/rdma/rdmavt_qp.h                  |    2 
>  include/rdma/tid_rdma_defs.h              |   56 +
>  27 files changed, 4025 insertions(+), 131 deletions(-)
> 
> --
> -Denny

I'm not sure if this series was really that much easier to review, or if
my brain had simply glazed over after reviewing the Read TID support
patch series.  Regardless, this patch series looks OK (I suspect it
really was easier...most of the locking and page counting stuff was
already in the read series and just reused here).  Please post the
respin of the read stuff soon.  I'd like to commit this before my eyes
scab over and I can no longer see to push the code to k.o ;-).
Wan, Kaike Feb. 5, 2019, 5:46 p.m. UTC | #12
> -----Original Message-----
> From: Doug Ledford [mailto:dledford@redhat.com]
> Sent: Tuesday, February 05, 2019 12:04 PM
> To: Dalessandro, Dennis <dennis.dalessandro@intel.com>; jgg@ziepe.ca
> Cc: Dixit, Ashutosh <ashutosh.dixit@intel.com>; linux-rdma@vger.kernel.org;
> Mitko Haralanov <mitko.haralanov@intel.com>; Marciniszyn, Mike
> <mike.marciniszyn@intel.com>; Wan, Kaike <kaike.wan@intel.com>
> Subject: Re: [PATCH for-next 00/23] IB/hfi1: Add TID RDMA Write
> 
> On Wed, 2019-01-23 at 21:48 -0800, Dennis Dalessandro wrote:
> > Here is the final set of patches for TID RDMA. Again this is code
> > which was previously submitted but re-organized so as to be easier to
> review.
> >
> > Similar to how the READ series was organized the patches to build,
> > receive, allocate resources etc are broken out. For details on TID
> > RDMA as a whole again refer to the original cover letter.
> >
> > https://www.spinics.net/lists/linux-rdma/msg66611.html
> >
> > ---
> >
> > Kaike Wan (23):
> >       IB/hfi1: Build TID RDMA WRITE request
> >       IB/hfi1: Allow for extra entries in QP's s_ack_queue
> >       IB/hfi1: Add an s_acked_ack_queue pointer
> >       IB/hfi1: Add functions to receive TID RDMA WRITE request
> >       IB/hfi1: Add a function to build TID RDMA WRITE response
> >       IB/hfi1: Add TID resource timer
> >       IB/hfi1: Add a function to receive TID RDMA WRITE response
> >       IB/hfi1: Add a function to build TID RDMA WRITE DATA packet
> >       IB/hfi1: Add a function to receive TID RDMA WRITE DATA packet
> >       IB/hfi1: Add a function to build TID RDMA ACK packet
> >       IB/hfi1: Add a function to receive TID RDMA ACK packet
> >       IB/hfi1: Add TID RDMA retry timer
> >       IB/hfi1: Add a function to build TID RDMA RESYNC packet
> >       IB/hfi1: Add a function to receive TID RDMA RESYNC packet
> >       IB/hfi1: Resend the TID RDMA WRITE DATA packets
> >       IB/hfi1: Add the TID second leg send packet builder
> >       IB/hfi1: Add the TID second leg ACK packet builder
> >       IB/hfi1: Add the dual leg code
> >       IB/hfi1: Add TID RDMA WRITE functionality into RDMA verbs
> >       IB/hfi1: Add interlock between TID RDMA WRITE and other requests
> >       IB/hfi1: Enable TID RDMA WRITE protocol
> >       IB/hfi1: Add static trace for TID RDMA WRITE protocol
> >       IB/hfi1: Prioritize the sending of ACK packets
> >
> >
> >  drivers/infiniband/hw/hfi1/init.c         |    1
> >  drivers/infiniband/hw/hfi1/iowait.c       |   34
> >  drivers/infiniband/hw/hfi1/iowait.h       |   99 +
> >  drivers/infiniband/hw/hfi1/opfn.c         |    5
> >  drivers/infiniband/hw/hfi1/pio.c          |   18
> >  drivers/infiniband/hw/hfi1/qp.c           |   57 +
> >  drivers/infiniband/hw/hfi1/qp.h           |    5
> >  drivers/infiniband/hw/hfi1/rc.c           |  542 ++++++
> >  drivers/infiniband/hw/hfi1/rc.h           |    1
> >  drivers/infiniband/hw/hfi1/ruc.c          |   32
> >  drivers/infiniband/hw/hfi1/sdma.c         |   24
> >  drivers/infiniband/hw/hfi1/sdma_txreq.h   |    1
> >  drivers/infiniband/hw/hfi1/tid_rdma.c     | 2504
> +++++++++++++++++++++++++++++
> >  drivers/infiniband/hw/hfi1/tid_rdma.h     |   88 +
> >  drivers/infiniband/hw/hfi1/trace.c        |   66 +
> >  drivers/infiniband/hw/hfi1/trace_ibhdrs.h |    6
> >  drivers/infiniband/hw/hfi1/trace_tid.h    |  532 ++++++
> >  drivers/infiniband/hw/hfi1/trace_tx.h     |    6
> >  drivers/infiniband/hw/hfi1/user_sdma.c    |    9
> >  drivers/infiniband/hw/hfi1/verbs.c        |   20
> >  drivers/infiniband/hw/hfi1/verbs.h        |   35
> >  drivers/infiniband/hw/hfi1/verbs_txreq.h  |    1
> >  drivers/infiniband/hw/hfi1/vnic_sdma.c    |    6
> >  drivers/infiniband/sw/rdmavt/qp.c         |    1
> >  include/rdma/ib_hdrs.h                    |    5
> >  include/rdma/rdmavt_qp.h                  |    2
> >  include/rdma/tid_rdma_defs.h              |   56 +
> >  27 files changed, 4025 insertions(+), 131 deletions(-)
> >
> > --
> > -Denny
> 
> I'm not sure if this series was really that much easier to review, or if my brain
> had simply glazed over after reviewing the Read TID support patch series.
> Regardless, this patch series looks OK (I suspect it really was easier...most of
> the locking and page counting stuff was already in the read series and just
> reused here).  Please post the respin of the read stuff soon.  I'd like to
> commit this before my eyes scab over and I can no longer see to push the
> code to k.o ;-).


Will do very soon.

Thanks,

Kaike
> 
> --
> Doug Ledford <dledford@redhat.com>
>     GPG KeyID: B826A3330E572FDD
>     Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD
Doug Ledford Feb. 5, 2019, 11:11 p.m. UTC | #13
On Tue, 2019-02-05 at 17:46 +0000, Wan, Kaike wrote:
> > -----Original Message-----
> > From: Doug Ledford [mailto:dledford@redhat.com]
> > Sent: Tuesday, February 05, 2019 12:04 PM
> > To: Dalessandro, Dennis <dennis.dalessandro@intel.com>; jgg@ziepe.ca
> > Cc: Dixit, Ashutosh <ashutosh.dixit@intel.com>; linux-rdma@vger.kernel.org;
> > Mitko Haralanov <mitko.haralanov@intel.com>; Marciniszyn, Mike
> > <mike.marciniszyn@intel.com>; Wan, Kaike <kaike.wan@intel.com>
> > Subject: Re: [PATCH for-next 00/23] IB/hfi1: Add TID RDMA Write
> > 
> > On Wed, 2019-01-23 at 21:48 -0800, Dennis Dalessandro wrote:
> > > Here is the final set of patches for TID RDMA. Again this is code
> > > which was previously submitted but re-organized so as to be easier to
> > review.
> > > Similar to how the READ series was organized the patches to build,
> > > receive, allocate resources etc are broken out. For details on TID
> > > RDMA as a whole again refer to the original cover letter.
> > > 
> > > https://www.spinics.net/lists/linux-rdma/msg66611.html
> > > 
> > > ---
> > > 
> > > Kaike Wan (23):
> > >       IB/hfi1: Build TID RDMA WRITE request
> > >       IB/hfi1: Allow for extra entries in QP's s_ack_queue
> > >       IB/hfi1: Add an s_acked_ack_queue pointer
> > >       IB/hfi1: Add functions to receive TID RDMA WRITE request
> > >       IB/hfi1: Add a function to build TID RDMA WRITE response
> > >       IB/hfi1: Add TID resource timer
> > >       IB/hfi1: Add a function to receive TID RDMA WRITE response
> > >       IB/hfi1: Add a function to build TID RDMA WRITE DATA packet
> > >       IB/hfi1: Add a function to receive TID RDMA WRITE DATA packet
> > >       IB/hfi1: Add a function to build TID RDMA ACK packet
> > >       IB/hfi1: Add a function to receive TID RDMA ACK packet
> > >       IB/hfi1: Add TID RDMA retry timer
> > >       IB/hfi1: Add a function to build TID RDMA RESYNC packet
> > >       IB/hfi1: Add a function to receive TID RDMA RESYNC packet
> > >       IB/hfi1: Resend the TID RDMA WRITE DATA packets
> > >       IB/hfi1: Add the TID second leg send packet builder
> > >       IB/hfi1: Add the TID second leg ACK packet builder
> > >       IB/hfi1: Add the dual leg code
> > >       IB/hfi1: Add TID RDMA WRITE functionality into RDMA verbs
> > >       IB/hfi1: Add interlock between TID RDMA WRITE and other requests
> > >       IB/hfi1: Enable TID RDMA WRITE protocol
> > >       IB/hfi1: Add static trace for TID RDMA WRITE protocol
> > >       IB/hfi1: Prioritize the sending of ACK packets
> > > 
> > > 
> > >  drivers/infiniband/hw/hfi1/init.c         |    1
> > >  drivers/infiniband/hw/hfi1/iowait.c       |   34
> > >  drivers/infiniband/hw/hfi1/iowait.h       |   99 +
> > >  drivers/infiniband/hw/hfi1/opfn.c         |    5
> > >  drivers/infiniband/hw/hfi1/pio.c          |   18
> > >  drivers/infiniband/hw/hfi1/qp.c           |   57 +
> > >  drivers/infiniband/hw/hfi1/qp.h           |    5
> > >  drivers/infiniband/hw/hfi1/rc.c           |  542 ++++++
> > >  drivers/infiniband/hw/hfi1/rc.h           |    1
> > >  drivers/infiniband/hw/hfi1/ruc.c          |   32
> > >  drivers/infiniband/hw/hfi1/sdma.c         |   24
> > >  drivers/infiniband/hw/hfi1/sdma_txreq.h   |    1
> > >  drivers/infiniband/hw/hfi1/tid_rdma.c     | 2504
> > +++++++++++++++++++++++++++++
> > >  drivers/infiniband/hw/hfi1/tid_rdma.h     |   88 +
> > >  drivers/infiniband/hw/hfi1/trace.c        |   66 +
> > >  drivers/infiniband/hw/hfi1/trace_ibhdrs.h |    6
> > >  drivers/infiniband/hw/hfi1/trace_tid.h    |  532 ++++++
> > >  drivers/infiniband/hw/hfi1/trace_tx.h     |    6
> > >  drivers/infiniband/hw/hfi1/user_sdma.c    |    9
> > >  drivers/infiniband/hw/hfi1/verbs.c        |   20
> > >  drivers/infiniband/hw/hfi1/verbs.h        |   35
> > >  drivers/infiniband/hw/hfi1/verbs_txreq.h  |    1
> > >  drivers/infiniband/hw/hfi1/vnic_sdma.c    |    6
> > >  drivers/infiniband/sw/rdmavt/qp.c         |    1
> > >  include/rdma/ib_hdrs.h                    |    5
> > >  include/rdma/rdmavt_qp.h                  |    2
> > >  include/rdma/tid_rdma_defs.h              |   56 +
> > >  27 files changed, 4025 insertions(+), 131 deletions(-)
> > > 
> > > --
> > > -Denny
> > 
> > I'm not sure if this series was really that much easier to review, or if my brain
> > had simply glazed over after reviewing the Read TID support patch series.
> > Regardless, this patch series looks OK (I suspect it really was easier...most of
> > the locking and page counting stuff was already in the read series and just
> > reused here).  Please post the respin of the read stuff soon.  I'd like to
> > commit this before my eyes scab over and I can no longer see to push the
> > code to k.o ;-).
> 
> Will do very soon.

Thanks for getting the others done.  This series has been applied and
the entire bundle has now been pushed to wip/dl-hfi1-tid.  I'll merge it
into for-next once it has passed 0day.  Thanks.