
[RFC] IB/core: add max_send_sge and max_recv_sge attributes

Message ID: 3636bb1ff3dee22093cd36e9824ba3c061dfcfbc.1528228523.git.swise@opengridcomputing.com (mailing list archive)
State: Changes Requested

Commit Message

Steve Wise June 5, 2018, 6:14 p.m. UTC
Some devices have vastly different max sge depths for RQs vs SQs.  So add
queue-specific attributes so applications can take full advantage of
hw capabilities.

Cc: Selvin Xavier <selvin.xavier@broadcom.com>
Cc: Devesh Sharma <devesh.sharma@broadcom.com>
Cc: Somnath Kotur <somnath.kotur@broadcom.com>
Cc: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
Cc: Lijun Ou <oulijun@huawei.com>
Cc: Wei Hu(Xavier) <xavier.huwei@huawei.com>
Cc: Faisal Latif <faisal.latif@intel.com>
Cc: Shiraz Saleem <shiraz.saleem@intel.com>
Cc: Yishai Hadas <yishaih@mellanox.com>
Cc: Leon Romanovsky <leonro@mellanox.com>
Cc: Adit Ranadive <aditr@vmware.com>
Cc: VMware PV-Drivers <pv-drivers@vmware.com>
Cc: Moni Shoua <monis@mellanox.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Cc: Steve French <sfrench@samba.org>
Cc: Chuck Lever <chuck.lever@oracle.com>

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
---
 drivers/infiniband/core/uverbs_cmd.c            |  2 +-
 drivers/infiniband/hw/bnxt_re/ib_verbs.c        |  3 ++-
 drivers/infiniband/hw/cxgb3/iwch_provider.c     |  3 ++-
 drivers/infiniband/hw/cxgb4/provider.c          |  3 ++-
 drivers/infiniband/hw/hfi1/verbs.c              |  3 ++-
 drivers/infiniband/hw/hns/hns_roce_main.c       |  3 ++-
 drivers/infiniband/hw/i40iw/i40iw_verbs.c       |  3 ++-
 drivers/infiniband/hw/mlx4/main.c               |  4 ++--
 drivers/infiniband/hw/mlx5/main.c               |  3 ++-
 drivers/infiniband/hw/mthca/mthca_provider.c    |  5 +++--
 drivers/infiniband/hw/nes/nes_verbs.c           |  3 ++-
 drivers/infiniband/hw/ocrdma/ocrdma_verbs.c     |  3 ++-
 drivers/infiniband/hw/qib/qib_verbs.c           |  3 ++-
 drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c |  3 ++-
 drivers/infiniband/sw/rdmavt/qp.c               |  5 +++--
 drivers/infiniband/sw/rxe/rxe.c                 |  3 ++-
 drivers/infiniband/sw/rxe/rxe_qp.c              |  8 ++++----
 drivers/infiniband/ulp/ipoib/ipoib_cm.c         |  4 ++--
 drivers/infiniband/ulp/ipoib/ipoib_verbs.c      |  2 +-
 drivers/infiniband/ulp/isert/ib_isert.c         |  5 +++--
 drivers/infiniband/ulp/srpt/ib_srpt.c           |  6 ++++--
 drivers/nvme/host/rdma.c                        |  2 +-
 drivers/nvme/target/rdma.c                      |  4 ++--
 fs/cifs/smbdirect.c                             | 13 ++++++++++---
 include/rdma/ib_verbs.h                         |  3 ++-
 net/rds/ib.c                                    |  2 +-
 net/sunrpc/xprtrdma/svc_rdma_transport.c        |  3 ++-
 net/sunrpc/xprtrdma/verbs.c                     |  2 +-
 28 files changed, 66 insertions(+), 40 deletions(-)

Comments

Chuck Lever III June 5, 2018, 9:25 p.m. UTC | #1
> On Jun 5, 2018, at 2:14 PM, Steve Wise <swise@opengridcomputing.com> wrote:
> 
> Some devices have vastly different max sge depths for RQs vs SQs.  So add
> queue-specific attributes so applications can take full advantage of
> hw capabilities.

:
:

> diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
> index 96cc8f6..cb3471b 100644
> --- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
> +++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
> @@ -736,7 +736,8 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
> 
> 	/* Qualify the transport resource defaults with the
> 	 * capabilities of this particular device */
> -	newxprt->sc_max_sge = min((size_t)dev->attrs.max_sge,
> +	newxprt->sc_max_sge = min3((size_t)dev->attrs.max_send_sge,
> +				   (size_t)dev->attrs.max_recv_sge,
> 				  (size_t)RPCSVC_MAXPAGES);

A patch coming in v4.18 replaces sc_max_sge with sc_max_send_sge.
Another patch changes the NFS server's Receive path to require
only a single SGE, so min3 won't be necessary here.

Shouldn't be difficult to sort out.


> 	newxprt->sc_max_req_size = svcrdma_max_req_size;
> 	newxprt->sc_max_requests = svcrdma_max_requests;
> diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
> index fe5eaca..7ffa388 100644
> --- a/net/sunrpc/xprtrdma/verbs.c
> +++ b/net/sunrpc/xprtrdma/verbs.c
> @@ -504,7 +504,7 @@
> 	struct ib_cq *sendcq, *recvcq;
> 	int rc;
> 
> -	max_sge = min_t(unsigned int, ia->ri_device->attrs.max_sge,
> +	max_sge = min_t(unsigned int, ia->ri_device->attrs.max_send_sge,
> 			RPCRDMA_MAX_SEND_SGES);

That should work fine.


> 	if (max_sge < RPCRDMA_MIN_SEND_SGES) {
> 		pr_warn("rpcrdma: HCA provides only %d send SGEs\n", max_sge);
> -- 
> 1.8.3.1
> 

--
Chuck Lever



--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Santosh Shilimkar June 5, 2018, 9:31 p.m. UTC | #2
On 6/5/2018 11:14 AM, Steve Wise wrote:
> Some devices have vastly different max sge depths for RQs vs SQs.  So add
> queue-specific attributes so applications can take full advantage of
> hw capabilities.
> 

[...]

> 
> Signed-off-by: Steve Wise <swise@opengridcomputing.com>
> ---

>   net/rds/ib.c                                    |  2 +-
Looks fine to me, Steve!
Leon Romanovsky June 6, 2018, 8:16 a.m. UTC | #3
On Tue, Jun 05, 2018 at 11:14:51AM -0700, Steve Wise wrote:
> Some devices have vastly different max sge depths for RQs vs SQs.  So add
> queue-specific attributes so applications can take full advantage of
> hw capabilities.

<..>

> Signed-off-by: Steve Wise <swise@opengridcomputing.com>
> ---
>  drivers/infiniband/core/uverbs_cmd.c            |  2 +-
>  drivers/infiniband/hw/mlx4/main.c               |  4 ++--
>  drivers/infiniband/hw/mlx5/main.c               |  3 ++-
>  drivers/infiniband/sw/rxe/rxe.c                 |  3 ++-
>  drivers/infiniband/sw/rxe/rxe_qp.c              |  8 ++++----

Looks good.

Thanks
Sagi Grimberg June 6, 2018, 9:26 a.m. UTC | #4
For iser and nvme-rdma this looks fine to me,

Acked-by: Sagi Grimberg <sagi@grimberg.me>
Christoph Hellwig June 6, 2018, 12:31 p.m. UTC | #5
On Tue, Jun 05, 2018 at 11:14:51AM -0700, Steve Wise wrote:
> Some devices have vastly different max sge depths for RQs vs SQs.  So add
> queue-specific attributes so applications can take full advantage of
> hw capabilities.

Looks good to me.  I remember we went through the same shuffle for
rdma read/write a while ago, didn't we?

Acked-by: Christoph Hellwig <hch@lst.de>
Steve Wise June 6, 2018, 2:43 p.m. UTC | #6
On 6/6/2018 7:31 AM, Christoph Hellwig wrote:
> On Tue, Jun 05, 2018 at 11:14:51AM -0700, Steve Wise wrote:
>> Some devices have vastly different max sge depths for RQs vs SQs.  So add
>> queue-specific attributes so applications can take full advantage of
>> hw capabilities.
> Looks good to me.  I remember we went through the same shuffle for
> rdma read/write a while ago, didn't we?
>
> Acked-by: Christoph Hellwig <hch@lst.de>

Currently max write SGE depth == max_send_sge (or max_sge prior to this
patch), and max read SGE depth is max_sge_rd.  Perhaps you're remembering
the rdma_rw work and the fact that the iWARP wire protocol only allows one
read request SGE...

Steve.
Selvin Xavier June 6, 2018, 2:58 p.m. UTC | #7
On Tue, Jun 5, 2018 at 11:44 PM, Steve Wise <swise@opengridcomputing.com> wrote:
> Some devices have vastly different max sge depths for RQs vs SQs.  So add
> queue-specific attributes so applications can take full advantage of
> hw capabilities.
>

Changes look fine for bnxt_re and ocrdma.  Thanks, Steve.

Acked-by: Selvin Xavier <selvin.xavier@broadcom.com>
Kalderon, Michal June 6, 2018, 5:15 p.m. UTC | #8
> From: linux-rdma-owner@vger.kernel.org [mailto:linux-rdma-
> owner@vger.kernel.org] On Behalf Of Steve Wise
> Sent: Tuesday, June 05, 2018 9:15 PM
> To: jgg@mellanox.com; dledford@redhat.com; linux-rdma@vger.kernel.org
> Cc: selvin.xavier@broadcom.com; devesh.sharma@broadcom.com;
> somnath.kotur@broadcom.com; sriharsha.basavapatna@broadcom.com;
> dennis.dalessandro@intel.com; mike.marciniszyn@intel.com;
> oulijun@huawei.com; xavier.huwei@huawei.com; faisal.latif@intel.com;
> shiraz.saleem@intel.com; yishaih@mellanox.com; leonro@mellanox.com;
> faisal.latif@intel.com; aditr@vmware.com; pv-drivers@vmware.com;
> monis@mellanox.com; sagi@grimberg.me; hch@lst.de;
> santosh.shilimkar@oracle.com; sfrench@samba.org;
> chuck.lever@oracle.com
> Subject: [PATCH RFC] IB/core: add max_send_sge and max_recv_sge
> attributes
> 
> Some devices have vastly different max sge depths for RQs vs SQs.  So add
> queue-specific attributes so applications can take full advantage of hw
> capabilities.
> 
> Cc: Selvin Xavier <selvin.xavier@broadcom.com>
> Cc: Devesh Sharma <devesh.sharma@broadcom.com>
> Cc: Somnath Kotur <somnath.kotur@broadcom.com>
> Cc: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
> Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
> Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
> Cc: Lijun Ou <oulijun@huawei.com>
> Cc: Wei Hu(Xavier) <xavier.huwei@huawei.com>
> Cc: Faisal Latif <faisal.latif@intel.com>
> Cc: Shiraz Saleem <shiraz.saleem@intel.com>
> Cc: Yishai Hadas <yishaih@mellanox.com>
> Cc: Leon Romanovsky <leonro@mellanox.com>
> Cc: Adit Ranadive <aditr@vmware.com>
> Cc: VMware PV-Drivers <pv-drivers@vmware.com>
> Cc: Moni Shoua <monis@mellanox.com>
> Cc: Sagi Grimberg <sagi@grimberg.me>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Santosh Shilimkar <santosh.shilimkar@oracle.com>
> Cc: Steve French <sfrench@samba.org>
> Cc: Chuck Lever <chuck.lever@oracle.com>
> 
> Signed-off-by: Steve Wise <swise@opengridcomputing.com>
> ---
>  drivers/infiniband/core/uverbs_cmd.c            |  2 +-
>  drivers/infiniband/hw/bnxt_re/ib_verbs.c        |  3 ++-
>  drivers/infiniband/hw/cxgb3/iwch_provider.c     |  3 ++-
>  drivers/infiniband/hw/cxgb4/provider.c          |  3 ++-
>  drivers/infiniband/hw/hfi1/verbs.c              |  3 ++-
>  drivers/infiniband/hw/hns/hns_roce_main.c       |  3 ++-
>  drivers/infiniband/hw/i40iw/i40iw_verbs.c       |  3 ++-
>  drivers/infiniband/hw/mlx4/main.c               |  4 ++--
>  drivers/infiniband/hw/mlx5/main.c               |  3 ++-
>  drivers/infiniband/hw/mthca/mthca_provider.c    |  5 +++--
>  drivers/infiniband/hw/nes/nes_verbs.c           |  3 ++-
>  drivers/infiniband/hw/ocrdma/ocrdma_verbs.c     |  3 ++-
>  drivers/infiniband/hw/qib/qib_verbs.c           |  3 ++-
>  drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c |  3 ++-
>  drivers/infiniband/sw/rdmavt/qp.c               |  5 +++--
>  drivers/infiniband/sw/rxe/rxe.c                 |  3 ++-
>  drivers/infiniband/sw/rxe/rxe_qp.c              |  8 ++++----
>  drivers/infiniband/ulp/ipoib/ipoib_cm.c         |  4 ++--
>  drivers/infiniband/ulp/ipoib/ipoib_verbs.c      |  2 +-
>  drivers/infiniband/ulp/isert/ib_isert.c         |  5 +++--
>  drivers/infiniband/ulp/srpt/ib_srpt.c           |  6 ++++--
>  drivers/nvme/host/rdma.c                        |  2 +-
>  drivers/nvme/target/rdma.c                      |  4 ++--
>  fs/cifs/smbdirect.c                             | 13 ++++++++++---
>  include/rdma/ib_verbs.h                         |  3 ++-
>  net/rds/ib.c                                    |  2 +-
>  net/sunrpc/xprtrdma/svc_rdma_transport.c        |  3 ++-
>  net/sunrpc/xprtrdma/verbs.c                     |  2 +-
>  28 files changed, 66 insertions(+), 40 deletions(-)
> 

Hey Steve, seems like qedr was left out. 
Thanks,

Steve Wise June 6, 2018, 5:22 p.m. UTC | #9
On 6/6/2018 12:15 PM, Kalderon, Michal wrote:
>> Subject: [PATCH RFC] IB/core: add max_send_sge and max_recv_sge
>> attributes
>>
>> Some devices have vastly different max sge depths for RQs vs SQs.  So add
>> queue-specific attributes so applications can take full advantage of hw
>> capabilities.
>>
>> [...]
>>
> Hey Steve, seems like qedr was left out. 
> Thanks,

Oops!  I'll add it.  I think I failed to enable it in the kernel config.

Thanks,

Steve.
Shiraz Saleem June 8, 2018, 7:37 p.m. UTC | #10
On Tue, Jun 05, 2018 at 12:14:51PM -0600, Steve Wise wrote:
> Some devices have vastly different max sge depths for RQs vs SQs.  So add
> queue-specific attributes so applications can take full advantage of
> hw capabilities.
> 
>  drivers/infiniband/hw/i40iw/i40iw_verbs.c       |  3 ++-
>
Thanks Steve!

Acked-by: Shiraz Saleem <shiraz.saleem@intel.com> 
Jason Gunthorpe June 11, 2018, 5:10 p.m. UTC | #11
On Tue, Jun 05, 2018 at 11:14:51AM -0700, Steve Wise wrote:
> Some devices have vastly different max sge depths for RQs vs SQs.  So add
> queue-specific attributes so applications can take full advantage of
> hw capabilities.
> 
> 
> [...]

You are re-sending this, right?

Jason
Steve Wise June 11, 2018, 6:31 p.m. UTC | #12
> -----Original Message-----
> From: Jason Gunthorpe <jgg@ziepe.ca>
> Sent: Monday, June 11, 2018 12:11 PM
> To: Steve Wise <swise@opengridcomputing.com>
> Subject: Re: [PATCH RFC] IB/core: add max_send_sge and max_recv_sge
> attributes
> 
> [...]
> 
> You are re-sending this, right?
> 
> Jason

Yes.  Should I wait until rdma/for-next moves to 4.18-rc1?

Steve.

Jason Gunthorpe June 11, 2018, 6:47 p.m. UTC | #13
On Mon, Jun 11, 2018 at 01:31:55PM -0500, Steve Wise wrote:
> > You are re-sending this, right?
> > 
> Yes.  Should I wait until rdma/for-next moved to 4.18-rc1?

It is OK, I'm trying to clear things off 

Jason
Steve Wise June 11, 2018, 6:57 p.m. UTC | #14
> 
> On Mon, Jun 11, 2018 at 01:31:55PM -0500, Steve Wise wrote:
> > > You are re-sending this, right?
> > >
> > Yes.  Should I wait until rdma/for-next moved to 4.18-rc1?
> 
> It is OK, I'm trying to clear things off
> 


Done!

Thanks,

Steve.


Patch

diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index e74262e..4a4f9c4 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -189,7 +189,7 @@  static void copy_query_dev_fields(struct ib_uverbs_file *file,
 	resp->max_qp		= attr->max_qp;
 	resp->max_qp_wr		= attr->max_qp_wr;
 	resp->device_cap_flags	= lower_32_bits(attr->device_cap_flags);
-	resp->max_sge		= attr->max_sge;
+	resp->max_sge		= min(attr->max_send_sge, attr->max_recv_sge);
 	resp->max_sge_rd	= attr->max_sge_rd;
 	resp->max_cq		= attr->max_cq;
 	resp->max_cqe		= attr->max_cqe;
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
index a76e206..c647c68 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -166,7 +166,8 @@  int bnxt_re_query_device(struct ib_device *ibdev,
 				    | IB_DEVICE_MEM_WINDOW
 				    | IB_DEVICE_MEM_WINDOW_TYPE_2B
 				    | IB_DEVICE_MEM_MGT_EXTENSIONS;
-	ib_attr->max_sge = dev_attr->max_qp_sges;
+	ib_attr->max_send_sge = dev_attr->max_qp_sges;
+	ib_attr->max_recv_sge = dev_attr->max_qp_sges;
 	ib_attr->max_sge_rd = dev_attr->max_qp_sges;
 	ib_attr->max_cq = dev_attr->max_cq;
 	ib_attr->max_cqe = dev_attr->max_cq_wqes;
diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c
index be097c6..68bc2f9 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c
@@ -1103,7 +1103,8 @@  static int iwch_query_device(struct ib_device *ibdev, struct ib_device_attr *pro
 	props->max_mr_size = dev->attr.max_mr_size;
 	props->max_qp = dev->attr.max_qps;
 	props->max_qp_wr = dev->attr.max_wrs;
-	props->max_sge = dev->attr.max_sge_per_wr;
+	props->max_send_sge = dev->attr.max_sge_per_wr;
+	props->max_recv_sge = dev->attr.max_sge_per_wr;
 	props->max_sge_rd = 1;
 	props->max_qp_rd_atom = dev->attr.max_rdma_reads_per_qp;
 	props->max_qp_init_rd_atom = dev->attr.max_rdma_reads_per_qp;
diff --git a/drivers/infiniband/hw/cxgb4/provider.c b/drivers/infiniband/hw/cxgb4/provider.c
index 1feade8..61b8bdb 100644
--- a/drivers/infiniband/hw/cxgb4/provider.c
+++ b/drivers/infiniband/hw/cxgb4/provider.c
@@ -343,7 +343,8 @@  static int c4iw_query_device(struct ib_device *ibdev, struct ib_device_attr *pro
 	props->max_mr_size = T4_MAX_MR_SIZE;
 	props->max_qp = dev->rdev.lldi.vr->qp.size / 2;
 	props->max_qp_wr = dev->rdev.hw_queue.t4_max_qp_depth;
-	props->max_sge = T4_MAX_RECV_SGE;
+	props->max_send_sge = min(T4_MAX_SEND_SGE, T4_MAX_WRITE_SGE);
+	props->max_recv_sge = T4_MAX_RECV_SGE;
 	props->max_sge_rd = 1;
 	props->max_res_rd_atom = dev->rdev.lldi.max_ird_adapter;
 	props->max_qp_rd_atom = min(dev->rdev.lldi.max_ordird_qp,
diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
index 0899187..b7c75b6 100644
--- a/drivers/infiniband/hw/hfi1/verbs.c
+++ b/drivers/infiniband/hw/hfi1/verbs.c
@@ -1410,7 +1410,8 @@  static void hfi1_fill_device_attr(struct hfi1_devdata *dd)
 	rdi->dparms.props.max_fast_reg_page_list_len = UINT_MAX;
 	rdi->dparms.props.max_qp = hfi1_max_qps;
 	rdi->dparms.props.max_qp_wr = hfi1_max_qp_wrs;
-	rdi->dparms.props.max_sge = hfi1_max_sges;
+	rdi->dparms.props.max_send_sge = hfi1_max_sges;
+	rdi->dparms.props.max_recv_sge = hfi1_max_sges;
 	rdi->dparms.props.max_sge_rd = hfi1_max_sges;
 	rdi->dparms.props.max_cq = hfi1_max_cqs;
 	rdi->dparms.props.max_ah = hfi1_max_ahs;
diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
index c614f91..32b1be3 100644
--- a/drivers/infiniband/hw/hns/hns_roce_main.c
+++ b/drivers/infiniband/hw/hns/hns_roce_main.c
@@ -208,7 +208,8 @@  static int hns_roce_query_device(struct ib_device *ib_dev,
 	props->max_qp_wr = hr_dev->caps.max_wqes;
 	props->device_cap_flags = IB_DEVICE_PORT_ACTIVE_EVENT |
 				  IB_DEVICE_RC_RNR_NAK_GEN;
-	props->max_sge = max(hr_dev->caps.max_sq_sg, hr_dev->caps.max_rq_sg);
+	props->max_send_sge = hr_dev->caps.max_sq_sg;
+	props->max_recv_sge = hr_dev->caps.max_rq_sg;
 	props->max_sge_rd = 1;
 	props->max_cq = hr_dev->caps.num_cqs;
 	props->max_cqe = hr_dev->caps.max_cqes;
diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
index 68679ad..8884ff7 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
@@ -71,7 +71,8 @@  static int i40iw_query_device(struct ib_device *ibdev,
 	props->max_mr_size = I40IW_MAX_OUTBOUND_MESSAGE_SIZE;
 	props->max_qp = iwdev->max_qp - iwdev->used_qps;
 	props->max_qp_wr = I40IW_MAX_QP_WRS;
-	props->max_sge = I40IW_MAX_WQ_FRAGMENT_COUNT;
+	props->max_send_sge = I40IW_MAX_WQ_FRAGMENT_COUNT;
+	props->max_recv_sge = I40IW_MAX_WQ_FRAGMENT_COUNT;
 	props->max_cq = iwdev->max_cq - iwdev->used_cqs;
 	props->max_cqe = iwdev->max_cqe;
 	props->max_mr = iwdev->max_mr - iwdev->used_mrs;
diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
index bf12394..3f61166 100644
--- a/drivers/infiniband/hw/mlx4/main.c
+++ b/drivers/infiniband/hw/mlx4/main.c
@@ -523,8 +523,8 @@  static int mlx4_ib_query_device(struct ib_device *ibdev,
 	props->page_size_cap	   = dev->dev->caps.page_size_cap;
 	props->max_qp		   = dev->dev->quotas.qp;
 	props->max_qp_wr	   = dev->dev->caps.max_wqes - MLX4_IB_SQ_MAX_SPARE;
-	props->max_sge		   = min(dev->dev->caps.max_sq_sg,
-					 dev->dev->caps.max_rq_sg);
+	props->max_send_sge	   = dev->dev->caps.max_sq_sg;
+	props->max_recv_sge	   = dev->dev->caps.max_rq_sg;
 	props->max_sge_rd	   = MLX4_MAX_SGE_RD;
 	props->max_cq		   = dev->dev->quotas.cq;
 	props->max_cqe		   = dev->dev->caps.max_cqes;
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 25a271e..780e532 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -915,7 +915,8 @@  static int mlx5_ib_query_device(struct ib_device *ibdev,
 	max_sq_sg = (max_sq_desc - sizeof(struct mlx5_wqe_ctrl_seg) -
 		     sizeof(struct mlx5_wqe_raddr_seg)) /
 		sizeof(struct mlx5_wqe_data_seg);
-	props->max_sge = min(max_rq_sg, max_sq_sg);
+	props->max_send_sge = max_sq_sg;
+	props->max_recv_sge = max_rq_sg;
 	props->max_sge_rd	   = MLX5_MAX_SGE_RD;
 	props->max_cq		   = 1 << MLX5_CAP_GEN(mdev, log_max_cq);
 	props->max_cqe = (1 << MLX5_CAP_GEN(mdev, log_max_cq_sz)) - 1;
diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c
index 541f237..20febaf 100644
--- a/drivers/infiniband/hw/mthca/mthca_provider.c
+++ b/drivers/infiniband/hw/mthca/mthca_provider.c
@@ -96,8 +96,9 @@  static int mthca_query_device(struct ib_device *ibdev, struct ib_device_attr *pr
 	props->page_size_cap       = mdev->limits.page_size_cap;
 	props->max_qp              = mdev->limits.num_qps - mdev->limits.reserved_qps;
 	props->max_qp_wr           = mdev->limits.max_wqes;
-	props->max_sge             = mdev->limits.max_sg;
-	props->max_sge_rd          = props->max_sge;
+	props->max_send_sge        = mdev->limits.max_sg;
+	props->max_recv_sge        = mdev->limits.max_sg;
+	props->max_sge_rd          = mdev->limits.max_sg;
 	props->max_cq              = mdev->limits.num_cqs - mdev->limits.reserved_cqs;
 	props->max_cqe             = mdev->limits.max_cqes;
 	props->max_mr              = mdev->limits.num_mpts - mdev->limits.reserved_mrws;
diff --git a/drivers/infiniband/hw/nes/nes_verbs.c b/drivers/infiniband/hw/nes/nes_verbs.c
index 1040a6e..9db118b 100644
--- a/drivers/infiniband/hw/nes/nes_verbs.c
+++ b/drivers/infiniband/hw/nes/nes_verbs.c
@@ -436,7 +436,8 @@  static int nes_query_device(struct ib_device *ibdev, struct ib_device_attr *prop
 	props->max_mr_size = 0x80000000;
 	props->max_qp = nesibdev->max_qp;
 	props->max_qp_wr = nesdev->nesadapter->max_qp_wr - 2;
-	props->max_sge = nesdev->nesadapter->max_sge;
+	props->max_send_sge = nesdev->nesadapter->max_sge;
+	props->max_recv_sge = nesdev->nesadapter->max_sge;
 	props->max_cq = nesibdev->max_cq;
 	props->max_cqe = nesdev->nesadapter->max_cqe;
 	props->max_mr = nesibdev->max_mr;
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
index 784ed6b..8c55909 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
@@ -89,7 +89,8 @@  int ocrdma_query_device(struct ib_device *ibdev, struct ib_device_attr *attr,
 					IB_DEVICE_SYS_IMAGE_GUID |
 					IB_DEVICE_LOCAL_DMA_LKEY |
 					IB_DEVICE_MEM_MGT_EXTENSIONS;
-	attr->max_sge = min(dev->attr.max_send_sge, dev->attr.max_recv_sge);
+	attr->max_send_sge = dev->attr.max_send_sge;
+	attr->max_recv_sge = dev->attr.max_recv_sge;
 	attr->max_sge_rd = dev->attr.max_rdma_sge;
 	attr->max_cq = dev->attr.max_cq;
 	attr->max_cqe = dev->attr.max_cqe;
diff --git a/drivers/infiniband/hw/qib/qib_verbs.c b/drivers/infiniband/hw/qib/qib_verbs.c
index 14b4057..41babbc 100644
--- a/drivers/infiniband/hw/qib/qib_verbs.c
+++ b/drivers/infiniband/hw/qib/qib_verbs.c
@@ -1489,7 +1489,8 @@  static void qib_fill_device_attr(struct qib_devdata *dd)
 	rdi->dparms.props.max_mr_size = ~0ULL;
 	rdi->dparms.props.max_qp = ib_qib_max_qps;
 	rdi->dparms.props.max_qp_wr = ib_qib_max_qp_wrs;
-	rdi->dparms.props.max_sge = ib_qib_max_sges;
+	rdi->dparms.props.max_send_sge = ib_qib_max_sges;
+	rdi->dparms.props.max_recv_sge = ib_qib_max_sges;
 	rdi->dparms.props.max_sge_rd = ib_qib_max_sges;
 	rdi->dparms.props.max_cq = ib_qib_max_cqs;
 	rdi->dparms.props.max_cqe = ib_qib_max_cqes;
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
index a51463c..816cc28 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
@@ -82,7 +82,8 @@  int pvrdma_query_device(struct ib_device *ibdev,
 	props->max_qp = dev->dsr->caps.max_qp;
 	props->max_qp_wr = dev->dsr->caps.max_qp_wr;
 	props->device_cap_flags = dev->dsr->caps.device_cap_flags;
-	props->max_sge = dev->dsr->caps.max_sge;
+	props->max_send_sge = dev->dsr->caps.max_sge;
+	props->max_recv_sge = dev->dsr->caps.max_sge;
 	props->max_sge_rd = PVRDMA_GET_CAP(dev, dev->dsr->caps.max_sge,
 					   dev->dsr->caps.max_sge_rd);
 	props->max_srq = dev->dsr->caps.max_srq;
diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
index 4004613..36db9ac 100644
--- a/drivers/infiniband/sw/rdmavt/qp.c
+++ b/drivers/infiniband/sw/rdmavt/qp.c
@@ -780,14 +780,15 @@  struct ib_qp *rvt_create_qp(struct ib_pd *ibpd,
 	if (!rdi)
 		return ERR_PTR(-EINVAL);
 
-	if (init_attr->cap.max_send_sge > rdi->dparms.props.max_sge ||
+	if (init_attr->cap.max_send_sge > rdi->dparms.props.max_send_sge ||
 	    init_attr->cap.max_send_wr > rdi->dparms.props.max_qp_wr ||
 	    init_attr->create_flags)
 		return ERR_PTR(-EINVAL);
 
 	/* Check receive queue parameters if no SRQ is specified. */
 	if (!init_attr->srq) {
-		if (init_attr->cap.max_recv_sge > rdi->dparms.props.max_sge ||
+		if (init_attr->cap.max_recv_sge >
+		    rdi->dparms.props.max_recv_sge ||
 		    init_attr->cap.max_recv_wr > rdi->dparms.props.max_qp_wr)
 			return ERR_PTR(-EINVAL);
 
diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
index 7121e1b..10999fa 100644
--- a/drivers/infiniband/sw/rxe/rxe.c
+++ b/drivers/infiniband/sw/rxe/rxe.c
@@ -91,7 +91,8 @@  static void rxe_init_device_param(struct rxe_dev *rxe)
 	rxe->attr.max_qp			= RXE_MAX_QP;
 	rxe->attr.max_qp_wr			= RXE_MAX_QP_WR;
 	rxe->attr.device_cap_flags		= RXE_DEVICE_CAP_FLAGS;
-	rxe->attr.max_sge			= RXE_MAX_SGE;
+	rxe->attr.max_send_sge			= RXE_MAX_SGE;
+	rxe->attr.max_recv_sge			= RXE_MAX_SGE;
 	rxe->attr.max_sge_rd			= RXE_MAX_SGE_RD;
 	rxe->attr.max_cq			= RXE_MAX_CQ;
 	rxe->attr.max_cqe			= (1 << RXE_MAX_LOG_CQE) - 1;
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index b9f7aa1..d61348c 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -49,9 +49,9 @@  static int rxe_qp_chk_cap(struct rxe_dev *rxe, struct ib_qp_cap *cap,
 		goto err1;
 	}
 
-	if (cap->max_send_sge > rxe->attr.max_sge) {
+	if (cap->max_send_sge > rxe->attr.max_send_sge) {
 		pr_warn("invalid send sge = %d > %d\n",
-			cap->max_send_sge, rxe->attr.max_sge);
+			cap->max_send_sge, rxe->attr.max_send_sge);
 		goto err1;
 	}
 
@@ -62,9 +62,9 @@  static int rxe_qp_chk_cap(struct rxe_dev *rxe, struct ib_qp_cap *cap,
 			goto err1;
 		}
 
-		if (cap->max_recv_sge > rxe->attr.max_sge) {
+		if (cap->max_recv_sge > rxe->attr.max_recv_sge) {
 			pr_warn("invalid recv sge = %d > %d\n",
-				cap->max_recv_sge, rxe->attr.max_sge);
+				cap->max_recv_sge, rxe->attr.max_recv_sge);
 			goto err1;
 		}
 	}
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
index 962fbcb..9601d65 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
@@ -1067,8 +1067,8 @@  static struct ib_qp *ipoib_cm_create_tx_qp(struct net_device *dev, struct ipoib_
 	struct ib_qp *tx_qp;
 
 	if (dev->features & NETIF_F_SG)
-		attr.cap.max_send_sge =
-			min_t(u32, priv->ca->attrs.max_sge, MAX_SKB_FRAGS + 1);
+		attr.cap.max_send_sge = min_t(u32, priv->ca->attrs.max_send_sge,
+					      MAX_SKB_FRAGS + 1);
 
 	tx_qp = ib_create_qp(priv->pd, &attr);
 	tx->max_send_sge = attr.cap.max_send_sge;
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_verbs.c b/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
index 984a880..ba4669f 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
@@ -147,7 +147,7 @@  int ipoib_transport_dev_init(struct net_device *dev, struct ib_device *ca)
 		.cap = {
 			.max_send_wr  = ipoib_sendq_size,
 			.max_recv_wr  = ipoib_recvq_size,
-			.max_send_sge = min_t(u32, priv->ca->attrs.max_sge,
+			.max_send_sge = min_t(u32, priv->ca->attrs.max_send_sge,
 					      MAX_SKB_FRAGS + 1),
 			.max_recv_sge = IPOIB_UD_RX_SG
 		},
diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
index 6a55b87..12d5dae 100644
--- a/drivers/infiniband/ulp/isert/ib_isert.c
+++ b/drivers/infiniband/ulp/isert/ib_isert.c
@@ -136,7 +136,7 @@ 
 	attr.cap.max_send_wr = ISERT_QP_MAX_REQ_DTOS + 1;
 	attr.cap.max_recv_wr = ISERT_QP_MAX_RECV_DTOS + 1;
 	attr.cap.max_rdma_ctxs = ISCSI_DEF_XMIT_CMDS_MAX;
-	attr.cap.max_send_sge = device->ib_device->attrs.max_sge;
+	attr.cap.max_send_sge = device->ib_device->attrs.max_send_sge;
 	attr.cap.max_recv_sge = 1;
 	attr.sq_sig_type = IB_SIGNAL_REQ_WR;
 	attr.qp_type = IB_QPT_RC;
@@ -298,7 +298,8 @@ 
 	struct ib_device *ib_dev = device->ib_device;
 	int ret;
 
-	isert_dbg("devattr->max_sge: %d\n", ib_dev->attrs.max_sge);
+	isert_dbg("devattr->max_send_sge: %d devattr->max_recv_sge: %d\n",
+		  ib_dev->attrs.max_send_sge, ib_dev->attrs.max_recv_sge);
 	isert_dbg("devattr->max_sge_rd: %d\n", ib_dev->attrs.max_sge_rd);
 
 	ret = isert_alloc_comps(device);
diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
index dfec0e1..9c79f88 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
@@ -1754,13 +1754,15 @@  static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
 	 */
 	qp_init->cap.max_send_wr = min(sq_size / 2, attrs->max_qp_wr);
 	qp_init->cap.max_rdma_ctxs = sq_size / 2;
-	qp_init->cap.max_send_sge = min(attrs->max_sge, SRPT_MAX_SG_PER_WQE);
+	qp_init->cap.max_send_sge = min(attrs->max_send_sge,
+					SRPT_MAX_SG_PER_WQE);
 	qp_init->port_num = ch->sport->port;
 	if (sdev->use_srq) {
 		qp_init->srq = sdev->srq;
 	} else {
 		qp_init->cap.max_recv_wr = ch->rq_size;
-		qp_init->cap.max_recv_sge = qp_init->cap.max_send_sge;
+		qp_init->cap.max_recv_sge = min(attrs->max_recv_sge,
+						SRPT_MAX_SG_PER_WQE);
 	}
 
 	if (ch->using_rdma_cm) {
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 622b13b..3c8612f 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -377,7 +377,7 @@  static int nvme_rdma_dev_get(struct nvme_rdma_device *dev)
 	}
 
 	ndev->num_inline_segments = min(NVME_RDMA_MAX_INLINE_SEGMENTS,
-					ndev->dev->attrs.max_sge - 1);
+					ndev->dev->attrs.max_send_sge - 1);
 	list_add(&ndev->entry, &device_list);
 out_unlock:
 	mutex_unlock(&device_list_mutex);
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index eb5f1b0..b465b9c 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -880,7 +880,7 @@  static void nvmet_rdma_free_dev(struct kref *ref)
 
 	inline_page_count = num_pages(port->inline_data_size);
 	inline_sge_count = max(cm_id->device->attrs.max_sge_rd,
-				cm_id->device->attrs.max_sge) - 1;
+				cm_id->device->attrs.max_recv_sge) - 1;
 	if (inline_page_count > inline_sge_count) {
 		pr_warn("inline_data_size %d cannot be supported by device %s. Reducing to %lu.\n",
 			port->inline_data_size, cm_id->device->name,
@@ -957,7 +957,7 @@  static int nvmet_rdma_create_queue_ib(struct nvmet_rdma_queue *queue)
 	qp_attr.cap.max_send_wr = queue->send_queue_size + 1;
 	qp_attr.cap.max_rdma_ctxs = queue->send_queue_size;
 	qp_attr.cap.max_send_sge = max(ndev->device->attrs.max_sge_rd,
-					ndev->device->attrs.max_sge);
+					ndev->device->attrs.max_send_sge);
 
 	if (ndev->srq) {
 		qp_attr.srq = ndev->srq;
diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
index 5008af5..3fd6259 100644
--- a/fs/cifs/smbdirect.c
+++ b/fs/cifs/smbdirect.c
@@ -1660,9 +1660,16 @@  static struct smbd_connection *_smbd_get_connection(
 	info->max_receive_size = smbd_max_receive_size;
 	info->keep_alive_interval = smbd_keep_alive_interval;
 
-	if (info->id->device->attrs.max_sge < SMBDIRECT_MAX_SGE) {
-		log_rdma_event(ERR, "warning: device max_sge = %d too small\n",
-			info->id->device->attrs.max_sge);
+	if (info->id->device->attrs.max_send_sge < SMBDIRECT_MAX_SGE) {
+		log_rdma_event(ERR,
+			"warning: device max_send_sge = %d too small\n",
+			info->id->device->attrs.max_send_sge);
+		log_rdma_event(ERR, "Queue Pair creation may fail\n");
+	}
+	if (info->id->device->attrs.max_recv_sge < SMBDIRECT_MAX_SGE) {
+		log_rdma_event(ERR,
+			"warning: device max_recv_sge = %d too small\n",
+			info->id->device->attrs.max_recv_sge);
 		log_rdma_event(ERR, "Queue Pair creation may fail\n");
 	}
 
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 406c98d..da57ce0 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -344,7 +344,8 @@  struct ib_device_attr {
 	int			max_qp;
 	int			max_qp_wr;
 	u64			device_cap_flags;
-	int			max_sge;
+	int			max_send_sge;
+	int			max_recv_sge;
 	int			max_sge_rd;
 	int			max_cq;
 	int			max_cqe;
diff --git a/net/rds/ib.c b/net/rds/ib.c
index 02deee2..0c426ca 100644
--- a/net/rds/ib.c
+++ b/net/rds/ib.c
@@ -143,7 +143,7 @@  static void rds_ib_add_one(struct ib_device *device)
 	INIT_WORK(&rds_ibdev->free_work, rds_ib_dev_free);
 
 	rds_ibdev->max_wrs = device->attrs.max_qp_wr;
-	rds_ibdev->max_sge = min(device->attrs.max_sge, RDS_IB_MAX_SGE);
+	rds_ibdev->max_sge = min(device->attrs.max_send_sge, RDS_IB_MAX_SGE);
 
 	has_fr = (device->attrs.device_cap_flags &
 		  IB_DEVICE_MEM_MGT_EXTENSIONS);
diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 96cc8f6..cb3471b 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -736,7 +736,8 @@  static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 
 	/* Qualify the transport resource defaults with the
 	 * capabilities of this particular device */
-	newxprt->sc_max_sge = min((size_t)dev->attrs.max_sge,
+	newxprt->sc_max_sge = min3((size_t)dev->attrs.max_send_sge,
+				   (size_t)dev->attrs.max_recv_sge,
 				  (size_t)RPCSVC_MAXPAGES);
 	newxprt->sc_max_req_size = svcrdma_max_req_size;
 	newxprt->sc_max_requests = svcrdma_max_requests;
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index fe5eaca..7ffa388 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -504,7 +504,7 @@ 
 	struct ib_cq *sendcq, *recvcq;
 	int rc;
 
-	max_sge = min_t(unsigned int, ia->ri_device->attrs.max_sge,
+	max_sge = min_t(unsigned int, ia->ri_device->attrs.max_send_sge,
 			RPCRDMA_MAX_SEND_SGES);
 	if (max_sge < RPCRDMA_MIN_SEND_SGES) {
 		pr_warn("rpcrdma: HCA provides only %d send SGEs\n", max_sge);