Message ID | 20210608103039.39080-4-jinpu.wang@ionos.com
---|---
State | Superseded
Series | misc update for RTRS
On Tue, Jun 08, 2021 at 12:30:38PM +0200, Jack Wang wrote:
> From: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
>
> When using rdma_rxe, post_one_recv() returns an
> -ENOMEM error due to the recv queue being full.
> This patch increases the number of WRs for the receive
> queue to support all devices.

Why don't you query the IB device for max_qp_wr and set the queue sizes accordingly?

Thanks

> Signed-off-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
> Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
> Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
[...]
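For context, the per-device limit Leon refers to is cached by the IB core in struct ib_device_attr on every struct ib_device, so no separate query call is needed at QP-creation time. A minimal sketch of reading it; the helper name rtrs_wr_limit() is made up here, not a function in the driver:

#include <rdma/ib_verbs.h>

/* Read the per-QP work-request ceiling the HCA advertises. */
static int rtrs_wr_limit(struct ib_device *ib_dev)
{
	return ib_dev->attrs.max_qp_wr;
}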
On Thu, Jun 10, 2021 at 9:23 AM Leon Romanovsky <leon@kernel.org> wrote:
>
> On Tue, Jun 08, 2021 at 12:30:38PM +0200, Jack Wang wrote:
> > From: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
> >
> > When using rdma_rxe, post_one_recv() returns an
> > -ENOMEM error due to the recv queue being full.
> > This patch increases the number of WRs for the receive
> > queue to support all devices.
>
> Why don't you query the IB device for max_qp_wr and set the queue sizes accordingly?
>
> Thanks

Hi Leon,

We don't want to size the queues to max_qp_wr; that would consume a lot of
memory. This patch only covers the service connection, which is used for
control messages.

For IO connections we do query the device's max_qp_wr, but we still clamp
it to keep memory consumption down.

Thanks! Regards

[...]
On Thu, Jun 10, 2021 at 01:01:07PM +0200, Jinpu Wang wrote:
> On Thu, Jun 10, 2021 at 9:23 AM Leon Romanovsky <leon@kernel.org> wrote:
> >
> > Why don't you query the IB device for max_qp_wr and set the queue sizes accordingly?
>
> Hi Leon,
>
> We don't want to size the queues to max_qp_wr; that would consume a lot of
> memory. This patch only covers the service connection, which is used for
> control messages.

OK, so why don't you set min(your_define, max_qp_wr)?

Thanks

[...]
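A minimal sketch of the clamp Leon suggests, as it might be applied where max_send_wr/max_recv_wr are computed. This is illustrative only, not the follow-up patch that was eventually posted; the helper name clamp_service_wrs() is invented here, while SERVICE_CON_QUEUE_DEPTH is the existing rtrs constant:

#include <linux/minmax.h>
#include <rdma/ib_verbs.h>

/*
 * Never ask the QP for more WRs than the device supports:
 * take the smaller of our own budget and the HCA's max_qp_wr.
 */
static void clamp_service_wrs(struct ib_device *ib_dev,
			      u32 *max_send_wr, u32 *max_recv_wr)
{
	*max_send_wr = min_t(int, ib_dev->attrs.max_qp_wr,
			     SERVICE_CON_QUEUE_DEPTH * 2 + 2);
	*max_recv_wr = min_t(int, ib_dev->attrs.max_qp_wr,
			     SERVICE_CON_QUEUE_DEPTH * 2 + 2);
}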
On Thu, Jun 10, 2021 at 1:47 PM Leon Romanovsky <leon@kernel.org> wrote:
>
> On Thu, Jun 10, 2021 at 01:01:07PM +0200, Jinpu Wang wrote:
> > We don't want to size the queues to max_qp_wr; that would consume a lot of
> > memory. This patch only covers the service connection, which is used for
> > control messages.
>
> OK, so why don't you set min(your_define, max_qp_wr)?
>
> Thanks

Ok, will fix it. Thx.

[...]
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index cd53edddfe1f..acf0fde410c3 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -1579,10 +1579,11 @@ static int create_con_cq_qp(struct rtrs_clt_con *con)
 	lockdep_assert_held(&con->con_mutex);
 	if (con->c.cid == 0) {
 		/*
-		 * One completion for each receive and two for each send
-		 * (send request + registration)
+		 * Two (request + registration) completion for send
+		 * Two for recv if always_invalidate is set on server
+		 * or one for recv.
 		 * + 2 for drain and heartbeat
-		 * in case qp gets into error state
+		 * in case qp gets into error state.
 		 */
 		max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
 		max_recv_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
index 04ec3080e9b5..bb73f7762a87 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
@@ -1656,7 +1656,7 @@ static int create_con(struct rtrs_srv_sess *sess,
 		 * + 2 for drain and heartbeat
 		 */
 		max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
-		max_recv_wr = SERVICE_CON_QUEUE_DEPTH + 2;
+		max_recv_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
 		cq_size = max_send_wr + max_recv_wr;
 	} else {
 		/*
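For a sense of scale, the service-connection CQ sizing arithmetic after this patch. The value 512 for SERVICE_CON_QUEUE_DEPTH is an assumption made here for illustration; it is not stated in the posted diff:

/* WR budget per service connection after this patch,
 * assuming SERVICE_CON_QUEUE_DEPTH == 512 (assumed value). */
int max_send_wr = 512 * 2 + 2;			/* 1026 send WRs   */
int max_recv_wr = 512 * 2 + 2;			/* 1026 recv WRs   */
int cq_size     = max_send_wr + max_recv_wr;	/* 2052 CQ entries */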