[v5,for-next,1/2] RDMA/hns: Add the workqueue framework for flush cqe handler

Message ID 1577503735-26685-2-git-send-email-liuyixian@huawei.com (mailing list archive)
State Superseded
Delegated to: Jason Gunthorpe
Series [v5,for-next,1/2] RDMA/hns: Add the workqueue framework for flush cqe handler

Commit Message

Yixian Liu Dec. 28, 2019, 3:28 a.m. UTC
HiP08 RoCE hardware lacks the ability (a known hardware problem) to flush
outstanding WQEs if the QP enters the error state for some reason.
To overcome this hardware problem, as a workaround, when the QP is
detected to be in the error state during various legs such as post send
and post receive [1], the flush needs to be performed by the driver.

The earlier patch [1] sent to solve the hardware limitation explained
in the cover letter had a bug in the software flushing leg. It
acquired a mutex while modifying the QP state to the error state and
conveying it to the hardware through the mailbox. This caused the leg
to sleep while holding a spin lock, resulting in a crash.

Suggested solution:
We propose to defer the flushing of a QP in the error state to a
workqueue to get around this hardware limitation.

This patch adds the workqueue framework and the flush handler
function.

[1] https://patchwork.kernel.org/patch/10534271/

Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Reviewed-by: Salil Mehta <salil.mehta@huawei.com>
---
 drivers/infiniband/hw/hns/hns_roce_device.h |  2 ++
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c  |  3 +-
 drivers/infiniband/hw/hns/hns_roce_qp.c     | 43 +++++++++++++++++++++++++++++
 3 files changed, 46 insertions(+), 2 deletions(-)
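
For context, the sketch below shows how a caller is expected to use this
framework. It is illustrative only: the real post send/post receive legs are
added in patch 2/2 of this series (not shown in this mail), and the wrapper
name is made up. The point is that those legs run under the queue spin lock
and cannot sleep, so they defer the flush to the workqueue through
init_flush_work().

static int sketch_post_send_error_leg(struct ib_qp *ibqp)
{
	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
	unsigned long flags;

	spin_lock_irqsave(&hr_qp->sq.lock, flags);

	/* ... post the WQEs and advance the producer index (PI) ... */

	if (hr_qp->state == IB_QPS_ERR)
		/* Atomic context: defer the flush to the workqueue. */
		init_flush_work(hr_dev, hr_qp);

	spin_unlock_irqrestore(&hr_qp->sq.lock, flags);

	return 0;
}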

Comments

Jason Gunthorpe Jan. 10, 2020, 3:26 p.m. UTC | #1
On Sat, Dec 28, 2019 at 11:28:54AM +0800, Yixian Liu wrote:
> +void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
> +{
> +	struct hns_roce_work *flush_work;
> +
> +	flush_work = kzalloc(sizeof(struct hns_roce_work), GFP_ATOMIC);
> +	if (!flush_work)
> +		return;

You changed it to only queue once, so why do we need the allocation
now? That was the whole point..

And the other patch shouldn't be manipulating being_pushed without
some kind of locking

Jason
Yixian Liu Jan. 11, 2020, 9:49 a.m. UTC | #2
On 2020/1/10 23:26, Jason Gunthorpe wrote:
> On Sat, Dec 28, 2019 at 11:28:54AM +0800, Yixian Liu wrote:
>> +void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
>> +{
>> +	struct hns_roce_work *flush_work;
>> +
>> +	flush_work = kzalloc(sizeof(struct hns_roce_work), GFP_ATOMIC);
>> +	if (!flush_work)
>> +		return;
> 
> You changed it to only queue once, so why do we need the allocation
> now? That was the whole point..
> 
> And the other patch shouldn't be manipulating being_pushed without
> some kind of locking

Hi Jason, thanks for your suggestions, I will consider them in the next version.

> 
> Jason
> 
>
Yixian Liu Jan. 13, 2020, 11:26 a.m. UTC | #3
On 2020/1/10 23:26, Jason Gunthorpe wrote:
> On Sat, Dec 28, 2019 at 11:28:54AM +0800, Yixian Liu wrote:
>> +void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
>> +{
>> +	struct hns_roce_work *flush_work;
>> +
>> +	flush_work = kzalloc(sizeof(struct hns_roce_work), GFP_ATOMIC);
>> +	if (!flush_work)
>> +		return;
> 
> You changed it to only queue once, so why do we need the allocation
> now? That was the whole point..

Hi Jason,

The flush work is queued **more than once**. The flag being_pushed is reset to 0 during
the process of modifying the QP, like this:
	hns_roce_v2_modify_qp {
		...
		if (new_state == IB_QPS_ERR) {
			spin_lock_irqsave(&hr_qp->sq.lock, sq_flag);
			...
			hr_qp->state = IB_QPS_ERR;
			hr_qp->being_push = 0;
			...
		}
		...
	}
which means the newly updated PI value needs to be pushed by initializing a new flush work.
So there may be two flush works in the workqueue at the same time, which is why we still
need the allocation here.

> 
> And the other patch shouldn't be manipulating being_pushed without
> some kind of locking

Agree. It needs to hold the spin locks of the SQ and RQ when updating it in modify QP;
will fix in the next version.

> 
> Jason
> 
>
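
For illustration, a minimal sketch of the locking agreed on above, with the
flag name taken from the pseudo-code (this is not the actual follow-up patch):

/*
 * Sketch only: how the flag reset in hns_roce_v2_modify_qp() could be
 * protected when the QP moves to the error state. "being_push" follows
 * the pseudo-code quoted above and is not a field in this patch.
 */
static void sketch_qp_to_err_state(struct hns_roce_qp *hr_qp)
{
	unsigned long sq_flag;

	spin_lock_irqsave(&hr_qp->sq.lock, sq_flag);
	spin_lock(&hr_qp->rq.lock);

	hr_qp->state = IB_QPS_ERR;
	hr_qp->being_push = 0;	/* now serialized against post send/recv legs */

	spin_unlock(&hr_qp->rq.lock);
	spin_unlock_irqrestore(&hr_qp->sq.lock, sq_flag);
}
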
Jason Gunthorpe Jan. 13, 2020, 2:04 p.m. UTC | #4
On Mon, Jan 13, 2020 at 07:26:45PM +0800, Liuyixian (Eason) wrote:
> 
> 
> On 2020/1/10 23:26, Jason Gunthorpe wrote:
> > On Sat, Dec 28, 2019 at 11:28:54AM +0800, Yixian Liu wrote:
> >> +void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
> >> +{
> >> +	struct hns_roce_work *flush_work;
> >> +
> >> +	flush_work = kzalloc(sizeof(struct hns_roce_work), GFP_ATOMIC);
> >> +	if (!flush_work)
> >> +		return;
> > 
> > You changed it to only queue once, so why do we need the allocation
> > now? That was the whole point..
> 
> Hi Jason,
> 
> The flush work is queued **more than once**. The flag being_pushed is reset to 0 during
> the process of modifying the QP, like this:
> 	hns_roce_v2_modify_qp {
> 		...
> 		if (new_state == IB_QPS_ERR) {
> 			spin_lock_irqsave(&hr_qp->sq.lock, sq_flag);
> 			...
> 			hr_qp->state = IB_QPS_ERR;
> 			hr_qp->being_push = 0;
> 			...
> 		}
> 		...
> 	}
> which means the newly updated PI value needs to be pushed by initializing a new flush work.
> So there may be two flush works in the workqueue at the same time, which is why we still
> need the allocation here.

I don't see how you should get two? One should be pending until the
modify is done with the new PI, then once the PI is updated the same
one should be re-queued the next time the PI needs changing.

Jason
Yixian Liu Jan. 15, 2020, 9:39 a.m. UTC | #6
On 2020/1/13 22:04, Jason Gunthorpe wrote:
> On Mon, Jan 13, 2020 at 07:26:45PM +0800, Liuyixian (Eason) wrote:
>>
>>
>> On 2020/1/10 23:26, Jason Gunthorpe wrote:
>>> On Sat, Dec 28, 2019 at 11:28:54AM +0800, Yixian Liu wrote:
>>>> +void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
>>>> +{
>>>> +	struct hns_roce_work *flush_work;
>>>> +
>>>> +	flush_work = kzalloc(sizeof(struct hns_roce_work), GFP_ATOMIC);
>>>> +	if (!flush_work)
>>>> +		return;
>>>
>>> You changed it to only queue once, so why do we need the allocation
>>> now? That was the whole point..
>>
>> Hi Jason,
>>
>> The flush work is queued **more than once**. The flag being_pushed is reset to 0 during
>> the process of modifying the QP, like this:
>> 	hns_roce_v2_modify_qp {
>> 		...
>> 		if (new_state == IB_QPS_ERR) {
>> 			spin_lock_irqsave(&hr_qp->sq.lock, sq_flag);
>> 			...
>> 			hr_qp->state = IB_QPS_ERR;
>> 			hr_qp->being_push = 0;
>> 			...
>> 		}
>> 		...
>> 	}
>> which means the newly updated PI value needs to be pushed by initializing a new flush work.
>> So there may be two flush works in the workqueue at the same time, which is why we still
>> need the allocation here.
> 
> I don't see how you should get two? One should be pending until the
> modify is done with the new PI, then once the PI is updated the same
> one should be re-queued the next time the PI needs changing.
> 
Hi Jason,

Thanks! I will fix it according to your suggestion in V7.
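
To make the suggestion concrete, here is a minimal sketch of that direction.
It assumes struct hns_roce_qp gains an embedded struct hns_roce_work flush_work
and an unsigned long flush_flag; these fields and the function names are
illustrative only, not the actual V7 code. At most one flush is pending per QP,
no GFP_ATOMIC allocation is needed, and clearing the flag before the modify
lets a later PI update re-queue the same work.

#define SKETCH_FLUSH_PENDING	0

static void sketch_flush_work_handle(struct work_struct *work)
{
	struct hns_roce_work *flush_work =
		container_of(work, struct hns_roce_work, work);
	struct hns_roce_qp *hr_qp =
		container_of(flush_work, struct hns_roce_qp, flush_work);
	struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };

	/*
	 * Clear the flag before issuing the modify so that a PI update
	 * racing with the modify re-queues the work with the newer PI.
	 */
	clear_bit(SKETCH_FLUSH_PENDING, &hr_qp->flush_flag);

	if (hns_roce_modify_qp(&hr_qp->ibqp, &attr, IB_QP_STATE, NULL))
		dev_err(flush_work->hr_dev->dev,
			"Modify QP to error state failed during CQE flush\n");

	/* Pair with the reference taken when the work was queued. */
	if (atomic_dec_and_test(&hr_qp->refcount))
		complete(&hr_qp->free);
}

static void sketch_init_flush_work(struct hns_roce_dev *hr_dev,
				   struct hns_roce_qp *hr_qp)
{
	/* Queue at most one flush per QP; no allocation required. */
	if (test_and_set_bit(SKETCH_FLUSH_PENDING, &hr_qp->flush_flag))
		return;

	hr_qp->flush_work.hr_dev = hr_dev;
	INIT_WORK(&hr_qp->flush_work.work, sketch_flush_work_handle);
	atomic_inc(&hr_qp->refcount);
	queue_work(hr_dev->irq_workq, &hr_qp->flush_work.work);
}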

Patch

diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index 5617434..a87a838 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -900,6 +900,7 @@  struct hns_roce_caps {
 struct hns_roce_work {
 	struct hns_roce_dev *hr_dev;
 	struct work_struct work;
+	struct hns_roce_qp *hr_qp;
 	u32 qpn;
 	u32 cqn;
 	int event_type;
@@ -1220,6 +1221,7 @@  struct ib_qp *hns_roce_create_qp(struct ib_pd *ib_pd,
 				 struct ib_udata *udata);
 int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
 		       int attr_mask, struct ib_udata *udata);
+void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp);
 void *get_recv_wqe(struct hns_roce_qp *hr_qp, int n);
 void *get_send_wqe(struct hns_roce_qp *hr_qp, int n);
 void *get_send_extend_sge(struct hns_roce_qp *hr_qp, int n);
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 1026ac6..2afcedd 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -5966,8 +5966,7 @@  static int hns_roce_v2_init_eq_table(struct hns_roce_dev *hr_dev)
 		goto err_request_irq_fail;
 	}
 
-	hr_dev->irq_workq =
-		create_singlethread_workqueue("hns_roce_irq_workqueue");
+	hr_dev->irq_workq = alloc_ordered_workqueue("hns_roce_irq_workq", 0);
 	if (!hr_dev->irq_workq) {
 		dev_err(dev, "Create irq workqueue failed!\n");
 		ret = -ENOMEM;
diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index a6565b6..0c1e74a 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -43,6 +43,49 @@ 
 
 #define SQP_NUM				(2 * HNS_ROCE_MAX_PORTS)
 
+static void flush_work_handle(struct work_struct *work)
+{
+	struct hns_roce_work *flush_work = container_of(work,
+					struct hns_roce_work, work);
+	struct hns_roce_qp *hr_qp = flush_work->hr_qp;
+	struct device *dev = flush_work->hr_dev->dev;
+	struct ib_qp_attr attr;
+	int attr_mask;
+	int ret;
+
+	attr_mask = IB_QP_STATE;
+	attr.qp_state = IB_QPS_ERR;
+
+	ret = hns_roce_modify_qp(&hr_qp->ibqp, &attr, attr_mask, NULL);
+	if (ret)
+		dev_err(dev, "Modify QP to error state failed(%d) during CQE flush\n",
+			ret);
+
+	kfree(flush_work);
+
+	/*
+	 * make sure we signal QP destroy leg that flush QP was completed
+	 * so that it can safely proceed ahead now and destroy QP
+	 */
+	if (atomic_dec_and_test(&hr_qp->refcount))
+		complete(&hr_qp->free);
+}
+
+void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
+{
+	struct hns_roce_work *flush_work;
+
+	flush_work = kzalloc(sizeof(struct hns_roce_work), GFP_ATOMIC);
+	if (!flush_work)
+		return;
+
+	flush_work->hr_dev = hr_dev;
+	flush_work->hr_qp = hr_qp;
+	INIT_WORK(&flush_work->work, flush_work_handle);
+	atomic_inc(&hr_qp->refcount);
+	queue_work(hr_dev->irq_workq, &flush_work->work);
+}
+
 void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type)
 {
 	struct device *dev = hr_dev->dev;