
crash on device removal

Message ID 1468356054.5426.1.camel@ssi (mailing list archive)
State Rejected

Commit Message

Ming Lin July 12, 2016, 8:40 p.m. UTC
On Tue, 2016-07-12 at 11:34 -0500, Steve Wise wrote:
> Hey Christoph, 
> 
> I see a crash when shutting down an nvme host node via 'reboot' that has 1 target
> device attached.  The shutdown causes iw_cxgb4 to be removed, which triggers the
> device removal logic in the nvmf rdma transport.  The crash is here:
> 
> (gdb) list *nvme_rdma_free_qe+0x18
> 0x1e8 is in nvme_rdma_free_qe (drivers/nvme/host/rdma.c:196).
> 191     }
> 192
> 193     static void nvme_rdma_free_qe(struct ib_device *ibdev, struct nvme_rdma_qe *qe,
> 194                     size_t capsule_size, enum dma_data_direction dir)
> 195     {
> 196             ib_dma_unmap_single(ibdev, qe->dma, capsule_size, dir);
> 197             kfree(qe->data);
> 198     }
> 199
> 200     static int nvme_rdma_alloc_qe(struct ib_device *ibdev, struct nvme_rdma_qe *qe,
> 
> Apparently qe is NULL.
> 
> Looking at the device removal path, the logic appears correct (see
> nvme_rdma_device_unplug() and the nice function comment :) ).  I'm wondering
> whether, concurrently with the host device removal path cleaning up queues,
> the target is disconnecting all of its queues in response to the first
> disconnect event from the host, causing some cleanup race on the host side.
> Although, since the removal path executes in the cma event handler upcall, I
> don't think another thread would be handling a disconnect event.  Maybe the
> qp async event handler flow?
> 
> Thoughts?

We actually missed a kref_get in nvme_get_ns_from_disk().

This should fix it. Could you help to verify?
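
For reference, a minimal, self-contained sketch of the reference-counting
pattern the patch reaches for: a per-lookup "get" on the namespace also pins
the parent controller, and the matching "put" drops both references.  The
demo_* names are illustrative stand-ins, not the actual nvme core code, and
(as noted later in the thread) the real fix may need a different balancing
point.

#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/slab.h>

/* Illustrative stand-ins for the controller/namespace objects. */
struct demo_ctrl {
	struct kref kref;
};

struct demo_ns {
	struct kref kref;
	struct demo_ctrl *ctrl;
};

static void demo_ctrl_release(struct kref *kref)
{
	kfree(container_of(kref, struct demo_ctrl, kref));
}

static void demo_ns_release(struct kref *kref)
{
	kfree(container_of(kref, struct demo_ns, kref));
}

/*
 * Lookup: pin the namespace, then pin its controller so ns->ctrl cannot be
 * torn down (e.g. by device removal) while the caller still uses it.
 */
static struct demo_ns *demo_ns_get(struct demo_ns *ns)
{
	if (!kref_get_unless_zero(&ns->kref))
		return NULL;
	kref_get(&ns->ctrl->kref);	/* the reference the patch adds */
	return ns;
}

/* Every demo_ns_get() must be balanced by exactly one demo_ns_put(). */
static void demo_ns_put(struct demo_ns *ns)
{
	struct demo_ctrl *ctrl = ns->ctrl;

	kref_put(&ns->kref, demo_ns_release);
	kref_put(&ctrl->kref, demo_ctrl_release);
}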



Comments

Steve Wise July 12, 2016, 9:09 p.m. UTC | #1
> On Tue, 2016-07-12 at 11:34 -0500, Steve Wise wrote:
> > Hey Christoph,
> >
> > I see a crash when shutting down an nvme host node via 'reboot' that has 1 target
> > device attached.  The shutdown causes iw_cxgb4 to be removed, which triggers the
> > device removal logic in the nvmf rdma transport.  The crash is here:
> >
> > (gdb) list *nvme_rdma_free_qe+0x18
> > 0x1e8 is in nvme_rdma_free_qe (drivers/nvme/host/rdma.c:196).
> > 191     }
> > 192
> > 193     static void nvme_rdma_free_qe(struct ib_device *ibdev, struct nvme_rdma_qe *qe,
> > 194                     size_t capsule_size, enum dma_data_direction dir)
> > 195     {
> > 196             ib_dma_unmap_single(ibdev, qe->dma, capsule_size, dir);
> > 197             kfree(qe->data);
> > 198     }
> > 199
> > 200     static int nvme_rdma_alloc_qe(struct ib_device *ibdev, struct nvme_rdma_qe *qe,
> >
> > Apparently qe is NULL.
> >
> > Looking at the device removal path, the logic appears correct (see
> > nvme_rdma_device_unplug() and the nice function comment :) ).  I'm wondering
> > whether, concurrently with the host device removal path cleaning up queues,
> > the target is disconnecting all of its queues in response to the first
> > disconnect event from the host, causing some cleanup race on the host side.
> > Although, since the removal path executes in the cma event handler upcall, I
> > don't think another thread would be handling a disconnect event.  Maybe the
> > qp async event handler flow?
> >
> > Thoughts?
> 
> We actually missed a kref_get in nvme_get_ns_from_disk().
> 
> This should fix it. Could you help to verify?
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 4babdf0..b146f52 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -183,6 +183,8 @@ static struct nvme_ns *nvme_get_ns_from_disk(struct gendisk *disk)
>  	}
>  	spin_unlock(&dev_list_lock);
> 
> +	kref_get(&ns->ctrl->kref);
> +
>  	return ns;
> 
>  fail_put_ns:

Hey Ming.  This avoids the crash in nvme_rdma_free_qe(), but now I see another crash:

[  975.633436] nvme nvme0: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", addr 10.0.1.14:4420
[  978.463636] nvme nvme0: creating 32 I/O queues.
[  979.187826] nvme nvme0: new ctrl: NQN "testnqn", addr 10.0.1.14:4420
[  987.778287] nvme nvme0: Got rdma device removal event, deleting ctrl
[  987.882202] BUG: unable to handle kernel paging request at ffff880e770e01f8
[  987.890024] IP: [<ffffffffa03a1a46>] __ib_process_cq+0x46/0xc0 [ib_core]

This looks like another problem with freeing the tag sets before stopping the QP.  I thought we fixed that once and for all, but perhaps there is some other path we missed. :(
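
For reference, a sketch of the teardown ordering that this crash suggests is
being violated: all completions must be flushed before the request memory
backing them (the blk-mq tag set) is freed.  The example_queue structure and
the teardown helper below are hypothetical; only rdma_disconnect(),
ib_drain_qp() and blk_mq_free_tag_set() are real kernel APIs, and this is not
the actual nvme-rdma call chain.

#include <linux/blk-mq.h>
#include <rdma/ib_verbs.h>
#include <rdma/rdma_cm.h>

/* Hypothetical per-queue state, for illustration only. */
struct example_queue {
	struct rdma_cm_id	*cm_id;
	struct ib_qp		*qp;
	struct blk_mq_tag_set	*tag_set;
};

/*
 * Teardown ordering: disconnect so no new work is posted, drain the QP so
 * every outstanding work request has completed (and the CQ handler cannot
 * touch freed requests), and only then release the tag set.
 */
static void example_queue_teardown(struct example_queue *q)
{
	rdma_disconnect(q->cm_id);
	ib_drain_qp(q->qp);
	blk_mq_free_tag_set(q->tag_set);
}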

Steve.

Sagi Grimberg July 13, 2016, 10:06 a.m. UTC | #2
>> We actually missed a kref_get in nvme_get_ns_from_disk().
>>
>> This should fix it. Could you help to verify?
>>
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index 4babdf0..b146f52 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -183,6 +183,8 @@ static struct nvme_ns *nvme_get_ns_from_disk(struct gendisk *disk)
>>   	}
>>   	spin_unlock(&dev_list_lock);
>>
>> +	kref_get(&ns->ctrl->kref);
>> +
>>   	return ns;
>>
>>   fail_put_ns:
>
> Hey Ming.  This avoids the crash in nvme_rdma_free_qe(), but now I see another crash:
>
> [  975.633436] nvme nvme0: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", addr 10.0.1.14:4420
> [  978.463636] nvme nvme0: creating 32 I/O queues.
> [  979.187826] nvme nvme0: new ctrl: NQN "testnqn", addr 10.0.1.14:4420
> [  987.778287] nvme nvme0: Got rdma device removal event, deleting ctrl
> [  987.882202] BUG: unable to handle kernel paging request at ffff880e770e01f8
> [  987.890024] IP: [<ffffffffa03a1a46>] __ib_process_cq+0x46/0xc0 [ib_core]
>
> This looks like another problem with freeing the tag sets before stopping the QP.  I thought we fixed that once and for all, but perhaps there is some other path we missed. :(

The fix doesn't look right to me. But I wonder how you got this crash
now? If at all, this would delay the controller removal...

Patch

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 4babdf0..b146f52 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -183,6 +183,8 @@  static struct nvme_ns *nvme_get_ns_from_disk(struct gendisk *disk)
 	}
 	spin_unlock(&dev_list_lock);
 
+	kref_get(&ns->ctrl->kref);
+
 	return ns;
 
 fail_put_ns: