diff mbox series

[v2,3/7] usb: mtu3: fix kernel panic at qmu transfer done irq handler

Message ID 20230407082937.14558-3-chunfeng.yun@mediatek.com (mailing list archive)
State Superseded
Headers show
Series [v2,1/7] usb: mtu3: give back request when rx error happens | expand

Commit Message

Chunfeng Yun April 7, 2023, 8:29 a.m. UTC
When handling the qmu transfer done irq, the handler unlocks @mtu->lock
before giving back the request. If another thread handles a disconnect
event at the same time and tries to disable the ep, it may acquire
@mtu->lock and free the qmu ring; the qmu irq handler may then get a
NULL gpd. Avoid the kernel panic by checking the gpd's value before
handling it.

e.g.
qmu done irq on cpu0                 thread running on cpu1

qmu_done_tx()
  handle gpd [0]
    mtu3_requ_complete()        mtu3_gadget_ep_disable()
      unlock @mtu->lock
        give back request         lock @mtu->lock
                                    mtu3_ep_disable()
                                      mtu3_gpd_ring_free()
                                   unlock @mtu->lock
      lock @mtu->lock
    get next gpd [1]

[1]: loops back to [0] to handle the next gpd, and the next gpd may be NULL.

Signed-off-by: Chunfeng Yun <chunfeng.yun@mediatek.com>
---
v2: fix typo suggested by AngeloGioacchino
---
 drivers/usb/mtu3/mtu3_qmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

Comments

AngeloGioacchino Del Regno April 7, 2023, 9:09 a.m. UTC | #1
Il 07/04/23 10:29, Chunfeng Yun ha scritto:
> When handling the qmu transfer done irq, the handler unlocks @mtu->lock
> before giving back the request. If another thread handles a disconnect
> event at the same time and tries to disable the ep, it may acquire
> @mtu->lock and free the qmu ring; the qmu irq handler may then get a
> NULL gpd. Avoid the kernel panic by checking the gpd's value before
> handling it.
> 
> e.g.
> qmu done irq on cpu0                 thread running on cpu1
> 
> qmu_done_tx()
>    handle gpd [0]
>      mtu3_requ_complete()        mtu3_gadget_ep_disable()
>        unlock @mtu->lock
>          give back request         lock @mtu->lock
>                                      mtu3_ep_disable()
>                                        mtu3_gpd_ring_free()
>                                     unlock @mtu->lock
>        lock @mtu->lock
>      get next gpd [1]
> 
> [1]: loops back to [0] to handle the next gpd, and the next gpd may be NULL.
> 
> Signed-off-by: Chunfeng Yun <chunfeng.yun@mediatek.com>

NACK. You still miss the Fixes tag.

Regards,
Angelo
Chunfeng Yun April 17, 2023, 1:50 a.m. UTC | #2
On Fri, 2023-04-07 at 11:09 +0200, AngeloGioacchino Del Regno wrote:
> Il 07/04/23 10:29, Chunfeng Yun ha scritto:
> > When handling the qmu transfer done irq, the handler unlocks @mtu->lock
> > before giving back the request. If another thread handles a disconnect
> > event at the same time and tries to disable the ep, it may acquire
> > @mtu->lock and free the qmu ring; the qmu irq handler may then get a
> > NULL gpd. Avoid the kernel panic by checking the gpd's value before
> > handling it.
> > 
> > e.g.
> > qmu done irq on cpu0                 thread running on cpu1
> > 
> > qmu_done_tx()
> >    handle gpd [0]
> >      mtu3_requ_complete()        mtu3_gadget_ep_disable()
> >        unlock @mtu->lock
> >          give back request         lock @mtu->lock
> >                                      mtu3_ep_disable()
> >                                        mtu3_gpd_ring_free()
> >                                     unlock @mtu->lock
> >        lock @mtu->lock
> >      get next gpd [1]
> > 
> > [1]: loops back to [0] to handle the next gpd, and the next gpd may be NULL.
> > 
> > Signed-off-by: Chunfeng Yun <chunfeng.yun@mediatek.com>
> 
> NACK. You still miss the Fixes tag.
Ok, I'll add it, thanks

> 
> Regards,
> Angelo

Patch

diff --git a/drivers/usb/mtu3/mtu3_qmu.c b/drivers/usb/mtu3/mtu3_qmu.c
index 6be4977a5db5..3d77408e3133 100644
--- a/drivers/usb/mtu3/mtu3_qmu.c
+++ b/drivers/usb/mtu3/mtu3_qmu.c
@@ -210,6 +210,7 @@  static struct qmu_gpd *advance_enq_gpd(struct mtu3_gpd_ring *ring)
 	return ring->enqueue;
 }
 
+/* @dequeue may be NULL if ring is unallocated or freed */
 static struct qmu_gpd *advance_deq_gpd(struct mtu3_gpd_ring *ring)
 {
 	if (ring->dequeue < ring->end)
@@ -522,7 +523,7 @@  static void qmu_done_tx(struct mtu3 *mtu, u8 epnum)
 	dev_dbg(mtu->dev, "%s EP%d, last=%p, current=%p, enq=%p\n",
 		__func__, epnum, gpd, gpd_current, ring->enqueue);
 
-	while (gpd != gpd_current && !GET_GPD_HWO(gpd)) {
+	while (gpd && gpd != gpd_current && !GET_GPD_HWO(gpd)) {
 
 		mreq = next_request(mep);
 
@@ -561,7 +562,7 @@  static void qmu_done_rx(struct mtu3 *mtu, u8 epnum)
 	dev_dbg(mtu->dev, "%s EP%d, last=%p, current=%p, enq=%p\n",
 		__func__, epnum, gpd, gpd_current, ring->enqueue);
 
-	while (gpd != gpd_current && !GET_GPD_HWO(gpd)) {
+	while (gpd && gpd != gpd_current && !GET_GPD_HWO(gpd)) {
 
 		mreq = next_request(mep);