remoteproc: imx_dsp_rproc: Add mutex protection for workqueue

Message ID 1664192893-14487-1-git-send-email-shengjiu.wang@nxp.com (mailing list archive)
State Superseded
Series remoteproc: imx_dsp_rproc: Add mutex protection for workqueue

Commit Message

Shengjiu Wang Sept. 26, 2022, 11:48 a.m. UTC
The workqueue may execute late, even after the remote processor has
been stopped or is stopping. Some resources (the rpmsg device and
endpoint) have already been released in rproc_stop_subdevices(), so
when rproc_vq_interrupt() accesses these resources it causes a kernel
dump.

Call trace:
 virtqueue_add_split+0x1ac/0x560
 virtqueue_add_inbuf+0x4c/0x60
 rpmsg_recv_done+0x15c/0x294
 vring_interrupt+0x6c/0xa4
 rproc_vq_interrupt+0x30/0x50
 imx_dsp_rproc_vq_work+0x24/0x40 [imx_dsp_rproc]
 process_one_work+0x1d0/0x354
 worker_thread+0x13c/0x470
 kthread+0x154/0x160
 ret_from_fork+0x10/0x20

Add mutex protection in imx_dsp_rproc_vq_work(): if the state is not
RPROC_RUNNING, skip calling rproc_vq_interrupt().

For the same reason, the flush_work() call cannot be kept in the rproc
stop path: the flush itself would run the pending work after the
subdevices have been stopped.

Fixes: ec0e5549f358 ("remoteproc: imx_dsp_rproc: Add remoteproc driver for DSP on i.MX")
Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
---
 drivers/remoteproc/imx_dsp_rproc.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)
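
For readers following along outside the kernel tree, here is a minimal
userspace analogue of the guard pattern this patch introduces (pthreads;
all names below are illustrative, not taken from the driver). The worker
re-checks the shared state under the lock, so a work item that runs late
degrades to a no-op instead of touching released resources:

#include <pthread.h>
#include <stdio.h>

/* Illustrative stand-ins for the rproc state machine. */
enum proc_state { PROC_RUNNING, PROC_STOPPED };

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static enum proc_state state = PROC_RUNNING;

/* Stand-in for rproc_vq_interrupt(): only safe while running. */
static void vq_interrupt(int vqid)
{
	printf("processing virtqueue %d\n", vqid);
}

/* Analogue of imx_dsp_rproc_vq_work(): take the lock, re-check the
 * state, and skip the virtqueue handling once the processor stopped. */
static void *vq_work(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	if (state == PROC_RUNNING) {
		vq_interrupt(0);
		vq_interrupt(1);
	}
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t worker;

	/* Stop the "processor" first, then let the stale work item run:
	 * the state check under the lock makes it harmless. */
	pthread_mutex_lock(&lock);
	state = PROC_STOPPED;
	pthread_mutex_unlock(&lock);

	pthread_create(&worker, NULL, vq_work, NULL);
	pthread_join(worker, NULL);
	return 0;
}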

Comments

Peng Fan Sept. 28, 2022, 10:22 a.m. UTC | #1
> Subject: [PATCH] remoteproc: imx_dsp_rproc: Add mutex protection for
> workqueue
> 
> The workqueue may execute late, even after the remote processor has
> been stopped or is stopping. Some resources (the rpmsg device and
> endpoint) have already been released in rproc_stop_subdevices(), so
> when rproc_vq_interrupt() accesses these resources it causes a kernel
> dump.
> 
> Call trace:
>  virtqueue_add_split+0x1ac/0x560
>  virtqueue_add_inbuf+0x4c/0x60
>  rpmsg_recv_done+0x15c/0x294
>  vring_interrupt+0x6c/0xa4
>  rproc_vq_interrupt+0x30/0x50
>  imx_dsp_rproc_vq_work+0x24/0x40 [imx_dsp_rproc]
>  process_one_work+0x1d0/0x354
>  worker_thread+0x13c/0x470
>  kthread+0x154/0x160
>  ret_from_fork+0x10/0x20
> 
> Add mutex protection in imx_dsp_rproc_vq_work(): if the state is not
> RPROC_RUNNING, skip calling rproc_vq_interrupt().
> 
> For the same reason, the flush_work() call cannot be kept in the rproc
> stop path: the flush itself would run the pending work after the
> subdevices have been stopped.
> 
> Fixes: ec0e5549f358 ("remoteproc: imx_dsp_rproc: Add remoteproc driver for DSP on i.MX")
> Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>

Reviewed-by: Peng Fan <peng.fan@nxp.com>

I also took a look at the other drivers; it seems almost all drivers
that use rproc_vq_interrupt() have the same issue and should use a mutex
to protect the mbox rx callback.

Regards,
Peng.

> ---
>  drivers/remoteproc/imx_dsp_rproc.c | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/remoteproc/imx_dsp_rproc.c b/drivers/remoteproc/imx_dsp_rproc.c
> index 899aa8dd12f0..95da1cbefacf 100644
> --- a/drivers/remoteproc/imx_dsp_rproc.c
> +++ b/drivers/remoteproc/imx_dsp_rproc.c
> @@ -347,9 +347,6 @@ static int imx_dsp_rproc_stop(struct rproc *rproc)
>  	struct device *dev = rproc->dev.parent;
>  	int ret = 0;
> 
> -	/* Make sure work is finished */
> -	flush_work(&priv->rproc_work);
> -
>  	if (rproc->state == RPROC_CRASHED) {
>  		priv->flags &= ~REMOTE_IS_READY;
>  		return 0;
> @@ -432,9 +429,18 @@ static void imx_dsp_rproc_vq_work(struct work_struct *work)
>  {
>  	struct imx_dsp_rproc *priv = container_of(work, struct imx_dsp_rproc,
>  						  rproc_work);
> +	struct rproc *rproc = priv->rproc;
> +
> +	mutex_lock(&rproc->lock);
> +
> +	if (rproc->state != RPROC_RUNNING)
> +		goto unlock_mutex;
> 
>  	rproc_vq_interrupt(priv->rproc, 0);
>  	rproc_vq_interrupt(priv->rproc, 1);
> +
> +unlock_mutex:
> +	mutex_unlock(&rproc->lock);
>  }
> 
>  /**
> --
> 2.34.1
Mathieu Poirier Sept. 28, 2022, 5:20 p.m. UTC | #2
On Mon, Sep 26, 2022 at 07:48:13PM +0800, Shengjiu Wang wrote:
> The workqueue may execute late, even after the remote processor has
> been stopped or is stopping. Some resources (the rpmsg device and
> endpoint) have already been released in rproc_stop_subdevices(), so
> when rproc_vq_interrupt() accesses these resources it causes a kernel
> dump.
> 
> Call trace:
>  virtqueue_add_split+0x1ac/0x560
>  virtqueue_add_inbuf+0x4c/0x60
>  rpmsg_recv_done+0x15c/0x294
>  vring_interrupt+0x6c/0xa4
>  rproc_vq_interrupt+0x30/0x50
>  imx_dsp_rproc_vq_work+0x24/0x40 [imx_dsp_rproc]
>  process_one_work+0x1d0/0x354
>  worker_thread+0x13c/0x470
>  kthread+0x154/0x160
>  ret_from_fork+0x10/0x20
> 
> Add mutex protection in imx_dsp_rproc_vq_work(): if the state is not
> RPROC_RUNNING, skip calling rproc_vq_interrupt().
> 
> For the same reason, the flush_work() call cannot be kept in the rproc
> stop path: the flush itself would run the pending work after the
> subdevices have been stopped.
> 
> Fixes: ec0e5549f358 ("remoteproc: imx_dsp_rproc: Add remoteproc driver for DSP on i.MX")
> Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
> ---
>  drivers/remoteproc/imx_dsp_rproc.c | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/remoteproc/imx_dsp_rproc.c b/drivers/remoteproc/imx_dsp_rproc.c
> index 899aa8dd12f0..95da1cbefacf 100644
> --- a/drivers/remoteproc/imx_dsp_rproc.c
> +++ b/drivers/remoteproc/imx_dsp_rproc.c
> @@ -347,9 +347,6 @@ static int imx_dsp_rproc_stop(struct rproc *rproc)
>  	struct device *dev = rproc->dev.parent;
>  	int ret = 0;
>  
> -	/* Make sure work is finished */
> -	flush_work(&priv->rproc_work);
> -

The kernel documentation for this function [1] indicates that once it
returns there will be no more jobs to process in that queue, _unless_
another job has been queued _after_ the flush has started.  What I
suspect is happening here is that a new job is queued between the time
flush_work() returns and the remote processor is switched off, something
that should not be happening since all the subdevices have been stopped
in rproc_stop_subdevices().

[1]. https://elixir.bootlin.com/linux/v6.0-rc7/source/kernel/workqueue.c#L3092
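
To make the problematic ordering concrete, here is a deliberately buggy
userspace analogue (pthreads; all names are illustrative): the "stop"
path releases a resource before the pending work item runs, which is the
userspace equivalent of the reported dump. A semaphore forces the
late-work ordering that is only intermittent in the real driver:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>

static int *vq_resource;	/* stand-in for the rpmsg device/endpoint */
static sem_t stop_done;		/* forces the work item to run late */

static void *vq_work(void *arg)
{
	(void)arg;
	sem_wait(&stop_done);
	/* No state check: this dereference is a use-after-free, the
	 * analogue of rproc_vq_interrupt() touching freed resources. */
	printf("late vq work reads %d\n", *vq_resource);
	return NULL;
}

int main(void)
{
	pthread_t worker;

	sem_init(&stop_done, 0, 0);
	vq_resource = malloc(sizeof(*vq_resource));
	*vq_resource = 42;

	pthread_create(&worker, NULL, vq_work, NULL);

	/* Stop path: resources are released first, as in
	 * rproc_stop_subdevices()... */
	free(vq_resource);
	sem_post(&stop_done);

	/* ...then the pending work runs (joined here, like a flush). */
	pthread_join(worker, NULL);
	return 0;
}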


>  	if (rproc->state == RPROC_CRASHED) {
>  		priv->flags &= ~REMOTE_IS_READY;
>  		return 0;
> @@ -432,9 +429,18 @@ static void imx_dsp_rproc_vq_work(struct work_struct *work)
>  {
>  	struct imx_dsp_rproc *priv = container_of(work, struct imx_dsp_rproc,
>  						  rproc_work);
> +	struct rproc *rproc = priv->rproc;
> +
> +	mutex_lock(&rproc->lock);
> +
> +	if (rproc->state != RPROC_RUNNING)
> +		goto unlock_mutex;
>  
>  	rproc_vq_interrupt(priv->rproc, 0);
>  	rproc_vq_interrupt(priv->rproc, 1);

These are not guaranteed to be atomic, and sleeping with the mutex held
is guaranteed to deadlock the system.

Thanks,
Mathieu

> +
> +unlock_mutex:
> +	mutex_unlock(&rproc->lock);
>  }
>  
>  /**
> -- 
> 2.34.1
>
Mathieu Poirier Sept. 29, 2022, 5:13 p.m. UTC | #3
On Thu, Sep 29, 2022 at 10:03:21AM +0800, Shengjiu Wang wrote:
> On Thu, Sep 29, 2022 at 1:20 AM Mathieu Poirier <mathieu.poirier@linaro.org>
> wrote:
> 
> > On Mon, Sep 26, 2022 at 07:48:13PM +0800, Shengjiu Wang wrote:
> > > The workqueue may execute late, even after the remote processor has
> > > been stopped or is stopping. Some resources (the rpmsg device and
> > > endpoint) have already been released in rproc_stop_subdevices(), so
> > > when rproc_vq_interrupt() accesses these resources it causes a kernel
> > > dump.
> > >
> > > Call trace:
> > >  virtqueue_add_split+0x1ac/0x560
> > >  virtqueue_add_inbuf+0x4c/0x60
> > >  rpmsg_recv_done+0x15c/0x294
> > >  vring_interrupt+0x6c/0xa4
> > >  rproc_vq_interrupt+0x30/0x50
> > >  imx_dsp_rproc_vq_work+0x24/0x40 [imx_dsp_rproc]
> > >  process_one_work+0x1d0/0x354
> > >  worker_thread+0x13c/0x470
> > >  kthread+0x154/0x160
> > >  ret_from_fork+0x10/0x20
> > >
> > > Add mutex protection in imx_dsp_rproc_vq_work(): if the state is not
> > > RPROC_RUNNING, skip calling rproc_vq_interrupt().
> > >
> > > For the same reason, the flush_work() call cannot be kept in the rproc
> > > stop path: the flush itself would run the pending work after the
> > > subdevices have been stopped.
> > >
> > > Fixes: ec0e5549f358 ("remoteproc: imx_dsp_rproc: Add remoteproc driver for DSP on i.MX")
> > > Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
> > > ---
> > >  drivers/remoteproc/imx_dsp_rproc.c | 12 +++++++++---
> > >  1 file changed, 9 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/drivers/remoteproc/imx_dsp_rproc.c b/drivers/remoteproc/imx_dsp_rproc.c
> > > index 899aa8dd12f0..95da1cbefacf 100644
> > > --- a/drivers/remoteproc/imx_dsp_rproc.c
> > > +++ b/drivers/remoteproc/imx_dsp_rproc.c
> > > @@ -347,9 +347,6 @@ static int imx_dsp_rproc_stop(struct rproc *rproc)
> > >       struct device *dev = rproc->dev.parent;
> > >       int ret = 0;
> > >
> > > -     /* Make sure work is finished */
> > > -     flush_work(&priv->rproc_work);
> > > -
> >
> > The kernel documentation for this function [1] indicates that once it
> > returns there will be no more jobs to process in that queue, _unless_
> > another job has been queued _after_ the flush has started.  What I
> > suspect is happening here is that a new job is queued between the time
> > flush_work() returns and the remote processor is switched off, something
> > that should not be happening since all the subdevices have been stopped
> > in rproc_stop_subdevices().
> >
> > [1]. https://elixir.bootlin.com/linux/v6.0-rc7/source/kernel/workqueue.c#L3092
> 
> 
> The call sequence with echo stop > remoteproc is:
> 
> rproc_shutdown
> -> rproc_stop
>    ->*rproc_stop_subdevices*
>    ->rproc->ops->stop()
>        ->imx_dsp_rproc_stop
>            ->*flush_work*
>               -> rproc_vq_interrupt

I understand now - thanks for the details.  Please send me another revision with
the above call sequence in the patch changelog.  The one that is currently there
is obscure and doesn't provide a clear picture of what the problem is.

> 
> So the *flush_work* is not safe, because the resources have been
> released in *rproc_stop_subdevices*: the resources needed by
> rproc_vq_interrupt are not accessible anymore.
> 
> 
> 
> >
> >
> >
> > >       if (rproc->state == RPROC_CRASHED) {
> > >               priv->flags &= ~REMOTE_IS_READY;
> > >               return 0;
> > > @@ -432,9 +429,18 @@ static void imx_dsp_rproc_vq_work(struct work_struct *work)
> > >  {
> > >       struct imx_dsp_rproc *priv = container_of(work, struct imx_dsp_rproc,
> > >                                                 rproc_work);
> > > +     struct rproc *rproc = priv->rproc;
> > > +
> > > +     mutex_lock(&rproc->lock);
> > > +
> > > +     if (rproc->state != RPROC_RUNNING)
> > > +             goto unlock_mutex;
> > >
> > >       rproc_vq_interrupt(priv->rproc, 0);
> > >       rproc_vq_interrupt(priv->rproc, 1);
> >
> > These are not guaranteed to be atomic, and sleeping with the mutex held
> > is guaranteed to deadlock the system.
> >
> A spinlock would be a problem with sleeping, but here we are using a
> mutex, so it should be ok, right?

I was thinking more about this worker thread executing concurrently with
the remoteproc core, but we are fine as long as a single mutex is used.

> 
> best regards
> wang shengjiu
> 
> > +
> > > +unlock_mutex:
> > > +     mutex_unlock(&rproc->lock);
> > >  }
> > >
> > >  /**
> > > --
> > > 2.34.1
> > >
> >
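
Following Peng's observation above, a hedged sketch of how the same guard
could look in another remoteproc driver's vq work handler. The driver
struct and names here are hypothetical; only mutex_lock()/mutex_unlock(),
rproc->lock, rproc->state, RPROC_RUNNING and rproc_vq_interrupt() are
taken from the kernel and from this patch:

#include <linux/kernel.h>
#include <linux/mutex.h>
#include <linux/remoteproc.h>
#include <linux/workqueue.h>

/* Hypothetical driver-private data; field names are illustrative. */
struct foo_rproc {
	struct rproc *rproc;
	struct work_struct rproc_work;
};

static void foo_rproc_vq_work(struct work_struct *work)
{
	struct foo_rproc *priv = container_of(work, struct foo_rproc,
					      rproc_work);
	struct rproc *rproc = priv->rproc;

	/* Same guard as this patch: take the core's mutex and re-check
	 * the state so a late work item becomes a no-op once
	 * rproc_stop_subdevices() has torn down the rpmsg resources. */
	mutex_lock(&rproc->lock);
	if (rproc->state == RPROC_RUNNING) {
		rproc_vq_interrupt(rproc, 0);
		rproc_vq_interrupt(rproc, 1);
	}
	mutex_unlock(&rproc->lock);
}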

Patch

diff --git a/drivers/remoteproc/imx_dsp_rproc.c b/drivers/remoteproc/imx_dsp_rproc.c
index 899aa8dd12f0..95da1cbefacf 100644
--- a/drivers/remoteproc/imx_dsp_rproc.c
+++ b/drivers/remoteproc/imx_dsp_rproc.c
@@ -347,9 +347,6 @@ static int imx_dsp_rproc_stop(struct rproc *rproc)
 	struct device *dev = rproc->dev.parent;
 	int ret = 0;
 
-	/* Make sure work is finished */
-	flush_work(&priv->rproc_work);
-
 	if (rproc->state == RPROC_CRASHED) {
 		priv->flags &= ~REMOTE_IS_READY;
 		return 0;
@@ -432,9 +429,18 @@ static void imx_dsp_rproc_vq_work(struct work_struct *work)
 {
 	struct imx_dsp_rproc *priv = container_of(work, struct imx_dsp_rproc,
 						  rproc_work);
+	struct rproc *rproc = priv->rproc;
+
+	mutex_lock(&rproc->lock);
+
+	if (rproc->state != RPROC_RUNNING)
+		goto unlock_mutex;
 
 	rproc_vq_interrupt(priv->rproc, 0);
 	rproc_vq_interrupt(priv->rproc, 1);
+
+unlock_mutex:
+	mutex_unlock(&rproc->lock);
 }
 
 /**