
[v2,1/2] dmaengine: cppi41: Fix list not empty warning on runtime suspend

Message ID 20170109170337.6957-2-abailon@baylibre.com (mailing list archive)
State New, archived

Commit Message

Alexandre Bailon Jan. 9, 2017, 5:03 p.m. UTC
Sometimes a transfer may not get queued, due to a race between runtime PM
and cppi41_dma_issue_pending():
cppi41_runtime_resume() may be interrupted right before it updates the
device's runtime PM status to RPM_ACTIVE.
When that happens and a new DMA transfer is issued, the descriptor is added
to the pending list because the device is not yet in the active state.
But since descriptors on the pending list are only queued to cppi41 on
runtime resume, that descriptor is never queued.
On runtime suspend the list is then not empty, which triggers a warning.
Queue the descriptor if the device is active or resuming.

Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
---
 drivers/dma/cppi41.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)
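
To make the race easier to follow, here is an illustrative timeline (a sketch
based on the description above, not part of the original posting):

    cppi41_runtime_resume()                  cppi41_dma_issue_pending()
    -----------------------                  --------------------------
    drains cdd->pending and pushes
    the queued descriptors to cppi41
    <preempted before the PM core sets
     runtime_status to RPM_ACTIVE>
                                             pm_runtime_get()    -> -EINPROGRESS
                                             pm_runtime_active() -> false
                                             pending_desc(c)     -> descriptor lands on
                                                                    cdd->pending, but the
                                                                    resume has already run,
                                                                    so it is never pushed
    ...
    cppi41_runtime_suspend()
    WARN_ON(!list_empty(&cdd->pending))      -> the warning this patch addresses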

Comments

Sergei Shtylyov Jan. 9, 2017, 6:08 p.m. UTC | #1
Hello!

On 01/09/2017 08:03 PM, Alexandre Bailon wrote:

> Sometime, a transfer may not be queued due to a race between runtime pm
> and cppi41_dma_issue_pending().
> Sometime, cppi41_runtime_resume() may be interrupted right before to
> update device PM state to RESUMED.
> When it happens, if a new dma transfer is issued, because the device is not
> in active state, the descriptor will be added to the pendding list.

    Pending.

> But because the descriptors in the pendding list are only queued to cppi41

    Likewise.

> on runtime resume, the descriptor will not be queued.
> On runtime suspend, the list is not empty, which is causing a warning.
> Queue the descriptor if the device is active or resuming.
>
> Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
[...]

MBR, Sergei

Grygorii Strashko Jan. 9, 2017, 6:16 p.m. UTC | #2
On 01/09/2017 11:03 AM, Alexandre Bailon wrote:
> Sometime, a transfer may not be queued due to a race between runtime pm
> and cppi41_dma_issue_pending().
> Sometime, cppi41_runtime_resume() may be interrupted right before to
> update device PM state to RESUMED.
> When it happens, if a new dma transfer is issued, because the device is not
> in active state, the descriptor will be added to the pendding list.
> But because the descriptors in the pendding list are only queued to cppi41
> on runtime resume, the descriptor will not be queued.
> On runtime suspend, the list is not empty, which is causing a warning.
> Queue the descriptor if the device is active or resuming.
> 
> Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
> ---
>  drivers/dma/cppi41.c | 18 +++++++++++++++++-
>  1 file changed, 17 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/dma/cppi41.c b/drivers/dma/cppi41.c
> index d5ba43a..025fee4 100644
> --- a/drivers/dma/cppi41.c
> +++ b/drivers/dma/cppi41.c
> @@ -471,6 +471,8 @@ static void cppi41_dma_issue_pending(struct dma_chan *chan)
>  {
>  	struct cppi41_channel *c = to_cpp41_chan(chan);
>  	struct cppi41_dd *cdd = c->cdd;
> +	unsigned long flags;
> +	bool active;
>  	int error;
>  
>  	error = pm_runtime_get(cdd->ddev.dev);
> @@ -482,7 +484,21 @@ static void cppi41_dma_issue_pending(struct dma_chan *chan)
>  		return;
>  	}
>  
> -	if (likely(pm_runtime_active(cdd->ddev.dev)))
> +	active = pm_runtime_active(cdd->ddev.dev);
> +	if (!active) {

Just curious, what prevents us from using pm_runtime_get_sync() here and in
cppi41_dma_issue_pending()?

> +		/*
> +		 * Runtime resume may be interrupted before runtime_status
> +		 * has been updated. Test if device has resumed.
> +		 */
> +		if (error == -EINPROGRESS) {
> +			spin_lock_irqsave(&cdd->lock, flags);
> +			if (list_empty(&cdd->pending))
> +				active = true;
> +			spin_unlock_irqrestore(&cdd->lock, flags);
> +		}
> +	}
> +
> +	if (likely(active))
>  		push_desc_queue(c);
>  	else
>  		pending_desc(c);
>
Tony Lindgren Jan. 9, 2017, 6:34 p.m. UTC | #3
Hi,

* Alexandre Bailon <abailon@baylibre.com> [170109 09:04]:
> Sometime, a transfer may not be queued due to a race between runtime pm
> and cppi41_dma_issue_pending().
> Sometime, cppi41_runtime_resume() may be interrupted right before to
> update device PM state to RESUMED.
> When it happens, if a new dma transfer is issued, because the device is not
> in active state, the descriptor will be added to the pendding list.
> But because the descriptors in the pendding list are only queued to cppi41
> on runtime resume, the descriptor will not be queued.
> On runtime suspend, the list is not empty, which is causing a warning.
> Queue the descriptor if the device is active or resuming.
> 
> Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
> ---
>  drivers/dma/cppi41.c | 18 +++++++++++++++++-
>  1 file changed, 17 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/dma/cppi41.c b/drivers/dma/cppi41.c
> index d5ba43a..025fee4 100644
> --- a/drivers/dma/cppi41.c
> +++ b/drivers/dma/cppi41.c
> @@ -471,6 +471,8 @@ static void cppi41_dma_issue_pending(struct dma_chan *chan)
>  {
>  	struct cppi41_channel *c = to_cpp41_chan(chan);
>  	struct cppi41_dd *cdd = c->cdd;
> +	unsigned long flags;
> +	bool active;
>  	int error;
>  
>  	error = pm_runtime_get(cdd->ddev.dev);
> @@ -482,7 +484,21 @@ static void cppi41_dma_issue_pending(struct dma_chan *chan)
>  		return;
>  	}
>  
> -	if (likely(pm_runtime_active(cdd->ddev.dev)))
> +	active = pm_runtime_active(cdd->ddev.dev);
> +	if (!active) {
> +		/*
> +		 * Runtime resume may be interrupted before runtime_status
> +		 * has been updated. Test if device has resumed.
> +		 */
> +		if (error == -EINPROGRESS) {
> +			spin_lock_irqsave(&cdd->lock, flags);
> +			if (list_empty(&cdd->pending))
> +				active = true;
> +			spin_unlock_irqrestore(&cdd->lock, flags);
> +		}
> +	}
> +
> +	if (likely(active))
>  		push_desc_queue(c);
>  	else
>  		pending_desc(c);

What guarantees that the PM runtime state is really active a few lines later?

A safer approach might be to check the queue for new entries in
cppi41_runtime_resume() by using "while (!list_empty())" instead of
list_for_each_entry(). That releases the spinlock between entries
and rechecks the list each time around.

And instead of doing WARN_ON(!list_empty(&cdd->pending)), it seems we
should also run the queue in cppi41_runtime_suspend()?
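
For illustration only, a sketch of this alternative (not code from the thread;
it assumes struct cppi41_channel is linked into cdd->pending through a
list_head member named "node", and that the cppi41_dd is the device's
drvdata), cppi41_runtime_resume() could drain the queue like this:

static int cppi41_runtime_resume(struct device *dev)
{
	struct cppi41_dd *cdd = dev_get_drvdata(dev);
	struct cppi41_channel *c;
	unsigned long flags;

	spin_lock_irqsave(&cdd->lock, flags);
	/* Re-check the list on every iteration so that descriptors added
	 * while the lock was dropped are still picked up here.
	 */
	while (!list_empty(&cdd->pending)) {
		c = list_first_entry(&cdd->pending, struct cppi41_channel, node);
		list_del_init(&c->node);
		spin_unlock_irqrestore(&cdd->lock, flags);

		push_desc_queue(c);

		spin_lock_irqsave(&cdd->lock, flags);
	}
	spin_unlock_irqrestore(&cdd->lock, flags);

	return 0;
}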

Regards,

Tony
Alexandre Bailon Jan. 10, 2017, 9:46 a.m. UTC | #4
On 01/09/2017 07:16 PM, Grygorii Strashko wrote:
> 
> 
> On 01/09/2017 11:03 AM, Alexandre Bailon wrote:
>> Sometime, a transfer may not be queued due to a race between runtime pm
>> and cppi41_dma_issue_pending().
>> Sometime, cppi41_runtime_resume() may be interrupted right before to
>> update device PM state to RESUMED.
>> When it happens, if a new dma transfer is issued, because the device is not
>> in active state, the descriptor will be added to the pendding list.
>> But because the descriptors in the pendding list are only queued to cppi41
>> on runtime resume, the descriptor will not be queued.
>> On runtime suspend, the list is not empty, which is causing a warning.
>> Queue the descriptor if the device is active or resuming.
>>
>> Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
>> ---
>>  drivers/dma/cppi41.c | 18 +++++++++++++++++-
>>  1 file changed, 17 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/dma/cppi41.c b/drivers/dma/cppi41.c
>> index d5ba43a..025fee4 100644
>> --- a/drivers/dma/cppi41.c
>> +++ b/drivers/dma/cppi41.c
>> @@ -471,6 +471,8 @@ static void cppi41_dma_issue_pending(struct dma_chan *chan)
>>  {
>>  	struct cppi41_channel *c = to_cpp41_chan(chan);
>>  	struct cppi41_dd *cdd = c->cdd;
>> +	unsigned long flags;
>> +	bool active;
>>  	int error;
>>  
>>  	error = pm_runtime_get(cdd->ddev.dev);
>> @@ -482,7 +484,21 @@ static void cppi41_dma_issue_pending(struct dma_chan *chan)
>>  		return;
>>  	}
>>  
>> -	if (likely(pm_runtime_active(cdd->ddev.dev)))
>> +	active = pm_runtime_active(cdd->ddev.dev);
>> +	if (!active) {
> 
> just curious, what does prevent from using pm_runtime_get_sync() here and in
> cppi41_dma_issue_pending()?
This function is called from atomic or interrupt context.
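
For illustration (a sketch not taken from this thread): pm_runtime_get_sync()
may sleep until the resume callback has finished, while pm_runtime_get() only
queues the resume, so from atomic context the driver has to use the
asynchronous variant and handle the -EINPROGRESS window itself:

	/* Safe in atomic context: bumps the usage count and at most queues an
	 * asynchronous resume request; may return -EINPROGRESS while the
	 * resume is still running.
	 */
	error = pm_runtime_get(cdd->ddev.dev);

	/* Not safe here: may sleep until cppi41_runtime_resume() has completed. */
	error = pm_runtime_get_sync(cdd->ddev.dev);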
> 
>> +		/*
>> +		 * Runtime resume may be interrupted before runtime_status
>> +		 * has been updated. Test if device has resumed.
>> +		 */
>> +		if (error == -EINPROGRESS) {
>> +			spin_lock_irqsave(&cdd->lock, flags);
>> +			if (list_empty(&cdd->pending))
>> +				active = true;
>> +			spin_unlock_irqrestore(&cdd->lock, flags);
>> +		}
>> +	}
>> +
>> +	if (likely(active))
>>  		push_desc_queue(c);
>>  	else
>>  		pending_desc(c);
>>
> 


Patch

diff --git a/drivers/dma/cppi41.c b/drivers/dma/cppi41.c
index d5ba43a..025fee4 100644
--- a/drivers/dma/cppi41.c
+++ b/drivers/dma/cppi41.c
@@ -471,6 +471,8 @@  static void cppi41_dma_issue_pending(struct dma_chan *chan)
 {
 	struct cppi41_channel *c = to_cpp41_chan(chan);
 	struct cppi41_dd *cdd = c->cdd;
+	unsigned long flags;
+	bool active;
 	int error;
 
 	error = pm_runtime_get(cdd->ddev.dev);
@@ -482,7 +484,21 @@  static void cppi41_dma_issue_pending(struct dma_chan *chan)
 		return;
 	}
 
-	if (likely(pm_runtime_active(cdd->ddev.dev)))
+	active = pm_runtime_active(cdd->ddev.dev);
+	if (!active) {
+		/*
+		 * Runtime resume may be interrupted before runtime_status
+		 * has been updated. Test if device has resumed.
+		 */
+		if (error == -EINPROGRESS) {
+			spin_lock_irqsave(&cdd->lock, flags);
+			if (list_empty(&cdd->pending))
+				active = true;
+			spin_unlock_irqrestore(&cdd->lock, flags);
+		}
+	}
+
+	if (likely(active))
 		push_desc_queue(c);
 	else
 		pending_desc(c);