scsi: ufs: core: fix devfreq deadlocks

Message ID 20221222102121.18682-1-johan+linaro@kernel.org (mailing list archive)
State Superseded
Series scsi: ufs: core: fix devfreq deadlocks

Commit Message

Johan Hovold Dec. 22, 2022, 10:21 a.m. UTC
There is a lock inversion and rwsem read-lock recursion in the devfreq
target callback which can lead to deadlocks.

Specifically, ufshcd_devfreq_scale() already holds a clk_scaling_lock
read lock when toggling the write booster, which involves taking the
dev_cmd mutex before taking another clk_scaling_lock read lock.

This can lead to a deadlock if another thread:

  1) tries to acquire the dev_cmd and clk_scaling locks in the correct
     order, or

  2) takes a clk_scaling write lock before the attempt to take the
     clk_scaling read lock a second time.
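
For illustration, the following stand-alone userspace program reproduces the
generic ABBA inversion behind scenario 1 using plain POSIX mutexes in place of
the kernel's dev_cmd mutex and clk_scaling rwsem. This is not the driver code;
in the driver the clk_scaling side is taken as a read lock, so an actual hang
additionally involves a queued writer as in scenario 2. The names are purely
illustrative.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t clk_scaling = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t dev_cmd = PTHREAD_MUTEX_INITIALIZER;

/* devfreq-style path: clk_scaling -> dev_cmd */
static void *devfreq_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&clk_scaling);
	usleep(1000);			/* widen the race window */
	pthread_mutex_lock(&dev_cmd);
	pthread_mutex_unlock(&dev_cmd);
	pthread_mutex_unlock(&clk_scaling);
	return NULL;
}

/* device-command-style path: dev_cmd -> clk_scaling */
static void *dev_cmd_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&dev_cmd);
	usleep(1000);
	pthread_mutex_lock(&clk_scaling);
	pthread_mutex_unlock(&clk_scaling);
	pthread_mutex_unlock(&dev_cmd);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, devfreq_path, NULL);
	pthread_create(&b, NULL, dev_cmd_path, NULL);
	pthread_join(a, NULL);		/* typically hangs here: A holds clk_scaling
					 * and waits for dev_cmd, B holds dev_cmd
					 * and waits for clk_scaling */
	pthread_join(b, NULL);
	printf("no deadlock this time\n");
	return 0;
}

Build with "gcc -pthread" and run; the usleep() lets both threads take their
first lock before attempting the second, so the run almost always hangs.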

Fix this by dropping the clk_scaling_lock before toggling the write
booster as was done before commit 0e9d4ca43ba8 ("scsi: ufs: Protect some
contexts from unexpected clock scaling").

While the devfreq callbacks are already serialised, add a second
serialising mutex to handle the unlikely case where a callback triggered
through the devfreq sysfs interface is racing with a request to disable
clock scaling through the UFS controller 'clkscale_enable' sysfs
attribute. This could otherwise lead to the write booster being left
disabled after having disabled clock scaling.

Also take the new mutex in ufshcd_clk_scaling_allow() to make sure that
any pending write booster update has completed on return.
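
The sketch below is a minimal userspace model (illustrative names, not the
driver code) of the ordering guarantee this provides: because the scale path
checks whether scaling is still allowed and updates the write booster state
while holding the same mutex that the disable path takes, any in-flight write
booster update has completed by the time the disable returns, and any later
racing update becomes a no-op.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t wb_mutex = PTHREAD_MUTEX_INITIALIZER;
static bool scaling_allowed = true;
static bool wb_enabled;

/* models ufshcd_devfreq_scale(): check and update WB state under wb_mutex */
static void *scale(void *scale_up)
{
	pthread_mutex_lock(&wb_mutex);
	if (scaling_allowed)
		wb_enabled = *(bool *)scale_up;	/* stand-in for ufshcd_wb_toggle() */
	pthread_mutex_unlock(&wb_mutex);
	return NULL;
}

/* models ufshcd_clk_scaling_allow(hba, false): once this returns, any
 * in-flight WB update has finished and no later one can slip in */
static void disallow_scaling(void)
{
	pthread_mutex_lock(&wb_mutex);
	scaling_allowed = false;
	pthread_mutex_unlock(&wb_mutex);
}

int main(void)
{
	pthread_t t;
	bool down = false;

	pthread_create(&t, NULL, scale, &down);	/* racing scale-down */
	disallow_scaling();
	pthread_join(t, NULL);
	/* Whichever side wins, the outcome is consistent: either the
	 * scale-down finished before scaling was disallowed, or it was
	 * turned into a no-op. */
	printf("wb_enabled=%d scaling_allowed=%d\n", wb_enabled, scaling_allowed);
	return 0;
}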

Note that this currently only affects Qualcomm platforms since commit
87bd05016a64 ("scsi: ufs: core: Allow host driver to disable wb toggling
during clock scaling").

The lock inversion (i.e. 1 above) was reported by lockdep as:

 ======================================================
 WARNING: possible circular locking dependency detected
 6.1.0-next-20221216 #211 Not tainted
 ------------------------------------------------------
 kworker/u16:2/71 is trying to acquire lock:
 ffff076280ba98a0 (&hba->dev_cmd.lock){+.+.}-{3:3}, at: ufshcd_query_flag+0x50/0x1c0

 but task is already holding lock:
 ffff076280ba9cf0 (&hba->clk_scaling_lock){++++}-{3:3}, at: ufshcd_devfreq_scale+0x2b8/0x380

 which lock already depends on the new lock.
 the existing dependency chain (in reverse order) is:

 -> #1 (&hba->clk_scaling_lock){++++}-{3:3}:
        lock_acquire+0x68/0x90
        down_read+0x58/0x80
        ufshcd_exec_dev_cmd+0x70/0x2c0
        ufshcd_verify_dev_init+0x68/0x170
        ufshcd_probe_hba+0x398/0x1180
        ufshcd_async_scan+0x30/0x320
        async_run_entry_fn+0x34/0x150
        process_one_work+0x288/0x6c0
        worker_thread+0x74/0x450
        kthread+0x118/0x120
        ret_from_fork+0x10/0x20

 -> #0 (&hba->dev_cmd.lock){+.+.}-{3:3}:
        __lock_acquire+0x12a0/0x2240
        lock_acquire.part.0+0xcc/0x220
        lock_acquire+0x68/0x90
        __mutex_lock+0x98/0x430
        mutex_lock_nested+0x2c/0x40
        ufshcd_query_flag+0x50/0x1c0
        ufshcd_query_flag_retry+0x64/0x100
        ufshcd_wb_toggle+0x5c/0x120
        ufshcd_devfreq_scale+0x2c4/0x380
        ufshcd_devfreq_target+0xf4/0x230
        devfreq_set_target+0x84/0x2f0
        devfreq_update_target+0xc4/0xf0
        devfreq_monitor+0x38/0x1f0
        process_one_work+0x288/0x6c0
        worker_thread+0x74/0x450
        kthread+0x118/0x120
        ret_from_fork+0x10/0x20

 other info that might help us debug this:
  Possible unsafe locking scenario:
        CPU0                    CPU1
        ----                    ----
   lock(&hba->clk_scaling_lock);
                                lock(&hba->dev_cmd.lock);
                                lock(&hba->clk_scaling_lock);
   lock(&hba->dev_cmd.lock);

  *** DEADLOCK ***

Fixes: 0e9d4ca43ba8 ("scsi: ufs: Protect some contexts from unexpected clock scaling")
Cc: stable@vger.kernel.org      # 5.12
Cc: Can Guo <quic_cang@quicinc.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
---

This issue has apparently been known for over a year [1] without anyone
bothering to fix it. Aside from the potential deadlocks, it also prevents
developers using Qualcomm platforms from relying on lockdep to catch
further issues like this before they are introduced in other places.

Johan

[1] https://lore.kernel.org/lkml/1631843521-2863-1-git-send-email-cang@codeaurora.org


 drivers/ufs/core/ufshcd.c | 29 +++++++++++++++--------------
 include/ufs/ufshcd.h      |  2 ++
 2 files changed, 17 insertions(+), 14 deletions(-)

Comments

Andrew Halaney Jan. 3, 2023, 9:28 p.m. UTC | #1
On Thu, Dec 22, 2022 at 11:21:21AM +0100, Johan Hovold wrote:
> There is a lock inversion and rwsem read-lock recursion in the devfreq
> target callback which can lead to deadlocks.
> 
> Specifically, ufshcd_devfreq_scale() already holds a clk_scaling_lock
> read lock when toggling the write booster, which involves taking the
> dev_cmd mutex before taking another clk_scaling_lock read lock.
> 
> This can lead to a deadlock if another thread:
> 
>   1) tries to acquire the dev_cmd and clk_scaling locks in the correct
>      order, or
> 
>   2) takes a clk_scaling write lock before the attempt to take the
>      clk_scaling read lock a second time.
> 
> Fix this by dropping the clk_scaling_lock before toggling the write
> booster as was done before commit 0e9d4ca43ba8 ("scsi: ufs: Protect some
> contexts from unexpected clock scaling").
> 
> While the devfreq callbacks are already serialised, add a second
> serialising mutex to handle the unlikely case where a callback triggered
> through the devfreq sysfs interface is racing with a request to disable
> clock scaling through the UFS controller 'clkscale_enable' sysfs
> attribute. This could otherwise lead to the write booster being left
> disabled after having disabled clock scaling.
> 
> Also take the new mutex in ufshcd_clk_scaling_allow() to make sure that
> any pending write booster update has completed on return.
> 
> Note that this currently only affects Qualcomm platforms since commit
> 87bd05016a64 ("scsi: ufs: core: Allow host driver to disable wb toggling
> during clock scaling").
> 
> The lock inversion (i.e. 1 above) was reported by lockdep as:
> 
>  ======================================================
>  WARNING: possible circular locking dependency detected
>  6.1.0-next-20221216 #211 Not tainted
>  ------------------------------------------------------
>  kworker/u16:2/71 is trying to acquire lock:
>  ffff076280ba98a0 (&hba->dev_cmd.lock){+.+.}-{3:3}, at: ufshcd_query_flag+0x50/0x1c0
> 
>  but task is already holding lock:
>  ffff076280ba9cf0 (&hba->clk_scaling_lock){++++}-{3:3}, at: ufshcd_devfreq_scale+0x2b8/0x380
> 
>  which lock already depends on the new lock.
>  the existing dependency chain (in reverse order) is:
> 
>  -> #1 (&hba->clk_scaling_lock){++++}-{3:3}:
>         lock_acquire+0x68/0x90
>         down_read+0x58/0x80
>         ufshcd_exec_dev_cmd+0x70/0x2c0
>         ufshcd_verify_dev_init+0x68/0x170
>         ufshcd_probe_hba+0x398/0x1180
>         ufshcd_async_scan+0x30/0x320
>         async_run_entry_fn+0x34/0x150
>         process_one_work+0x288/0x6c0
>         worker_thread+0x74/0x450
>         kthread+0x118/0x120
>         ret_from_fork+0x10/0x20
> 
>  -> #0 (&hba->dev_cmd.lock){+.+.}-{3:3}:
>         __lock_acquire+0x12a0/0x2240
>         lock_acquire.part.0+0xcc/0x220
>         lock_acquire+0x68/0x90
>         __mutex_lock+0x98/0x430
>         mutex_lock_nested+0x2c/0x40
>         ufshcd_query_flag+0x50/0x1c0
>         ufshcd_query_flag_retry+0x64/0x100
>         ufshcd_wb_toggle+0x5c/0x120
>         ufshcd_devfreq_scale+0x2c4/0x380
>         ufshcd_devfreq_target+0xf4/0x230
>         devfreq_set_target+0x84/0x2f0
>         devfreq_update_target+0xc4/0xf0
>         devfreq_monitor+0x38/0x1f0
>         process_one_work+0x288/0x6c0
>         worker_thread+0x74/0x450
>         kthread+0x118/0x120
>         ret_from_fork+0x10/0x20
> 
>  other info that might help us debug this:
>   Possible unsafe locking scenario:
>         CPU0                    CPU1
>         ----                    ----
>    lock(&hba->clk_scaling_lock);
>                                 lock(&hba->dev_cmd.lock);
>                                 lock(&hba->clk_scaling_lock);
>    lock(&hba->dev_cmd.lock);
> 
>   *** DEADLOCK ***
> 
> Fixes: 0e9d4ca43ba8 ("scsi: ufs: Protect some contexts from unexpected clock scaling")
> Cc: stable@vger.kernel.org      # 5.12
> Cc: Can Guo <quic_cang@quicinc.com>
> Signed-off-by: Johan Hovold <johan+linaro@kernel.org>

Tested-by: Andrew Halaney <ahalaney@redhat.com>

Thanks for taking the time to fix this. After a brief round of testing,
I can no longer get any UFS-related splats to show.

For what it is worth, the change looks good to me as well but I'll leave
it to someone familiar with the UFS core to add a proper Reviewed-by tag.

Thanks,
Andrew
Bart Van Assche Jan. 3, 2023, 9:45 p.m. UTC | #2
On 12/22/22 02:21, Johan Hovold wrote:
> +	/* Enable Write Booster if we have scaled up else disable it */
> +	if (ufshcd_enable_wb_if_scaling_up(hba))
> +		ufshcd_wb_toggle(hba, scale_up);

Hi Asutosh,

This patch is the second complaint about the mechanism that toggles the 
WriteBooster during clock scaling. Can this mechanism be removed entirely?

I think this commit introduced that mechanism: 3d17b9b5ab11 ("scsi: ufs: 
Add write booster feature support"; v5.8).

Thanks,

Bart.
Avri Altman Jan. 4, 2023, 8:24 a.m. UTC | #3
> On 12/22/22 02:21, Johan Hovold wrote:
> > +     /* Enable Write Booster if we have scaled up else disable it */
> > +     if (ufshcd_enable_wb_if_scaling_up(hba))
> > +             ufshcd_wb_toggle(hba, scale_up);
> 
> Hi Asutosh,
> 
> This patch is the second complaint about the mechanism that toggles the
> WriteBooster during clock scaling. Can this mechanism be removed entirely?
Commit 87bd05016a64, which introduced UFSHCD_CAP_WB_WITH_CLK_SCALING, enables
platform vendors and OEMs to keep WB toggling, should they choose to do so.
Why remove it in its entirety?

Thanks,
Avri

> 
> I think this commit introduced that mechanism: 3d17b9b5ab11 ("scsi: ufs:
> Add write booster feature support"; v5.8).
> 
> Thanks,
> 
> Bart.
Asutosh Das Jan. 4, 2023, 2:10 p.m. UTC | #4
On Tue, Jan 03 2023 at 13:45 -0800, Bart Van Assche wrote:
>On 12/22/22 02:21, Johan Hovold wrote:
>>+	/* Enable Write Booster if we have scaled up else disable it */
>>+	if (ufshcd_enable_wb_if_scaling_up(hba))
>>+		ufshcd_wb_toggle(hba, scale_up);
>
>Hi Asutosh,
>
>This patch is the second complaint about the mechanism that toggles 
>the WriteBooster during clock scaling. Can this mechanism be removed 
>entirely?
>
>I think this commit introduced that mechanism: 3d17b9b5ab11 ("scsi: 
>ufs: Add write booster feature support"; v5.8).
>
>Thanks,
>
>Bart.

Hello Bart,
Load based toggling of WB seemed fine to me then.
I haven't thought about another method to toggle WriteBooster yet.
Let me see if I can come up with something.
IMT if you have a mechanism in mind, please let me know.

-asd

>
Bart Van Assche Jan. 4, 2023, 10:31 p.m. UTC | #5
On 1/4/23 06:10, Asutosh Das wrote:
> Load based toggling of WB seemed fine to me then.
> I haven't thought about another method to toggle WriteBooster yet.
> Let me see if I can come up with something.
> IMT if you have a mechanism in mind, please let me know.

Hi Asutosh,

Which UFS devices need this mechanism? All UFS devices I'm familiar with 
can achieve wire speed for large write requests without enabling the 
WriteBooster.

Thanks,

Bart.
Bart Van Assche Jan. 4, 2023, 10:34 p.m. UTC | #6
On 1/3/23 00:24, Avri Altman wrote:
>   
>> On 12/22/22 02:21, Johan Hovold wrote:
>>> +     /* Enable Write Booster if we have scaled up else disable it */
>>> +     if (ufshcd_enable_wb_if_scaling_up(hba))
>>> +             ufshcd_wb_toggle(hba, scale_up);
>>
>> Hi Asutosh,
>>
>> This patch is the second complaint about the mechanism that toggles the
>> WriteBooster during clock scaling. Can this mechanism be removed entirely?
> Commit 87bd05016a64, which introduced UFSHCD_CAP_WB_WITH_CLK_SCALING, enables
> platform vendors and OEMs to keep WB toggling, should they choose to do so.
> Why remove it in its entirety?

Hi Avri,

I'm in favor of keeping kernel code simple :-)

UFSHCD_CAP_WB_WITH_CLK_SCALING controls whether or not clock scaling 
affects the WriteBooster depending on which host controller is in use. 
Shouldn't this depend on which UFS device is present instead of on the 
type of host controller?

Thanks,

Bart.
Avri Altman Jan. 5, 2023, 7:21 a.m. UTC | #7
> On 1/4/23 06:10, Asutosh Das wrote:
> > Load based toggling of WB seemed fine to me then.
> > I haven't thought about another method to toggle WriteBooster yet.
> > Let me see if I can come up with something.
> > IMT if you have a mechanism in mind, please let me know.
> 
> Hi Asutosh,
> 
> Which UFS devices need this mechanism? All UFS devices I'm familiar with can
> achieve wire speed for large write requests without enabling the WriteBooster.
This feature assures SLC performance for writes to the WriteBooster buffer,
so enabling it is advantageous as far as write performance is concerned.

As for the toggling functionality, compared to e.g. enabling it at init and leaving it on,
some flash vendors require it because of device health considerations.
This is not the case for us, so let others comment.

Thanks,
Avri
> 
> Thanks,
> 
> Bart.
Bart Van Assche Jan. 5, 2023, 6:38 p.m. UTC | #8
On 12/22/22 02:21, Johan Hovold wrote:
> diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
> index bda61be5f035..5c3821b2fcf8 100644
> --- a/drivers/ufs/core/ufshcd.c
> +++ b/drivers/ufs/core/ufshcd.c
> @@ -1234,12 +1234,14 @@ static int ufshcd_clock_scaling_prepare(struct ufs_hba *hba)
>   	 * clock scaling is in progress
>   	 */
>   	ufshcd_scsi_block_requests(hba);
> +	mutex_lock(&hba->wb_mutex);
>   	down_write(&hba->clk_scaling_lock);
>   
>   	if (!hba->clk_scaling.is_allowed ||
>   	    ufshcd_wait_for_doorbell_clr(hba, DOORBELL_CLR_TOUT_US)) {
>   		ret = -EBUSY;
>   		up_write(&hba->clk_scaling_lock);
> +		mutex_unlock(&hba->wb_mutex);
>   		ufshcd_scsi_unblock_requests(hba);
>   		goto out;
>   	}
> @@ -1251,12 +1253,16 @@ static int ufshcd_clock_scaling_prepare(struct ufs_hba *hba)
>   	return ret;
>   }

Please add an __acquires(&hba->wb_mutex) annotation for sparse.

> -static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, bool writelock)
> +static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, bool scale_up)
>   {
> -	if (writelock)
> -		up_write(&hba->clk_scaling_lock);
> -	else
> -		up_read(&hba->clk_scaling_lock);
> +	up_write(&hba->clk_scaling_lock);
> +
> +	/* Enable Write Booster if we have scaled up else disable it */
> +	if (ufshcd_enable_wb_if_scaling_up(hba))
> +		ufshcd_wb_toggle(hba, scale_up);
> +
> +	mutex_unlock(&hba->wb_mutex);
> +
>   	ufshcd_scsi_unblock_requests(hba);
>   	ufshcd_release(hba);
>   }

Please add a __releases(&hba->wb_mutex) annotation for sparse.

> @@ -1273,7 +1279,6 @@ static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, bool writelock)
>   static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)
>   {
>   	int ret = 0;
> -	bool is_writelock = true;
>   
>   	ret = ufshcd_clock_scaling_prepare(hba);
>   	if (ret)
> @@ -1302,15 +1307,8 @@ static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)
>   		}
>   	}
>   
> -	/* Enable Write Booster if we have scaled up else disable it */
> -	if (ufshcd_enable_wb_if_scaling_up(hba)) {
> -		downgrade_write(&hba->clk_scaling_lock);
> -		is_writelock = false;
> -		ufshcd_wb_toggle(hba, scale_up);
> -	}
> -
>   out_unprepare:
> -	ufshcd_clock_scaling_unprepare(hba, is_writelock);
> +	ufshcd_clock_scaling_unprepare(hba, scale_up);
>   	return ret;
>   }

This patch moves the ufshcd_wb_toggle() from before the out_unprepare 
label to after the out_unprepare label (into 
ufshcd_clock_scaling_unprepare()). Does this change perhaps introduce a 
new call to ufshcd_wb_toggle() in error paths?

Thanks,

Bart.
Jinyoung Choi Jan. 6, 2023, 2:24 a.m. UTC | #9
>> On 1/4/23 06:10, Asutosh Das wrote:
>> > Load based toggling of WB seemed fine to me then.
>> > I haven't thought about another method to toggle WriteBooster yet.
>> > Let me see if I can come up with something.
>> > IMT if you have a mechanism in mind, please let me know.
>> 
>> Hi Asutosh,
>> 
>> Which UFS devices need this mechanism? All UFS devices I'm familiar with can
>> achieve wire speed for large write requests without enabling the WriteBooster.
>This feature assures SLC performance for writes to the WriteBooster buffer,
>so enabling it is advantageous as far as write performance is concerned.

I agree with you. Also, it can be used in various ways.

>As for the toggling functionality, compared to e.g. enabling it at init and leaving it on,
>some flash vendors require it because of device health considerations.
>This is not the case for us, so let others comment.

In our case, it does not matter whether we toggle it or not.
To keep the code simple, enabling it at init and leaving it on seems like a good approach.
Considering device health, WB can be disabled through a lifetime check.

Thanks,
Jinyoung.

>
>Thanks,
>Avri
>> 
>> Thanks,
>> 
>> Bart.
Asutosh Das Jan. 6, 2023, 2:51 p.m. UTC | #10
On Fri, Jan 06 2023 at 18:25 -0800, Jinyoung CHOI wrote:
>>> On 1/4/23 06:10, Asutosh Das wrote:
>>> > Load based toggling of WB seemed fine to me then.
>>> > I haven't thought about another method to toggle WriteBooster yet.
>>> > Let me see if I can come up with something.
>>> > IMT if you have a mechanism in mind, please let me know.
>>>
>>> Hi Asutosh,
>>>
>>> Which UFS devices need this mechanism? All UFS devices I'm familiar with can
>>> achieve wire speed for large write requests without enabling the WriteBooster.
>>This feature assures SLC performance for writes to the WriteBooster buffer,
>>so enabling it is advantageous as far as write performance is concerned.
>
>I agree with you. Also, it can be used in various ways.
>
>>As for the toggling functionality, compared to e.g. enabling it at init and leaving it on,
>>some flash vendors require it because of device health considerations.
>>This is not the case for us, so let others comment.
>
>In our case, it does not matter whether we toggle it or not.
>To keep the code simple, enabling it at init and leaving it on seems like a good approach.
>Considering device health, WB can be disabled through a lifetime check.
>
>Thanks,
>Jinyoung.
>
Hi Jinyoung / Avri / Bart

My understanding during the WriteBooster development was that the endurance of
the SLC buffer is quite limited and hence it's *not* a good idea to always
keep it ON. From the spec:
"
7279 Whenever the endurance of the WriteBooster Buffer is consumed completely, a write command is
7280 processed as if WriteBooster feature was disabled. Therefore, it is recommended to set fWriteBoosterEn to
7281 one, only when WriteBooster performance is needed, so that WriteBooster feature can be used for a longer
7282 time.
"
Going by this, toggling it ON when the load is high and performance is needed seemed reasonable.

Now then, if you confirm that there's no considerable difference in the WB buffer lifetime
between always keeping it on from init and enabling it on demand, then this toggle code
should be removed. I will talk to other UFS device vendors who may not be active on this
list and get back.

-asd

>>
>>Thanks,
>>Avri
>>>
>>> Thanks,
>>>
>>> Bart.
>
Johan Hovold Jan. 16, 2023, 4:03 p.m. UTC | #11
On Thu, Jan 05, 2023 at 10:38:58AM -0800, Bart Van Assche wrote:
> On 12/22/22 02:21, Johan Hovold wrote:
> > diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
> > index bda61be5f035..5c3821b2fcf8 100644
> > --- a/drivers/ufs/core/ufshcd.c
> > +++ b/drivers/ufs/core/ufshcd.c
> > @@ -1234,12 +1234,14 @@ static int ufshcd_clock_scaling_prepare(struct ufs_hba *hba)
> >   	 * clock scaling is in progress
> >   	 */
> >   	ufshcd_scsi_block_requests(hba);
> > +	mutex_lock(&hba->wb_mutex);
> >   	down_write(&hba->clk_scaling_lock);
> >   
> >   	if (!hba->clk_scaling.is_allowed ||
> >   	    ufshcd_wait_for_doorbell_clr(hba, DOORBELL_CLR_TOUT_US)) {
> >   		ret = -EBUSY;
> >   		up_write(&hba->clk_scaling_lock);
> > +		mutex_unlock(&hba->wb_mutex);
> >   		ufshcd_scsi_unblock_requests(hba);
> >   		goto out;
> >   	}
> > @@ -1251,12 +1253,16 @@ static int ufshcd_clock_scaling_prepare(struct ufs_hba *hba)
> >   	return ret;
> >   }
> 
> Please add an __acquires(&hba->wb_mutex) annotation for sparse.
> 
> > -static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, bool writelock)
> > +static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, bool scale_up)
> >   {
> > -	if (writelock)
> > -		up_write(&hba->clk_scaling_lock);
> > -	else
> > -		up_read(&hba->clk_scaling_lock);
> > +	up_write(&hba->clk_scaling_lock);
> > +
> > +	/* Enable Write Booster if we have scaled up else disable it */
> > +	if (ufshcd_enable_wb_if_scaling_up(hba))
> > +		ufshcd_wb_toggle(hba, scale_up);
> > +
> > +	mutex_unlock(&hba->wb_mutex);
> > +
> >   	ufshcd_scsi_unblock_requests(hba);
> >   	ufshcd_release(hba);
> >   }
> 
> Please add a __releases(&hba->wb_mutex) annotation for sparse.

This would actually introduce new sparse warnings as mutex_lock/unlock()
are not sparse annotated:

drivers/ufs/core/ufshcd.c:1254:9: warning: context imbalance in 'ufshcd_clock_scaling_prepare' - wrong count at exit
drivers/ufs/core/ufshcd.c:1266:9: warning: context imbalance in 'ufshcd_clock_scaling_unprepare' - wrong count at exit
drivers/ufs/core/ufshcd.c:1281:12: warning: context imbalance in 'ufshcd_devfreq_scale' - wrong count at exit

I guess it's not worth adding explicit __acquire()/__release() to these
helpers either, given that they are only used in one function.
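
For reference, here is a stand-alone sketch of the annotation style discussed
above (illustration only, not part of the patch; the type and helper names are
made up). The __acquires()/__releases() macros are spelled out the way the
kernel defines them under __CHECKER__, so the snippet builds with gcc and can
be fed to sparse directly. Because the unannotated lock helper, like
mutex_lock(), never changes the tracked context, sparse should report the same
class of "context imbalance ... wrong count at exit" warning quoted above.

#ifdef __CHECKER__
#define __acquires(x)	__attribute__((context(x, 0, 1)))
#define __releases(x)	__attribute__((context(x, 1, 0)))
#else
#define __acquires(x)
#define __releases(x)
#endif

struct example_lock {
	int locked;		/* stand-in for struct mutex */
};

/* no annotation, like mutex_lock()/mutex_unlock() */
static void example_lock(struct example_lock *l)
{
	l->locked = 1;
}

static void example_unlock(struct example_lock *l)
{
	l->locked = 0;
}

/* annotated like the suggested ufshcd_clock_scaling_prepare() */
static void example_prepare(struct example_lock *l)
	__acquires(l)
{
	example_lock(l);
}

/* annotated like the suggested ufshcd_clock_scaling_unprepare() */
static void example_unprepare(struct example_lock *l)
	__releases(l)
{
	example_unlock(l);
}

int main(void)
{
	struct example_lock l = { 0 };

	example_prepare(&l);
	example_unprepare(&l);
	return 0;
}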

> > @@ -1273,7 +1279,6 @@ static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, bool writelock)
> >   static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)
> >   {
> >   	int ret = 0;
> > -	bool is_writelock = true;
> >   
> >   	ret = ufshcd_clock_scaling_prepare(hba);
> >   	if (ret)
> > @@ -1302,15 +1307,8 @@ static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)
> >   		}
> >   	}
> >   
> > -	/* Enable Write Booster if we have scaled up else disable it */
> > -	if (ufshcd_enable_wb_if_scaling_up(hba)) {
> > -		downgrade_write(&hba->clk_scaling_lock);
> > -		is_writelock = false;
> > -		ufshcd_wb_toggle(hba, scale_up);
> > -	}
> > -
> >   out_unprepare:
> > -	ufshcd_clock_scaling_unprepare(hba, is_writelock);
> > +	ufshcd_clock_scaling_unprepare(hba, scale_up);
> >   	return ret;
> >   }
> 
> This patch moves the ufshcd_wb_toggle() from before the out_unprepare 
> label to after the out_unprepare label (into 
> ufshcd_clock_scaling_unprepare()). Does this change perhaps introduce a 
> new call to ufshcd_wb_toggle() in error paths?

Thanks for spotting that. I'll leave the setting unchanged on errors in
v2.

Johan
Patch

diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index bda61be5f035..5c3821b2fcf8 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -1234,12 +1234,14 @@  static int ufshcd_clock_scaling_prepare(struct ufs_hba *hba)
 	 * clock scaling is in progress
 	 */
 	ufshcd_scsi_block_requests(hba);
+	mutex_lock(&hba->wb_mutex);
 	down_write(&hba->clk_scaling_lock);
 
 	if (!hba->clk_scaling.is_allowed ||
 	    ufshcd_wait_for_doorbell_clr(hba, DOORBELL_CLR_TOUT_US)) {
 		ret = -EBUSY;
 		up_write(&hba->clk_scaling_lock);
+		mutex_unlock(&hba->wb_mutex);
 		ufshcd_scsi_unblock_requests(hba);
 		goto out;
 	}
@@ -1251,12 +1253,16 @@  static int ufshcd_clock_scaling_prepare(struct ufs_hba *hba)
 	return ret;
 }
 
-static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, bool writelock)
+static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, bool scale_up)
 {
-	if (writelock)
-		up_write(&hba->clk_scaling_lock);
-	else
-		up_read(&hba->clk_scaling_lock);
+	up_write(&hba->clk_scaling_lock);
+
+	/* Enable Write Booster if we have scaled up else disable it */
+	if (ufshcd_enable_wb_if_scaling_up(hba))
+		ufshcd_wb_toggle(hba, scale_up);
+
+	mutex_unlock(&hba->wb_mutex);
+
 	ufshcd_scsi_unblock_requests(hba);
 	ufshcd_release(hba);
 }
@@ -1273,7 +1279,6 @@  static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, bool writelock)
 static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)
 {
 	int ret = 0;
-	bool is_writelock = true;
 
 	ret = ufshcd_clock_scaling_prepare(hba);
 	if (ret)
@@ -1302,15 +1307,8 @@  static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)
 		}
 	}
 
-	/* Enable Write Booster if we have scaled up else disable it */
-	if (ufshcd_enable_wb_if_scaling_up(hba)) {
-		downgrade_write(&hba->clk_scaling_lock);
-		is_writelock = false;
-		ufshcd_wb_toggle(hba, scale_up);
-	}
-
 out_unprepare:
-	ufshcd_clock_scaling_unprepare(hba, is_writelock);
+	ufshcd_clock_scaling_unprepare(hba, scale_up);
 	return ret;
 }
 
@@ -6066,9 +6064,11 @@  static void ufshcd_force_error_recovery(struct ufs_hba *hba)
 
 static void ufshcd_clk_scaling_allow(struct ufs_hba *hba, bool allow)
 {
+	mutex_lock(&hba->wb_mutex);
 	down_write(&hba->clk_scaling_lock);
 	hba->clk_scaling.is_allowed = allow;
 	up_write(&hba->clk_scaling_lock);
+	mutex_unlock(&hba->wb_mutex);
 }
 
 static void ufshcd_clk_scaling_suspend(struct ufs_hba *hba, bool suspend)
@@ -9793,6 +9793,7 @@  int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
 	/* Initialize mutex for exception event control */
 	mutex_init(&hba->ee_ctrl_mutex);
 
+	mutex_init(&hba->wb_mutex);
 	init_rwsem(&hba->clk_scaling_lock);
 
 	ufshcd_init_clk_gating(hba);
diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
index 5cf81dff60aa..727084cd79be 100644
--- a/include/ufs/ufshcd.h
+++ b/include/ufs/ufshcd.h
@@ -808,6 +808,7 @@  struct ufs_hba_monitor {
  * @urgent_bkops_lvl: keeps track of urgent bkops level for device
  * @is_urgent_bkops_lvl_checked: keeps track if the urgent bkops level for
  *  device is known or not.
+ * @wb_mutex: used to serialize devfreq and sysfs write booster toggling
  * @clk_scaling_lock: used to serialize device commands and clock scaling
  * @desc_size: descriptor sizes reported by device
  * @scsi_block_reqs_cnt: reference counting for scsi block requests
@@ -951,6 +952,7 @@  struct ufs_hba {
 	enum bkops_status urgent_bkops_lvl;
 	bool is_urgent_bkops_lvl_checked;
 
+	struct mutex wb_mutex;
 	struct rw_semaphore clk_scaling_lock;
 	unsigned char desc_size[QUERY_DESC_IDN_MAX];
 	atomic_t scsi_block_reqs_cnt;