Message ID | 20240220074300.10805-1-wangrumeng@xiaomi.corp-partner.google.com (mailing list archive)
---|---
State | Handled Elsewhere, archived
Series | interconnect: Replace mutex with rt_mutex for icc_bw_lock
Hello,

On 20.02.24 9:43, Rumeng Wang wrote:
> From: wangrumeng <wangrumeng@xiaomi.corp-partner.google.com>
>
> Replace existing mutex with rt_mutex to prevent priority inversion
> between clients, which can cause unacceptable delays in some cases.
>
> Signed-off-by: wangrumeng <wangrumeng@xiaomi.corp-partner.google.com>

A similar patch [1] has been posted some time ago. Please check the
review comments.

Thanks,
Georgi

[1] https://lore.kernel.org/all/20220906191423.30109-1-quic_mdtipton@quicinc.com/

> ---
>  drivers/interconnect/core.c | 15 ++++++++-------
>  1 file changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
> index 50bac2d79d9b..467d42cc7e49 100644
> --- a/drivers/interconnect/core.c
> +++ b/drivers/interconnect/core.c
> @@ -14,6 +14,7 @@
>  #include <linux/interconnect-provider.h>
>  #include <linux/list.h>
>  #include <linux/mutex.h>
> +#include <linux/rtmutex.h>
>  #include <linux/slab.h>
>  #include <linux/of.h>
>  #include <linux/overflow.h>
> @@ -28,7 +29,7 @@ static LIST_HEAD(icc_providers);
>  static int providers_count;
>  static bool synced_state;
>  static DEFINE_MUTEX(icc_lock);
> -static DEFINE_MUTEX(icc_bw_lock);
> +static DEFINE_RT_MUTEX(icc_bw_lock);
>  static struct dentry *icc_debugfs_dir;
>
>  static void icc_summary_show_one(struct seq_file *s, struct icc_node *n)
> @@ -698,7 +699,7 @@ int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw)
>         if (WARN_ON(IS_ERR(path) || !path->num_nodes))
>                 return -EINVAL;
>
> -       mutex_lock(&icc_bw_lock);
> +       rt_mutex_lock(&icc_bw_lock);
>
>         old_avg = path->reqs[0].avg_bw;
>         old_peak = path->reqs[0].peak_bw;
> @@ -730,7 +731,7 @@ int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw)
>                 apply_constraints(path);
>         }
>
> -       mutex_unlock(&icc_bw_lock);
> +       rt_mutex_unlock(&icc_bw_lock);
>
>         trace_icc_set_bw_end(path, ret);
>
> @@ -939,7 +940,7 @@ void icc_node_add(struct icc_node *node, struct icc_provider *provider)
>                 return;
>
>         mutex_lock(&icc_lock);
> -       mutex_lock(&icc_bw_lock);
> +       rt_mutex_lock(&icc_bw_lock);
>
>         node->provider = provider;
>         list_add_tail(&node->node_list, &provider->nodes);
> @@ -968,7 +969,7 @@ void icc_node_add(struct icc_node *node, struct icc_provider *provider)
>         node->avg_bw = 0;
>         node->peak_bw = 0;
>
> -       mutex_unlock(&icc_bw_lock);
> +       rt_mutex_unlock(&icc_bw_lock);
>         mutex_unlock(&icc_lock);
>  }
>  EXPORT_SYMBOL_GPL(icc_node_add);
> @@ -1094,7 +1095,7 @@ void icc_sync_state(struct device *dev)
>                 return;
>
>         mutex_lock(&icc_lock);
> -       mutex_lock(&icc_bw_lock);
> +       rt_mutex_lock(&icc_bw_lock);
>         synced_state = true;
>         list_for_each_entry(p, &icc_providers, provider_list) {
>                 dev_dbg(p->dev, "interconnect provider is in synced state\n");
> @@ -1107,7 +1108,7 @@ void icc_sync_state(struct device *dev)
>                         }
>                 }
>         }
> -       mutex_unlock(&icc_bw_lock);
> +       rt_mutex_unlock(&icc_bw_lock);
>         mutex_unlock(&icc_lock);
>  }
>  EXPORT_SYMBOL_GPL(icc_sync_state);
Thank you for your reply. The patch you mention is a change Qualcomm posted in 2022 to address the icc_set_bw issue by converting icc_lock from a mutex to an rt_mutex. However, that change was never merged, so the icc_set_bw issue still exists in current kernels and on Android U. My patch converts only icc_bw_lock from a mutex to an rt_mutex, leaving icc_lock untouched, in order to reduce the scope of impact.

Here is the rationale for the change. We run test scripts on Xiaomi phones. Each test consists of 5 rounds, and each round repeatedly launches and exits 30 applications. On every launch and exit we capture a trace and count dropped frames.

We ran the test four times with the stock solution (icc_set_bw using a mutex). Every run showed frame drops caused by icc_set_bw blocking in D state; the problem reproduced in 5 of 600 launch/exit cycles (4 tests x 5 rounds x 30 applications).

We ran the test three times with the Xiaomi solution (icc_set_bw using an rt_mutex). Any frame drops that did occur were not caused by the icc_set_bw D-state issue; it reproduced in 0 of 450 cycles (3 tests x 5 rounds x 30 applications).

These tests have their limitations, since they cover only the application launch/exit scenario and no other workloads. Based on the current results, however, converting the mutex to an rt_mutex significantly reduces frame loss.
Hello, it has been a week since the last reply. Is there any progress on this issue? Thanks.
diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
index 50bac2d79d9b..467d42cc7e49 100644
--- a/drivers/interconnect/core.c
+++ b/drivers/interconnect/core.c
@@ -14,6 +14,7 @@
 #include <linux/interconnect-provider.h>
 #include <linux/list.h>
 #include <linux/mutex.h>
+#include <linux/rtmutex.h>
 #include <linux/slab.h>
 #include <linux/of.h>
 #include <linux/overflow.h>
@@ -28,7 +29,7 @@ static LIST_HEAD(icc_providers);
 static int providers_count;
 static bool synced_state;
 static DEFINE_MUTEX(icc_lock);
-static DEFINE_MUTEX(icc_bw_lock);
+static DEFINE_RT_MUTEX(icc_bw_lock);
 static struct dentry *icc_debugfs_dir;

 static void icc_summary_show_one(struct seq_file *s, struct icc_node *n)
@@ -698,7 +699,7 @@ int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw)
        if (WARN_ON(IS_ERR(path) || !path->num_nodes))
                return -EINVAL;

-       mutex_lock(&icc_bw_lock);
+       rt_mutex_lock(&icc_bw_lock);

        old_avg = path->reqs[0].avg_bw;
        old_peak = path->reqs[0].peak_bw;
@@ -730,7 +731,7 @@ int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw)
                apply_constraints(path);
        }

-       mutex_unlock(&icc_bw_lock);
+       rt_mutex_unlock(&icc_bw_lock);

        trace_icc_set_bw_end(path, ret);

@@ -939,7 +940,7 @@ void icc_node_add(struct icc_node *node, struct icc_provider *provider)
                return;

        mutex_lock(&icc_lock);
-       mutex_lock(&icc_bw_lock);
+       rt_mutex_lock(&icc_bw_lock);

        node->provider = provider;
        list_add_tail(&node->node_list, &provider->nodes);
@@ -968,7 +969,7 @@ void icc_node_add(struct icc_node *node, struct icc_provider *provider)
        node->avg_bw = 0;
        node->peak_bw = 0;

-       mutex_unlock(&icc_bw_lock);
+       rt_mutex_unlock(&icc_bw_lock);
        mutex_unlock(&icc_lock);
 }
 EXPORT_SYMBOL_GPL(icc_node_add);
@@ -1094,7 +1095,7 @@ void icc_sync_state(struct device *dev)
                return;

        mutex_lock(&icc_lock);
-       mutex_lock(&icc_bw_lock);
+       rt_mutex_lock(&icc_bw_lock);
        synced_state = true;
        list_for_each_entry(p, &icc_providers, provider_list) {
                dev_dbg(p->dev, "interconnect provider is in synced state\n");
@@ -1107,7 +1108,7 @@ void icc_sync_state(struct device *dev)
                        }
                }
        }
-       mutex_unlock(&icc_bw_lock);
+       rt_mutex_unlock(&icc_bw_lock);
        mutex_unlock(&icc_lock);
 }
 EXPORT_SYMBOL_GPL(icc_sync_state);