
[-next] memcg: don't handle event_list for v2 when offlining

Message ID 20240514131106.1326323-1-xiujianfeng@huawei.com (mailing list archive)
State New

Commit Message

Xiu Jianfeng May 14, 2024, 1:11 p.m. UTC
The event_list for memcg is only valid for v1 and not used for v2,
so it's unnecessary to handle event_list for v2.

Signed-off-by: Xiu Jianfeng <xiujianfeng@huawei.com>
---
 mm/memcontrol.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

Comments

Michal Hocko May 14, 2024, 2:09 p.m. UTC | #1
On Tue 14-05-24 13:11:06, Xiu Jianfeng wrote:
> The event_list for memcg is only valid for v1 and not used for v2,
> so it's unnecessary to handle event_list for v2.

You are right, but the code as is works just fine. The list will be
empty. It is true that we do not need to take the event_list_lock, but
nobody should be using this lock anyway. Also, the offline callback is
not a particularly hot path. So why do we want to change the code?

> 
> Signed-off-by: Xiu Jianfeng <xiujianfeng@huawei.com>
> ---
>  mm/memcontrol.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index d127c9c5fabf..4254f9cd05f4 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -5881,12 +5881,14 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
>  	 * Notify userspace about cgroup removing only after rmdir of cgroup
>  	 * directory to avoid race between userspace and kernelspace.
>  	 */
> -	spin_lock_irq(&memcg->event_list_lock);
> -	list_for_each_entry_safe(event, tmp, &memcg->event_list, list) {
> -		list_del_init(&event->list);
> -		schedule_work(&event->remove);
> +	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) {
> +		spin_lock_irq(&memcg->event_list_lock);
> +		list_for_each_entry_safe(event, tmp, &memcg->event_list, list) {
> +			list_del_init(&event->list);
> +			schedule_work(&event->remove);
> +		}
> +		spin_unlock_irq(&memcg->event_list_lock);
>  	}
> -	spin_unlock_irq(&memcg->event_list_lock);
>  
>  	page_counter_set_min(&memcg->memory, 0);
>  	page_counter_set_low(&memcg->memory, 0);
> -- 
> 2.34.1
Roman Gushchin May 14, 2024, 3:21 p.m. UTC | #2
On Tue, May 14, 2024 at 04:09:58PM +0200, Michal Hocko wrote:
> On Tue 14-05-24 13:11:06, Xiu Jianfeng wrote:
> > The event_list for memcg is only valid for v1 and not used for v2,
> > so it's unnecessary to handle event_list for v2.
> 
> You are right, but the code as is works just fine. The list will be
> empty. It is true that we do not need to take the event_list_lock, but
> nobody should be using this lock anyway. Also, the offline callback is
> not a particularly hot path. So why do we want to change the code?

+1 to that.

Plus this code will be moved to a separate function in mm/memcontrol-v1.c
and luckily can be compiled out entirely for users who don't need
cgroup v1 support.

Thanks!
Xiu Jianfeng May 15, 2024, 2:45 a.m. UTC | #3
On 2024/5/14 22:09, Michal Hocko wrote:
> On Tue 14-05-24 13:11:06, Xiu Jianfeng wrote:
>> The event_list for memcg is only valid for v1 and not used for v2,
>> so it's unnecessary to handle event_list for v2.
> 
> You are right, but the code as is works just fine. The list will be
> empty. It is true that we do not need to take the event_list_lock, but
> nobody should be using this lock anyway. Also, the offline callback is
> not a particularly hot path. So why do we want to change the code?
> 

Actually, I don't quite agree, but I won't insist on this patch.
Thanks for your feedback.


>>
>> Signed-off-by: Xiu Jianfeng <xiujianfeng@huawei.com>
>> ---
>>  mm/memcontrol.c | 12 +++++++-----
>>  1 file changed, 7 insertions(+), 5 deletions(-)
>>
>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> index d127c9c5fabf..4254f9cd05f4 100644
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -5881,12 +5881,14 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
>>  	 * Notify userspace about cgroup removing only after rmdir of cgroup
>>  	 * directory to avoid race between userspace and kernelspace.
>>  	 */
>> -	spin_lock_irq(&memcg->event_list_lock);
>> -	list_for_each_entry_safe(event, tmp, &memcg->event_list, list) {
>> -		list_del_init(&event->list);
>> -		schedule_work(&event->remove);
>> +	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) {
>> +		spin_lock_irq(&memcg->event_list_lock);
>> +		list_for_each_entry_safe(event, tmp, &memcg->event_list, list) {
>> +			list_del_init(&event->list);
>> +			schedule_work(&event->remove);
>> +		}
>> +		spin_unlock_irq(&memcg->event_list_lock);
>>  	}
>> -	spin_unlock_irq(&memcg->event_list_lock);
>>  
>>  	page_counter_set_min(&memcg->memory, 0);
>>  	page_counter_set_low(&memcg->memory, 0);
>> -- 
>> 2.34.1
>
Xiu Jianfeng May 15, 2024, 2:47 a.m. UTC | #4
On 2024/5/14 23:21, Roman Gushchin wrote:
> On Tue, May 14, 2024 at 04:09:58PM +0200, Michal Hocko wrote:
>> On Tue 14-05-24 13:11:06, Xiu Jianfeng wrote:
>>> The event_list for memcg is only valid for v1 and not used for v2,
>>> so it's unnecessary to handle event_list for v2.
>>
>> You are right, but the code as is works just fine. The list will be
>> empty. It is true that we do not need to take the event_list_lock, but
>> nobody should be using this lock anyway. Also, the offline callback is
>> not a particularly hot path. So why do we want to change the code?
> 
> +1 to that.
> 
> Plus this code will be moved to a separate function in mm/memcontrol-v1.c
> and luckily can be compiled out entirely for users who don't need
> cgroup v1 support.

I found the patchset you mentioned, thanks.

> 
> Thanks!

Patch

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d127c9c5fabf..4254f9cd05f4 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5881,12 +5881,14 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
 	 * Notify userspace about cgroup removing only after rmdir of cgroup
 	 * directory to avoid race between userspace and kernelspace.
 	 */
-	spin_lock_irq(&memcg->event_list_lock);
-	list_for_each_entry_safe(event, tmp, &memcg->event_list, list) {
-		list_del_init(&event->list);
-		schedule_work(&event->remove);
+	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) {
+		spin_lock_irq(&memcg->event_list_lock);
+		list_for_each_entry_safe(event, tmp, &memcg->event_list, list) {
+			list_del_init(&event->list);
+			schedule_work(&event->remove);
+		}
+		spin_unlock_irq(&memcg->event_list_lock);
 	}
-	spin_unlock_irq(&memcg->event_list_lock);
 
 	page_counter_set_min(&memcg->memory, 0);
 	page_counter_set_low(&memcg->memory, 0);