
[RFC,08/10] memcg: assert in_task for couple of local_lock holders

Message ID 20250314061511.1308152-9-shakeel.butt@linux.dev (mailing list archive)
State New
Series memcg: stock code cleanups

Commit Message

Shakeel Butt March 14, 2025, 6:15 a.m. UTC
drain_local_stock() and memcg_hotplug_cpu_dead() only run in task
context, so there is no need for localtry_trylock_irqsave() protection
of the local stock_lock in those functions. The plan is to convert all
stock_lock users that can be called in multiple contexts to
localtry_trylock_irqsave() and subsequently switch to a
non-irq-disabling interface. So, for functions that are never called in
non-task context, this patch adds the asserts.

Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
---
 mm/memcontrol.c | 4 ++++
 1 file changed, 4 insertions(+)
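
To illustrate the split described in the commit message above, here is a
minimal, hypothetical sketch (not part of the patch) of the two kinds of
stock_lock users, assuming the localtry_lock_t API from
<linux/local_lock.h>. The example_* names are stand-ins; the exact lock
flavour used in mm/memcontrol.c at this point in the series may differ.

/*
 * Illustrative sketch only, not part of the patch. example_* names are
 * hypothetical stand-ins for the mm/memcontrol.c code; the
 * localtry_lock_t API is assumed from <linux/local_lock.h>.
 */
#include <linux/local_lock.h>
#include <linux/lockdep.h>
#include <linux/percpu.h>
#include <linux/preempt.h>

struct example_stock_pcp {
	localtry_lock_t stock_lock;
	/* ... cached charge state ... */
};

static DEFINE_PER_CPU(struct example_stock_pcp, example_stock) = {
	.stock_lock = INIT_LOCALTRY_LOCK(stock_lock),
};

/* May be called from task, softirq or hardirq context: must trylock. */
static bool example_consume_stock(void)
{
	unsigned long flags;

	if (!localtry_trylock_irqsave(&example_stock.stock_lock, flags))
		return false;	/* lock busy, caller takes the slow path */

	/* ... consume the cached charge ... */

	localtry_unlock_irqrestore(&example_stock.stock_lock, flags);
	return true;
}

/* Only ever runs in task context: document that with an assert. */
static void example_drain_local_stock(void)
{
	unsigned long flags;

	lockdep_assert_once(in_task());

	localtry_lock_irqsave(&example_stock.stock_lock, flags);
	/* ... drain the cached charge back to the memcg ... */
	localtry_unlock_irqrestore(&example_stock.stock_lock, flags);
}

The trylock form lets a user that interrupts a stock_lock holder on the
same CPU back off to the slow path instead of deadlocking, while
functions guaranteed to run in task context keep the simpler locking
plus the new assert.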

Patch

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index dfe9c2eb7816..c803d2f5e322 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1857,6 +1857,8 @@  static void drain_local_stock(struct work_struct *dummy)
 	struct memcg_stock_pcp *stock;
 	unsigned long flags;
 
+	lockdep_assert_once(in_task());
+
 	/*
 	 * The only protection from cpu hotplug (memcg_hotplug_cpu_dead) vs.
 	 * drain_stock races is that we always operate on local CPU stock
@@ -1953,6 +1955,8 @@  static int memcg_hotplug_cpu_dead(unsigned int cpu)
 	struct memcg_stock_pcp *stock;
 	unsigned long flags;
 
+	lockdep_assert_once(in_task());
+
 	stock = &per_cpu(memcg_stock, cpu);
 
 	/* drain_obj_stock requires stock_lock */
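
For reference, on CONFIG_PROVE_LOCKING builds lockdep_assert_once() is
roughly the following (paraphrased from <linux/lockdep.h>; the exact
definition can vary by kernel version), and it compiles to a no-op
otherwise, so the added checks cost nothing on production
configurations:

/* Rough paraphrase of <linux/lockdep.h> for CONFIG_PROVE_LOCKING builds. */
#define lockdep_assert_once(cond) \
	do { WARN_ON_ONCE(debug_locks && !(cond)); } while (0)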