[v5,1/2] uprobes: Remove redundant spinlock in uprobe_deny_signal()

Message ID 20250124093826.2123675-2-liaochang1@huawei.com (mailing list archive)
State New
Series uprobes: Improve scalability by reducing the contention on siglock

Commit Message

Liao Chang Jan. 24, 2025, 9:38 a.m. UTC
Since clearing a bit in thread_info is an atomic operation, the spinlock
is redundant and can be removed. Reducing lock contention is good for
performance.

Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Liao Chang <liaochang1@huawei.com>
---
 kernel/events/uprobes.c | 2 --
 1 file changed, 2 deletions(-)

Comments

Steven Rostedt Jan. 24, 2025, 3:27 p.m. UTC | #1
On Fri, 24 Jan 2025 09:38:25 +0000
Liao Chang <liaochang1@huawei.com> wrote:

> Since clearing a bit in thread_info is an atomic operation, the spinlock
> is redundant and can be removed. Reducing lock contention is good for
> performance.

Although this patch is probably fine, the change log sets a dangerous
precedent. Just because clearing a flag is atomic, that alone does not
guarantee that it doesn't need spin locks around it.

There may be another path that tests the flag within a spin lock, and then
does a bunch of work assuming that the flag does not change while it is
doing that work. That other path would require a spin lock around the
clearing of the flag elsewhere.

I don't know this code well enough to know if this has that scenario, and
seeing the Acked-by from Oleg, I'm assuming it does not. But in any case,
the change log needs to give a better rationale for removing a spin lock than
just "clearing a flag atomically doesn't need a spin lock"!

-- Steve


> 
> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> Acked-by: Oleg Nesterov <oleg@redhat.com>
> Signed-off-by: Liao Chang <liaochang1@huawei.com>
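
To make the scenario Steve describes concrete, here is a minimal
hypothetical sketch (illustrative only, not code from the kernel tree).
Path A tests the flag under siglock and assumes it stays set for the
whole critical section; path B clears it locklessly. The clear itself
is atomic, yet it can land in the middle of path A's critical section:

	/* Path A: test-then-act under siglock. */
	spin_lock_irq(&t->sighand->siglock);
	if (test_tsk_thread_flag(t, TIF_SIGPENDING)) {
		/*
		 * Work that assumes TIF_SIGPENDING stays set until
		 * the lock is dropped.
		 */
	}
	spin_unlock_irq(&t->sighand->siglock);

	/* Path B: a lockless clear can fire while path A holds the lock. */
	clear_tsk_thread_flag(t, TIF_SIGPENDING);

Taking siglock around the clear in path B is what restores the
guarantee that path A relies on.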
Oleg Nesterov Jan. 24, 2025, 5:25 p.m. UTC | #2
On 01/24, Steven Rostedt wrote:
>
> On Fri, 24 Jan 2025 09:38:25 +0000
> Liao Chang <liaochang1@huawei.com> wrote:
>
> > Since clearing a bit in thread_info is an atomic operation, the spinlock
> > is redundant and can be removed. Reducing lock contention is good for
> > performance.
>
> Although this patch is probably fine, the change log sets a dangerous
> precedent. Just because clearing a flag is atomic, that alone does not
> guarantee that it doesn't need spin locks around it.

Yes. And iirc we already have lockless users of clear(TIF_SIGPENDING)
(some if not most of them look buggy). But afaics in this (very special)
case it should be fine.

See also https://lore.kernel.org/all/20240812120738.GC11656@redhat.com/

> There may be another path that tests the flag within a spin lock,

Yes, retarget_shared_pending() or the complete_signal/wants_signal loop.
That is why it was decided to take siglock in uprobe_deny_signal(), just
to be "safe".

But I still think this patch is fine. The current task is going to execute
a single insn which can't enter the kernel and/or return to the userspace
before it calls handle_singlestep() and restores TIF_SIGPENDING. We do not
care if it races with another source of TIF_SIGPENDING.

The only wrinkle is that task_sigpending() from another task can "wrongly"
return false in this window, but I don't see any problem with that.

Oleg.
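
For context, the restore side Oleg refers to lives in handle_singlestep();
in the upstream source it looks roughly like this (paraphrased, only the
signal-related tail of the function is shown):

	/* handle_singlestep(): after the single-stepped insn completes */
	utask->active_uprobe = NULL;
	utask->state = UTASK_RUNNING;
	xol_free_insn_slot(current);

	spin_lock_irq(&current->sighand->siglock);
	recalc_sigpending(); /* see uprobe_deny_signal() */
	spin_unlock_irq(&current->sighand->siglock);

So a TIF_SIGPENDING that was cleared (or newly raised) during the
single-step window is re-evaluated under siglock before the task can
return to userspace.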
Oleg Nesterov Jan. 24, 2025, 5:38 p.m. UTC | #3
On 01/24, Oleg Nesterov wrote:
>
> But I still think this patch is fine. The current task is going to execute
> a single insn which can't enter the kernel and/or return to the userspace
                      ^^^^^^^^^^^^^^^^^^^^^^
I meant, it can't do a syscall, sorry for the possible confusion.

Oleg.

Patch

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index e421a5f2ec7d..7a3348dfedeb 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -2298,9 +2298,7 @@ bool uprobe_deny_signal(void)
 	WARN_ON_ONCE(utask->state != UTASK_SSTEP);
 
 	if (task_sigpending(t)) {
-		spin_lock_irq(&t->sighand->siglock);
 		clear_tsk_thread_flag(t, TIF_SIGPENDING);
-		spin_unlock_irq(&t->sighand->siglock);
 
 		if (__fatal_signal_pending(t) || arch_uprobe_xol_was_trapped(t)) {
 			utask->state = UTASK_SSTEP_TRAPPED;