Message ID | f7ab552d8b7f00ec33766f4bf8554c8fc67517bc.1641659630.git.luto@kernel.org (mailing list archive)
---|---
State | New
Series | mm, sched: Rework lazy mm handling
----- On Jan 8, 2022, at 11:43 AM, Andy Lutomirski luto@kernel.org wrote:

> exec_mmap() supplies a brand-new mm from mm_alloc(), and membarrier_state
> is already 0. There's no need to clear it again.

Then I suspect we might want to tweak the comment just above the memory barrier?

	/*
	 * Issue a memory barrier before clearing membarrier_state to
	 * guarantee that no memory access prior to exec is reordered after
	 * clearing this state.
	 */

Is that barrier still needed?

Thanks,

Mathieu

>
> Signed-off-by: Andy Lutomirski <luto@kernel.org>
> ---
>  kernel/sched/membarrier.c | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
> index eb73eeaedc7d..c38014c2ed66 100644
> --- a/kernel/sched/membarrier.c
> +++ b/kernel/sched/membarrier.c
> @@ -285,7 +285,6 @@ void membarrier_exec_mmap(struct mm_struct *mm)
>  	 * clearing this state.
>  	 */
>  	smp_mb();
> -	atomic_set(&mm->membarrier_state, 0);
>  	/*
>  	 * Keep the runqueue membarrier_state in sync with this mm
>  	 * membarrier_state.
> --
> 2.33.1
diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index eb73eeaedc7d..c38014c2ed66 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -285,7 +285,6 @@ void membarrier_exec_mmap(struct mm_struct *mm)
 	 * clearing this state.
 	 */
 	smp_mb();
-	atomic_set(&mm->membarrier_state, 0);
 	/*
 	 * Keep the runqueue membarrier_state in sync with this mm
 	 * membarrier_state.
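For reference, with this hunk applied the quoted region of membarrier_exec_mmap() would read roughly as below. Only the lines visible in the diff context and in the comment quoted in the reply are shown; elided lines are marked with `/* ... */`. This is the comment Mathieu suggests tweaking, since the function no longer clears membarrier_state at this point:

```c
void membarrier_exec_mmap(struct mm_struct *mm)
{
	/* ... */
	/*
	 * Issue a memory barrier before clearing membarrier_state to
	 * guarantee that no memory access prior to exec is reordered after
	 * clearing this state.
	 */
	smp_mb();
	/*
	 * Keep the runqueue membarrier_state in sync with this mm
	 * membarrier_state.
	 */
	/* ... */
}
```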
exec_mmap() supplies a brand-new mm from mm_alloc(), and membarrier_state
is already 0. There's no need to clear it again.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 kernel/sched/membarrier.c | 1 -
 1 file changed, 1 deletion(-)
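For context on the claim that membarrier_state is already 0: mm_alloc() zeroes the whole mm_struct before initializing it. The sketch below paraphrases mm_alloc() from kernel/fork.c as of roughly this era of the tree; it is an illustration, not part of this patch, and exact details may differ between kernel versions.

```c
/*
 * Paraphrased sketch of mm_alloc() (kernel/fork.c); not part of this
 * patch, and details may vary between kernel versions.
 */
struct mm_struct *mm_alloc(void)
{
	struct mm_struct *mm;

	mm = allocate_mm();		/* kmem_cache_alloc() of mm_cachep */
	if (!mm)
		return NULL;

	/* Zeroes the entire mm_struct, including mm->membarrier_state. */
	memset(mm, 0, sizeof(*mm));
	return mm_init(mm, current, current_user_ns());
}
```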