Message ID | 20200223192520.20808-2-aarcange@redhat.com (mailing list archive) |
---|---|
State | New, archived |
Series | [1/3] mm: use_mm: fix for arches checking mm_users to optimize TLB flushes |
On Sun, Feb 23, 2020 at 02:25:18PM -0500, Andrea Arcangeli wrote:
> alpha, ia64, mips, powerpc, sh and sparc rely on a check of
> mm->mm_users to know whether they can skip some remote TLB flushes
> for single-threaded processes.
>
> Most callers of use_mm() invoke mmget_not_zero() or get_task_mm()
> before use_mm() to ensure the mm remains alive between use_mm() and
> unuse_mm().
>
> Some callers, however, don't increase mm_users and instead rely on
> serialization in __mmput() to ensure the mm remains alive between
> use_mm() and unuse_mm(). Not increasing mm_users during use_mm() is
> unsafe for the aforementioned arch TLB flush optimizations, so either
> mmget()/mmput() should be added to the problematic callers of
> use_mm()/unuse_mm(), or we can embed them in use_mm()/unuse_mm(),
> which is more robust.
>
> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> ---
>  mm/mmu_context.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/mm/mmu_context.c b/mm/mmu_context.c
> index 3e612ae748e9..ced0e1218c0f 100644
> --- a/mm/mmu_context.c
> +++ b/mm/mmu_context.c
> @@ -30,6 +30,7 @@ void use_mm(struct mm_struct *mm)
>  		mmgrab(mm);
>  		tsk->active_mm = mm;
>  	}
> +	mmget(mm);
>  	tsk->mm = mm;
>  	switch_mm(active_mm, mm, tsk);
>  	task_unlock(tsk);
> @@ -57,6 +58,7 @@ void unuse_mm(struct mm_struct *mm)
>  	task_lock(tsk);
>  	sync_mm_rss(mm);
>  	tsk->mm = NULL;
> +	mmput(mm);
>  	/* active_mm is still 'mm' */
>  	enter_lazy_tlb(mm, tsk);
>  	task_unlock(tsk);

Acked-by: Rafael Aquini <aquini@redhat.com>
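[Editor's note] For context, here is a minimal, hypothetical sketch of the kind of arch-side optimization the commit message refers to. The function names flush_tlb_mm_sketch(), local_flush_tlb_mm() and smp_flush_tlb_mm() are illustrative placeholders, not the actual code of any of the listed architectures:

#include <linux/mm_types.h>	/* struct mm_struct, mm_users */

/*
 * Hypothetical sketch of the optimization the patch protects: if
 * mm_users says the mm has a single user, assume no remote CPU can be
 * running it and flush only the local TLB. A kthread that adopts the
 * mm via use_mm() without raising mm_users breaks this premise.
 */
static inline void flush_tlb_mm_sketch(struct mm_struct *mm)
{
	if (atomic_read(&mm->mm_users) <= 1)
		local_flush_tlb_mm(mm);	/* single-threaded: local flush only */
	else
		smp_flush_tlb_mm(mm);	/* multi-threaded: broadcast to remote CPUs */
}

With the patch below applied, use_mm() itself raises mm_users, so a borrowed mm can never appear single-threaded while a kthread is still running on it.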
alpha, ia64, mips, powerpc, sh and sparc rely on a check of
mm->mm_users to know whether they can skip some remote TLB flushes for
single-threaded processes.

Most callers of use_mm() invoke mmget_not_zero() or get_task_mm()
before use_mm() to ensure the mm remains alive between use_mm() and
unuse_mm().

Some callers, however, don't increase mm_users and instead rely on
serialization in __mmput() to ensure the mm remains alive between
use_mm() and unuse_mm(). Not increasing mm_users during use_mm() is
unsafe for the aforementioned arch TLB flush optimizations, so either
mmget()/mmput() should be added to the problematic callers of
use_mm()/unuse_mm(), or we can embed them in use_mm()/unuse_mm(),
which is more robust.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 mm/mmu_context.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/mmu_context.c b/mm/mmu_context.c
index 3e612ae748e9..ced0e1218c0f 100644
--- a/mm/mmu_context.c
+++ b/mm/mmu_context.c
@@ -30,6 +30,7 @@ void use_mm(struct mm_struct *mm)
 		mmgrab(mm);
 		tsk->active_mm = mm;
 	}
+	mmget(mm);
 	tsk->mm = mm;
 	switch_mm(active_mm, mm, tsk);
 	task_unlock(tsk);
@@ -57,6 +58,7 @@ void unuse_mm(struct mm_struct *mm)
 	task_lock(tsk);
 	sync_mm_rss(mm);
 	tsk->mm = NULL;
+	mmput(mm);
 	/* active_mm is still 'mm' */
 	enter_lazy_tlb(mm, tsk);
 	task_unlock(tsk);
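[Editor's note] As a usage illustration, a minimal sketch of the safe caller pattern the commit message describes; my_worker() is a hypothetical caller, not taken from the patch, while mmget_not_zero(), mmput(), use_mm() and unuse_mm() are the real kernel APIs of this era:

#include <linux/sched/mm.h>	/* mmget_not_zero(), mmput() */
#include <linux/mmu_context.h>	/* use_mm(), unuse_mm() */

/* Hypothetical kthread helper borrowing a userspace mm. */
static void my_worker(struct mm_struct *mm)
{
	if (!mmget_not_zero(mm))	/* the owning process may already be exiting */
		return;

	use_mm(mm);	/* adopt the mm; with this patch, also takes its own mmget() */
	/* ... operate on the user address space, e.g. copy_from_user() ... */
	unuse_mm(mm);	/* drop the mm; with this patch, also does mmput() */

	mmput(mm);	/* balance the mmget_not_zero() above */
}

Callers that skip the mmget_not_zero()/mmput() pair and lean on __mmput() serialization instead are exactly the ones the embedded mmget()/mmput() in use_mm()/unuse_mm() is meant to cover.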