
[PATCHv5,2/3] mm: reduce atomic use on use_mm fast path

Message ID 20090827160720.GC23722@redhat.com
State New, archived

Commit Message

Michael S. Tsirkin Aug. 27, 2009, 4:07 p.m. UTC
When the mm being switched to matches the active mm, we don't need to
increment and then drop the mm count.  Making that reference counting
conditional reduces contention on that cache line on SMP systems.
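
For context on when the fast path fires: unuse_mm() clears tsk->mm but
leaves tsk->active_mm pointing at the borrowed mm, so a kernel thread
that repeatedly borrows the same userspace mm sees active_mm == mm on
every use_mm() after the first.  A minimal sketch of such a caller
(mm_worker and its surroundings are hypothetical, not part of this
patch; use_mm()/unuse_mm() are the helpers from mm/mmu_context.c):

	/*
	 * Hypothetical aio/vhost-style worker: does work on behalf of a
	 * single userspace mm.  Needs <linux/kthread.h>, <linux/sched.h>
	 * and the use_mm()/unuse_mm() declarations.
	 */
	static int mm_worker(void *data)
	{
		struct mm_struct *mm = data;	/* pinned by whoever spawned us */

		while (!kthread_should_stop()) {
			use_mm(mm);	/* first pass: takes a reference */
			/* ... copy_from_user()/copy_to_user() on mm ... */
			unuse_mm(mm);	/* tsk->mm = NULL, active_mm stays == mm */
			cond_resched();
		}
		return 0;
	}

With this patch, every pass after the first skips the atomic_inc() and
the mmdrop() entirely.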

Acked-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 mm/mmu_context.c |    9 ++++++---
 1 files changed, 6 insertions(+), 3 deletions(-)

Patch

diff --git a/mm/mmu_context.c b/mm/mmu_context.c
index 9989c2f..0777654 100644
--- a/mm/mmu_context.c
+++ b/mm/mmu_context.c
@@ -27,13 +27,16 @@  void use_mm(struct mm_struct *mm)
 
 	task_lock(tsk);
 	active_mm = tsk->active_mm;
-	atomic_inc(&mm->mm_count);
+	if (active_mm != mm) {
+		atomic_inc(&mm->mm_count);
+		tsk->active_mm = mm;
+	}
 	tsk->mm = mm;
-	tsk->active_mm = mm;
 	switch_mm(active_mm, mm, tsk);
 	task_unlock(tsk);
 
-	mmdrop(active_mm);
+	if (active_mm != mm)
+		mmdrop(active_mm);
 }
 EXPORT_SYMBOL_GPL(use_mm);
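
For readability, the whole of use_mm() with this patch applied reads as
below.  This is reconstructed from the hunk above plus its unchanged
context lines (the tsk = current local is assumed from that context),
not additional code:

	void use_mm(struct mm_struct *mm)
	{
		struct mm_struct *active_mm;
		struct task_struct *tsk = current;

		task_lock(tsk);
		active_mm = tsk->active_mm;
		if (active_mm != mm) {
			/* Only take a reference when actually switching mms. */
			atomic_inc(&mm->mm_count);
			tsk->active_mm = mm;
		}
		tsk->mm = mm;
		switch_mm(active_mm, mm, tsk);
		task_unlock(tsk);

		/* Matching drop: only if we took a reference above. */
		if (active_mm != mm)
			mmdrop(active_mm);
	}

Note that the second active_mm != mm test simply repeats the comparison:
active_mm still holds the pre-switch value, so the result is stable, and
the mmdrop() stays outside task_lock() as before.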