From patchwork Thu Aug 27 16:07:20 2009
X-Patchwork-Submitter: "Michael S. Tsirkin"
X-Patchwork-Id: 44283
Date: Thu, 27 Aug 2009 19:07:20 +0300
From: "Michael S. Tsirkin"
To: netdev@vger.kernel.org, virtualization@lists.linux-foundation.org,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org, mingo@elte.hu,
	linux-mm@kvack.org, akpm@linux-foundation.org, hpa@zytor.com,
	gregory.haskins@gmail.com, Rusty Russell, s.hetze@linux-ag.com
Subject: [PATCHv5 2/3] mm: reduce atomic use on use_mm fast path
Message-ID: <20090827160720.GC23722@redhat.com>

When the mm being switched to matches the active mm, we don't need to
increment and then drop the mm count.  Making that conditional reduces
contention on that cache line on SMP systems.

Acked-by: Andrea Arcangeli
Signed-off-by: Michael S. Tsirkin
---
 mm/mmu_context.c |    9 ++++++---
 1 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/mmu_context.c b/mm/mmu_context.c
index 9989c2f..0777654 100644
--- a/mm/mmu_context.c
+++ b/mm/mmu_context.c
@@ -27,13 +27,16 @@ void use_mm(struct mm_struct *mm)
 
 	task_lock(tsk);
 	active_mm = tsk->active_mm;
-	atomic_inc(&mm->mm_count);
+	if (active_mm != mm) {
+		atomic_inc(&mm->mm_count);
+		tsk->active_mm = mm;
+	}
 	tsk->mm = mm;
-	tsk->active_mm = mm;
 	switch_mm(active_mm, mm, tsk);
 	task_unlock(tsk);
 
-	mmdrop(active_mm);
+	if (active_mm != mm)
+		mmdrop(active_mm);
 }
 EXPORT_SYMBOL_GPL(use_mm);
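
For reference, here is how use_mm() reads once the hunk is applied.  This
is a sketch reconstructed from the diff above; the two local variable
declarations are not visible in the hunk context and are assumed from
mm/mmu_context.c of this era:

	void use_mm(struct mm_struct *mm)
	{
		struct mm_struct *active_mm;
		struct task_struct *tsk = current;

		task_lock(tsk);
		active_mm = tsk->active_mm;
		if (active_mm != mm) {
			/* Switching to a different mm: pin it and publish it. */
			atomic_inc(&mm->mm_count);
			tsk->active_mm = mm;
		}
		tsk->mm = mm;
		switch_mm(active_mm, mm, tsk);
		task_unlock(tsk);

		/* Fast path: active_mm == mm, so no atomics were needed above. */
		if (active_mm != mm)
			mmdrop(active_mm);
	}

The fast path is safe because when active_mm already equals mm, the task's
existing active_mm reference keeps mm pinned across the switch, so the
atomic_inc()/mmdrop() pair on the shared mm_count cache line is pure
overhead and can be skipped.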