From patchwork Tue Aug 21 15:58:05 2012
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 1356171
Date: Tue, 21 Aug 2012 08:58:05 -0700
From: Andrew Morton
To: Linus Torvalds
Cc: Mel Gorman, Sage Weil, David Miller, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, ceph-devel@vger.kernel.org, Neil Brown,
 Peter Zijlstra, michaelc@cs.wisc.edu, emunson@mgebm.net, Eric Dumazet,
 Christoph Lameter
Subject: Re: regression with poll(2)
Message-Id: <20120821085805.41a102f1.akpm@linux-foundation.org>
References: <20120820090443.GA3275@suse.de>

On Mon, 20 Aug 2012 10:02:05 -0700 Linus Torvalds wrote:

> On Mon, Aug 20, 2012 at 2:04 AM, Mel Gorman wrote:
> >
> > Can the following patch be tested please? It is reported to fix an fio
> > regression that may be similar to what you are experiencing but has not
> > been picked up yet.
>
> Andrew, is this in your queue, or should I take this directly, or
> what? It seems to fix the problem for Eric and Sage, at least.

Yes, I have a copy queued:


From: Alex Shi
Subject: mm: correct page->pfmemalloc to fix deactivate_slab regression

Commit cfd19c5a9ec ("mm: only set page->pfmemalloc when
ALLOC_NO_WATERMARKS was used") tried to narrow down where
page->pfmemalloc is set, but it missed some places where the flag
should be set.  As a result, the mismatch between page->pfmemalloc and
ALLOC_NO_WATERMARKS in __slab_alloc() causes spurious deactivate_slab()
calls on our core2 server:

    64.73%  fio  [kernel.kallsyms]  [k] _raw_spin_lock
            |
            --- _raw_spin_lock
               |
               |---0.34%-- deactivate_slab
               |           __slab_alloc
               |           kmem_cache_alloc
               |           |

That gives our fio sync write performance a 40% regression.  Move the
check into get_page_from_freelist(), which resolves the issue.

Signed-off-by: Alex Shi
Acked-by: Mel Gorman
Cc: David Miller
Tested-by: Eric Dumazet
Tested-by: Sage Weil
Signed-off-by: Andrew Morton
---

 mm/page_alloc.c |   21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff -puN mm/page_alloc.c~mm-correct-page-pfmemalloc-to-fix-deactivate_slab-regression mm/page_alloc.c
--- a/mm/page_alloc.c~mm-correct-page-pfmemalloc-to-fix-deactivate_slab-regression
+++ a/mm/page_alloc.c
@@ -1928,6 +1928,17 @@ this_zone_full:
 		zlc_active = 0;
 		goto zonelist_scan;
 	}
+
+	if (page)
+		/*
+		 * page->pfmemalloc is set when ALLOC_NO_WATERMARKS was
+		 * necessary to allocate the page. The expectation is
+		 * that the caller is taking steps that will free more
+		 * memory. The caller should avoid the page being used
+		 * for !PFMEMALLOC purposes.
+		 */
+		page->pfmemalloc = !!(alloc_flags & ALLOC_NO_WATERMARKS);
+
 	return page;
 }

@@ -2389,14 +2400,6 @@ rebalance:
 				zonelist, high_zoneidx, nodemask,
 				preferred_zone, migratetype);
 		if (page) {
-			/*
-			 * page->pfmemalloc is set when ALLOC_NO_WATERMARKS was
-			 * necessary to allocate the page. The expectation is
-			 * that the caller is taking steps that will free more
-			 * memory. The caller should avoid the page being used
-			 * for !PFMEMALLOC purposes.
-			 */
-			page->pfmemalloc = true;
 			goto got_pg;
 		}
 	}
@@ -2569,8 +2572,6 @@ retry_cpuset:
 		page = __alloc_pages_slowpath(gfp_mask, order,
 				zonelist, high_zoneidx, nodemask,
 				preferred_zone, migratetype);
-	else
-		page->pfmemalloc = false;

 	trace_mm_page_alloc(page, order, gfp_mask, migratetype);
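
For anyone who wants to see the invariant in isolation: the sketch below
is a minimal user-space analogue, not kernel code -- the struct,
FLAG_NO_WATERMARKS, and get_page_like() are hypothetical stand-ins for
the page->pfmemalloc plumbing. It shows why deriving the flag from
alloc_flags at the single allocation choke point (what the patch does in
get_page_from_freelist()) keeps every caller consistent, instead of some
paths setting it and others clearing or forgetting it:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define FLAG_NO_WATERMARKS 0x1	/* stand-in for ALLOC_NO_WATERMARKS */

struct page_like {
	bool pfmemalloc;	/* mirrors page->pfmemalloc */
};

/*
 * Single choke point: the flag is derived from alloc_flags here, so
 * every caller observes the same value.  Before the patch, one slow
 * path set the flag and the fast path cleared it, missing other
 * ALLOC_NO_WATERMARKS allocations -- the mismatch the changelog
 * describes.
 */
static struct page_like *get_page_like(unsigned int alloc_flags)
{
	struct page_like *page = malloc(sizeof(*page));

	if (page)
		page->pfmemalloc = !!(alloc_flags & FLAG_NO_WATERMARKS);
	return page;
}

int main(void)
{
	struct page_like *p = get_page_like(FLAG_NO_WATERMARKS);
	struct page_like *q = get_page_like(0);

	if (p && q)
		printf("reserve alloc: %d, normal alloc: %d\n",
		       p->pfmemalloc, q->pfmemalloc);
	free(p);
	free(q);
	return 0;
}

The kernel diff above has the same shape: one assignment where
alloc_flags is still in scope, and the duplicated true/false
assignments removed from the callers.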