From patchwork Tue Apr 22 15:06:56 2014
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 4033161
Date: Tue, 22 Apr 2014 11:06:56 -0400
From: Johannes Weiner
To: Christian Borntraeger
Cc: Rafael Aquini, Rik van Riel, Mel Gorman, Hugh Dickins,
 Suleiman Souhlal, stable@kernel.org, Andrew Morton,
 Linux Kernel Mailing List,
 Christian Ehrhardt, KVM list
Subject: Re: commit 0bf1457f0cfca7b "mm: vmscan: do not swap anon pages
 just because free+file is low" causes heavy performance regression on
 paging
Message-ID: <20140422150656.GA29866@cmpxchg.org>
References: <53564AA9.3060905@de.ibm.com>
In-Reply-To: <53564AA9.3060905@de.ibm.com>

Hi Christian,

On Tue, Apr 22, 2014 at 12:55:37PM +0200, Christian Borntraeger wrote:
> While preparing/testing some KVM on s390 patches for the next merge
> window (target is kvm/next, which is based on 3.15-rc1) I hit a very
> severe performance hiccup on guest paging (all anonymous memory).
>
> All memory-bound guests are in "D" state now and the system is barely
> usable.
>
> Reverting commit 0bf1457f0cfca7bc026a82323ad34bcf58ad035d
> "mm: vmscan: do not swap anon pages just because free+file is low"
> makes the problem go away.
>
> According to /proc/vmstat, the system is now in direct reclaim almost
> all the time for every page fault (more than 10x more direct reclaims
> than kswapd reclaims).  With the patch reverted, everything is fine
> again.

Ouch.  Yes, I think we have to revert this for now.  How about this?

Acked-by: Christian Borntraeger
Acked-by: Rafael Aquini

---
From: Johannes Weiner
Subject: [patch] Revert "mm: vmscan: do not swap anon pages just
 because free+file is low"

This reverts commit 0bf1457f0cfc ("mm: vmscan: do not swap anon pages
just because free+file is low") because it introduced a regression in
mostly-anonymous workloads, where reclaim would become ineffective and
trap every allocating task in direct reclaim.
The problem is that there is a runaway feedback loop in the scan
balance between file and anon, where the balance tips heavily towards
a tiny, thrashing file LRU and anonymous pages are no longer being
looked at.  The commit in question removed the safeguard that would
detect such situations and respond with forced anonymous reclaim.

This commit was part of a series to fix premature swapping in loads
with relatively little cache, and while it made a small difference,
the cure is obviously worse than the disease.  Revert it.

Reported-by: Christian Borntraeger
Signed-off-by: Johannes Weiner
Cc: <stable@kernel.org> [3.12+]
---
 mm/vmscan.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9b6497eda806..169acb8e31c9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1916,6 +1916,24 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 		get_lru_size(lruvec, LRU_INACTIVE_FILE);
 
 	/*
+	 * Prevent the reclaimer from falling into the cache trap: as
+	 * cache pages start out inactive, every cache fault will tip
+	 * the scan balance towards the file LRU.  And as the file LRU
+	 * shrinks, so does the window for rotation from references.
+	 * This means we have a runaway feedback loop where a tiny
+	 * thrashing file LRU becomes infinitely more attractive than
+	 * anon pages.  Try to detect this based on file LRU size.
+	 */
+	if (global_reclaim(sc)) {
+		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
+
+		if (unlikely(file + free <= high_wmark_pages(zone))) {
+			scan_balance = SCAN_ANON;
+			goto out;
+		}
+	}
+
+	/*
 	 * There is enough inactive page cache, do not reclaim
 	 * anything from the anonymous working set right now.
 	 */