From patchwork Wed May 20 23:25:21 2020
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 11561715
From: Johannes Weiner
To: linux-mm@kvack.org
Cc: Rik van Riel, Minchan Kim, Michal Hocko, Andrew Morton, Joonsoo Kim,
    linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 10/14] mm: only count actual rotations as LRU reclaim cost
Date: Wed, 20 May 2020 19:25:21 -0400
Message-Id: <20200520232525.798933-11-hannes@cmpxchg.org>
In-Reply-To: <20200520232525.798933-1-hannes@cmpxchg.org>
References: <20200520232525.798933-1-hannes@cmpxchg.org>

When shrinking the active file list we rotate referenced pages only
when they're in an executable mapping. The others get deactivated.
When it comes to balancing scan pressure, though, we count all
referenced pages as rotated, even the deactivated ones.
Yet they do not carry the same cost to the system: the deactivated
page *might* refault later on, but the deactivation is tangible
progress toward freeing pages; rotations on the other hand cost time
and effort without getting any closer to freeing memory.

Don't treat both events as equal. The following patch will hook up
LRU balancing to cache and anon refaults, which are a much more
concrete cost signal for reclaiming one list over the other. Thus,
remove the maybe-IO cost bias from page references, and only note the
CPU cost for actual rotations that prevent the pages from getting
reclaimed.

v2: readable changelog (Michal Hocko)

Signed-off-by: Johannes Weiner
Acked-by: Minchan Kim
Acked-by: Michal Hocko
---
 mm/vmscan.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6ff63906a288..2c3fb8dd1159 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2054,7 +2054,6 @@ static void shrink_active_list(unsigned long nr_to_scan,
 		if (page_referenced(page, 0, sc->target_mem_cgroup,
 				    &vm_flags)) {
-			nr_rotated += hpage_nr_pages(page);
 			/*
 			 * Identify referenced, file-backed active pages and
 			 * give them one more trip around the active list. So
@@ -2065,6 +2064,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 			 * so we ignore them here.
 			 */
 			if ((vm_flags & VM_EXEC) && page_is_file_lru(page)) {
+				nr_rotated += hpage_nr_pages(page);
 				list_add(&page->lru, &l_active);
 				continue;
 			}
@@ -2080,10 +2080,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 */
 	spin_lock_irq(&pgdat->lru_lock);
 	/*
-	 * Count referenced pages from currently used mappings as rotated,
-	 * even though only some of them are actually re-activated. This
-	 * helps balance scan pressure between file and anonymous pages in
-	 * get_scan_count.
+	 * Rotating pages costs CPU without actually
+	 * progressing toward the reclaim goal.
 	 */
 	lru_note_cost(lruvec, file, nr_rotated);