From patchwork Tue Feb 8 08:19:00 2022
Date: Tue, 8 Feb 2022 01:19:00 -0700
In-Reply-To: <20220208081902.3550911-1-yuzhao@google.com>
Message-Id: <20220208081902.3550911-11-yuzhao@google.com>
References: <20220208081902.3550911-1-yuzhao@google.com>
Subject: [PATCH v7 10/12] mm: multigenerational LRU: thrashing prevention
From: Yu Zhao
To: Andrew Morton, Johannes Weiner, Mel Gorman, Michal Hocko
Cc: Andi Kleen, Aneesh Kumar, Barry Song <21cnbao@gmail.com>,
    Catalin Marinas, Dave Hansen, Hillf Danton, Jens Axboe, Jesse Barnes,
    Jonathan Corbet, Linus Torvalds, Matthew Wilcox, Michael Larabel,
    Mike Rapoport, Rik van Riel, Vlastimil Babka, Will Deacon, Ying Huang,
    linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    page-reclaim@google.com, x86@kernel.org, Yu Zhao, Brian Geffon,
    Jan Alexander Steffens, Oleksandr Natalenko, Steven Barrett,
    Suleiman Souhlal, Daniel Byrne, Donald Carr, Holger Hoffstätte,
    Konstantin Kharlamov, Shuang Zhai, Sofia Trinh

Add /sys/kernel/mm/lru_gen/min_ttl_ms for thrashing prevention, as
requested by many desktop users [1].

When set to value N, it prevents the working set of the last N
milliseconds from getting evicted. The OOM killer is triggered if this
working set can't be kept in memory. Based on the average human
detectable lag (~100 ms), N=1000 usually eliminates intolerable lags
due to thrashing. Larger values like N=3000 make lags less noticeable
at the risk of premature OOM kills.

Compared with the size-based approach, e.g., [2], this time-based
approach has the following advantages:
1) It's easier to configure because it's agnostic to applications and
   memory sizes.
2) It's more reliable because it's directly wired to the OOM killer.
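For reference, the knob described above is driven from user space by writing a value in milliseconds. A minimal sketch (run as root on a kernel carrying this patch; the values are only examples):

```shell
# Protect the working set of the last 1000 ms; prefer OOM kills over thrashing.
echo 1000 > /sys/kernel/mm/lru_gen/min_ttl_ms

# Read the current setting back (reported in milliseconds).
cat /sys/kernel/mm/lru_gen/min_ttl_ms

# Writing 0 disables thrashing prevention.
echo 0 > /sys/kernel/mm/lru_gen/min_ttl_ms
```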
[1] https://lore.kernel.org/lkml/Ydza%2FzXKY9ATRoh6@google.com/
[2] https://lore.kernel.org/lkml/20211130201652.2218636d@mail.inbox.lv/

Signed-off-by: Yu Zhao
Acked-by: Brian Geffon
Acked-by: Jan Alexander Steffens (heftig)
Acked-by: Oleksandr Natalenko
Acked-by: Steven Barrett
Acked-by: Suleiman Souhlal
Tested-by: Daniel Byrne
Tested-by: Donald Carr
Tested-by: Holger Hoffstätte
Tested-by: Konstantin Kharlamov
Tested-by: Shuang Zhai
Tested-by: Sofia Trinh
---
 mm/vmscan.c | 64 +++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 60 insertions(+), 4 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 700c35f2a030..4d37d63668b5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4335,7 +4335,8 @@ static long get_nr_evictable(struct lruvec *lruvec, unsigned long max_seq,
 	return total > 0 ? total : 0;
 }
 
-static void age_lruvec(struct lruvec *lruvec, struct scan_control *sc)
+static bool age_lruvec(struct lruvec *lruvec, struct scan_control *sc,
+		       unsigned long min_ttl)
 {
 	bool need_aging;
 	long nr_to_scan;
@@ -4344,14 +4345,22 @@ static void age_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 	DEFINE_MAX_SEQ(lruvec);
 	DEFINE_MIN_SEQ(lruvec);
 
+	if (min_ttl) {
+		int gen = lru_gen_from_seq(min_seq[TYPE_FILE]);
+		unsigned long birth = READ_ONCE(lruvec->lrugen.timestamps[gen]);
+
+		if (time_is_after_jiffies(birth + min_ttl))
+			return false;
+	}
+
 	mem_cgroup_calculate_protection(NULL, memcg);
 
 	if (mem_cgroup_below_min(memcg))
-		return;
+		return false;
 
 	nr_to_scan = get_nr_evictable(lruvec, max_seq, min_seq, swappiness, &need_aging);
 	if (!nr_to_scan)
-		return;
+		return false;
 
 	nr_to_scan >>= sc->priority;
 
@@ -4360,11 +4369,18 @@ static void age_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 	if (nr_to_scan && need_aging && (!mem_cgroup_below_low(memcg) ||
 	    sc->memcg_low_reclaim))
 		try_to_inc_max_seq(lruvec, max_seq, sc, swappiness, false);
+
+	return true;
 }
 
+/* to protect the working set of the last N jiffies */
+static unsigned long lru_gen_min_ttl __read_mostly;
+
 static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
 {
 	struct mem_cgroup *memcg;
+	bool success = false;
+	unsigned long min_ttl = READ_ONCE(lru_gen_min_ttl);
 
 	VM_BUG_ON(!current_is_kswapd());
@@ -4390,11 +4406,28 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
 	do {
 		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
 
-		age_lruvec(lruvec, sc);
+		if (age_lruvec(lruvec, sc, min_ttl))
+			success = true;
 
 		cond_resched();
 	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
 
+	/*
+	 * The main goal is to OOM kill if every generation from all memcgs is
+	 * younger than min_ttl. However, another theoretical possibility is all
+	 * memcgs are either below min or empty.
+	 */
+	if (!success && mutex_trylock(&oom_lock)) {
+		struct oom_control oc = {
+			.gfp_mask = sc->gfp_mask,
+			.order = sc->order,
+		};
+
+		out_of_memory(&oc);
+
+		mutex_unlock(&oom_lock);
+	}
+
 	current->reclaim_state->mm_walk = NULL;
 }
@@ -5107,6 +5140,28 @@ static void lru_gen_change_state(bool enable)
  *                          sysfs interface
  ******************************************************************************/
 
+static ssize_t show_min_ttl(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%u\n", jiffies_to_msecs(READ_ONCE(lru_gen_min_ttl)));
+}
+
+static ssize_t store_min_ttl(struct kobject *kobj, struct kobj_attribute *attr,
+			     const char *buf, size_t len)
+{
+	unsigned int msecs;
+
+	if (kstrtouint(buf, 0, &msecs))
+		return -EINVAL;
+
+	WRITE_ONCE(lru_gen_min_ttl, msecs_to_jiffies(msecs));
+
+	return len;
+}
+
+static struct kobj_attribute lru_gen_min_ttl_attr = __ATTR(
+	min_ttl_ms, 0644, show_min_ttl, store_min_ttl
+);
+
 static ssize_t show_enable(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
 {
 	unsigned int caps = 0;
@@ -5155,6 +5210,7 @@ static struct kobj_attribute lru_gen_enabled_attr = __ATTR(
 );
 
 static struct attribute *lru_gen_attrs[] = {
+	&lru_gen_min_ttl_attr.attr,
 	&lru_gen_enabled_attr.attr,
 	NULL
 };