From patchwork Sun Sep 18 08:00:08 2022
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 12979351
Date: Sun, 18 Sep 2022 02:00:08 -0600
In-Reply-To: <20220918080010.2920238-1-yuzhao@google.com>
Message-Id: <20220918080010.2920238-12-yuzhao@google.com>
Mime-Version: 1.0
References: <20220918080010.2920238-1-yuzhao@google.com>
X-Mailer: git-send-email 2.37.3.968.ga6b4b080e4-goog
Subject: [PATCH mm-unstable v15 11/14] mm: multi-gen LRU: thrashing prevention
From: Yu Zhao
To: Andrew Morton
Cc: Andi Kleen, Aneesh Kumar, Catalin Marinas, Dave Hansen, Hillf Danton,
    Jens Axboe, Johannes Weiner, Jonathan Corbet, Linus Torvalds,
    Matthew Wilcox, Mel Gorman, Michael Larabel, Michal Hocko,
    Mike Rapoport, Peter Zijlstra, Tejun Heo, Vlastimil Babka, Will Deacon,
    linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
    page-reclaim@google.com, Yu Zhao, Brian Geffon, Jan Alexander Steffens,
    Oleksandr Natalenko, Steven Barrett, Suleiman Souhlal, Daniel Byrne,
    Donald Carr, Holger Hoffstätte, Konstantin Kharlamov, Shuang Zhai,
    Sofia Trinh, Vaibhav Jain

Add /sys/kernel/mm/lru_gen/min_ttl_ms for thrashing prevention, as
requested by many desktop users [1].

When set to value N, it prevents the working set of N milliseconds
from getting evicted. The OOM killer is triggered if this working set
cannot be kept in memory. Based on the average human detectable lag
(~100ms), N=1000 usually eliminates intolerable lags due to thrashing.
Larger values like N=3000 make lags less noticeable at the risk of
premature OOM kills.

Compared with the size-based approach [2], this time-based approach
has the following advantages:

1. It is easier to configure because it is agnostic to applications
   and memory sizes.
2. It is more reliable because it is directly wired to the OOM killer.

[1] https://lore.kernel.org/r/Ydza%2FzXKY9ATRoh6@google.com/
[2] https://lore.kernel.org/r/20101028191523.GA14972@google.com/

Signed-off-by: Yu Zhao
Acked-by: Brian Geffon
Acked-by: Jan Alexander Steffens (heftig)
Acked-by: Oleksandr Natalenko
Acked-by: Steven Barrett
Acked-by: Suleiman Souhlal
Tested-by: Daniel Byrne
Tested-by: Donald Carr
Tested-by: Holger Hoffstätte
Tested-by: Konstantin Kharlamov
Tested-by: Shuang Zhai
Tested-by: Sofia Trinh
Tested-by: Vaibhav Jain
---
 include/linux/mmzone.h |  2 ++
 mm/vmscan.c            | 74 ++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 73 insertions(+), 3 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 95c58c7fbdff..87347945270b 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -422,6 +422,8 @@ struct lru_gen_struct {
 	unsigned long max_seq;
 	/* the eviction increments the oldest generation numbers */
 	unsigned long min_seq[ANON_AND_FILE];
+	/* the birth time of each generation in jiffies */
+	unsigned long timestamps[MAX_NR_GENS];
 	/* the multi-gen LRU lists, lazily sorted on eviction */
 	struct list_head lists[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
 	/* the multi-gen LRU sizes, eventually consistent */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 10f31f3c5054..9ef2ec3d3c0c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4293,6 +4293,7 @@ static void inc_max_seq(struct lruvec *lruvec, bool can_swap)
 	for (type = 0; type < ANON_AND_FILE; type++)
 		reset_ctrl_pos(lruvec, type, false);
 
+	WRITE_ONCE(lrugen->timestamps[next], jiffies);
 	/* make sure preceding modifications appear */
 	smp_store_release(&lrugen->max_seq, lrugen->max_seq + 1);
 
@@ -4422,7 +4423,7 @@ static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq, unsig
 	return false;
 }
 
-static void age_lruvec(struct lruvec *lruvec, struct scan_control *sc)
+static bool age_lruvec(struct lruvec *lruvec, struct scan_control *sc, unsigned long min_ttl)
 {
 	bool need_aging;
 	unsigned long nr_to_scan;
@@ -4436,16 +4437,36 @@ static void age_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 	mem_cgroup_calculate_protection(NULL, memcg);
 
 	if (mem_cgroup_below_min(memcg))
-		return;
+		return false;
 
 	need_aging = should_run_aging(lruvec, max_seq, min_seq, sc, swappiness, &nr_to_scan);
+
+	if (min_ttl) {
+		int gen = lru_gen_from_seq(min_seq[LRU_GEN_FILE]);
+		unsigned long birth = READ_ONCE(lruvec->lrugen.timestamps[gen]);
+
+		if (time_is_after_jiffies(birth + min_ttl))
+			return false;
+
+		/* the size is likely too small to be helpful */
+		if (!nr_to_scan && sc->priority != DEF_PRIORITY)
+			return false;
+	}
+
 	if (need_aging)
 		try_to_inc_max_seq(lruvec, max_seq, sc, swappiness);
+
+	return true;
 }
 
+/* to protect the working set of the last N jiffies */
+static unsigned long lru_gen_min_ttl __read_mostly;
+
 static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
 {
 	struct mem_cgroup *memcg;
+	bool success = false;
+	unsigned long min_ttl = READ_ONCE(lru_gen_min_ttl);
 
 	VM_WARN_ON_ONCE(!current_is_kswapd());
 
@@ -4468,12 +4489,32 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
 	do {
 		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
 
-		age_lruvec(lruvec, sc);
+		if (age_lruvec(lruvec, sc, min_ttl))
+			success = true;
 
 		cond_resched();
 	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
 
 	clear_mm_walk();
+
+	/* check the order to exclude compaction-induced reclaim */
+	if (success || !min_ttl || sc->order)
+		return;
+
+	/*
+	 * The main goal is to OOM kill if every generation from all memcgs is
+	 * younger than min_ttl. However, another possibility is all memcgs are
+	 * either below min or empty.
+	 */
+	if (mutex_trylock(&oom_lock)) {
+		struct oom_control oc = {
+			.gfp_mask = sc->gfp_mask,
+		};
+
+		out_of_memory(&oc);
+
+		mutex_unlock(&oom_lock);
+	}
 }
 
 /*
@@ -5231,6 +5272,28 @@ static void lru_gen_change_state(bool enabled)
  *                          sysfs interface
  ******************************************************************************/
 
+static ssize_t show_min_ttl(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%u\n", jiffies_to_msecs(READ_ONCE(lru_gen_min_ttl)));
+}
+
+static ssize_t store_min_ttl(struct kobject *kobj, struct kobj_attribute *attr,
+			     const char *buf, size_t len)
+{
+	unsigned int msecs;
+
+	if (kstrtouint(buf, 0, &msecs))
+		return -EINVAL;
+
+	WRITE_ONCE(lru_gen_min_ttl, msecs_to_jiffies(msecs));
+
+	return len;
+}
+
+static struct kobj_attribute lru_gen_min_ttl_attr = __ATTR(
+	min_ttl_ms, 0644, show_min_ttl, store_min_ttl
+);
+
 static ssize_t show_enabled(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
 {
 	unsigned int caps = 0;
@@ -5279,6 +5342,7 @@ static struct kobj_attribute lru_gen_enabled_attr = __ATTR(
 );
 
 static struct attribute *lru_gen_attrs[] = {
+	&lru_gen_min_ttl_attr.attr,
 	&lru_gen_enabled_attr.attr,
 	NULL
 };
@@ -5294,12 +5358,16 @@ static struct attribute_group lru_gen_attr_group = {
 
 void lru_gen_init_lruvec(struct lruvec *lruvec)
 {
+	int i;
 	int gen, type, zone;
 	struct lru_gen_struct *lrugen = &lruvec->lrugen;
 
 	lrugen->max_seq = MIN_NR_GENS + 1;
 	lrugen->enabled = lru_gen_enabled();
 
+	for (i = 0; i <= MIN_NR_GENS + 1; i++)
+		lrugen->timestamps[i] = jiffies;
+
 	for_each_gen_type_zone(gen, type, zone)
 		INIT_LIST_HEAD(&lrugen->lists[gen][type][zone]);
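
The following is a minimal user-space sketch, not part of the patch above, showing
how the new knob could be exercised once the series is applied. The sysfs path and
the write/read semantics come from store_min_ttl()/show_min_ttl() in the diff; the
file name set_min_ttl.c and the 1000 ms value are only illustrative, matching the
N=1000 suggestion in the commit message.

/* set_min_ttl.c: hypothetical usage example, not part of this patch */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/kernel/mm/lru_gen/min_ttl_ms";
	const char *val = "1000";	/* protect the working set of the last ~1000 ms */
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* store_min_ttl() parses this with kstrtouint() and stores it as jiffies */
	if (write(fd, val, strlen(val)) < 0) {
		perror("write");
		close(fd);
		return 1;
	}

	close(fd);
	return 0;
}

Reading the file back goes through show_min_ttl(), which converts the stored jiffies
back to milliseconds, so the reported value may be rounded to the kernel's tick
granularity.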