From patchwork Tue Sep 12 18:45:11 2023
From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Yu Zhao, Roman Gushchin, Johannes Weiner,
	Michal Hocko, Hugh Dickins, Nhat Pham, Yuanchu Xie,
	Suren Baghdasaryan, "T. J. Mercier", linux-kernel@vger.kernel.org,
	Kairui Song
Subject: [RFC PATCH v2 5/5] workingset, lru_gen: apply refault-distance based
 re-activation
Date: Wed, 13 Sep 2023 02:45:11 +0800
Message-ID: <20230912184511.49333-6-ryncsn@gmail.com>
In-Reply-To: <20230912184511.49333-1-ryncsn@gmail.com>
References: <20230912184511.49333-1-ryncsn@gmail.com>
Reply-To: Kairui Song
MIME-Version: 1.0

From: Kairui Song

I noticed MGLRU not working very well on certain workloads, observed on
some heavily stressed databases: when the file page workingset size
exceeds total memory, and the access distance of file pages (the
left-shift distance of a page before it gets activated, considering the
LRU starts from the right) is also larger than total memory, all file
pages get stuck in the oldest generation and are read in then evicted
perpetually. Despite the anon pages being idle, they never get aged. The
PID controller doesn't kick in until there are some minor access pattern
changes, and file pages are never promoted or reused.

Even though the memory can't cover the whole workingset, refault-distance
based re-activation can help hold part of the workingset in memory, which
reduces the IO workload significantly. So apply it to MGLRU as well. The
updated refault-distance model fits MGLRU well in most cases, if we
simply consider the last two generations as the inactive LRU and the
first two generations as the active LRU.
Some adjustments are done to fit the logic better, and the refault
distance is also made to contribute to page tiering and MGLRU's PID
controlled refault detection:

- If a tier-0 page has a qualified refault distance, promote it to a
  higher tier and send it to the second oldest generation.
- If a tier >= 1 page has a qualified refault distance, mark it as
  active and send it to the youngest generation.
- Increase the reference count of every page that has a qualified
  refault distance, and increase the PID controlled refault rate of the
  updated tier.

The following benchmark shows the improvement. To simulate the workload,
I set up a 3-replica mongodb cluster, each replica using 5 GB of cache
and 10 GB of oplog, on a 32G VM. The benchmark is done using
https://github.com/apavlo/py-tpcc.git, modified to run the STOCK_LEVEL
query only, to simulate slow queries and get a stable result.

Before the patch (with 10G swap; the result doesn't change whether swap
is on or not):

$ tpcc.py --config=mongodb.config mongodb --duration=900 --warehouses=500 --clients=30
==================================================================
Execution Results after 904 seconds
------------------------------------------------------------------
                 Executed    Time (µs)        Rate
  STOCK_LEVEL    503         27150226136.4    0.02 txn/s
------------------------------------------------------------------
  TOTAL          503         27150226136.4    0.02 txn/s

$ cat /proc/vmstat | grep working
workingset_nodes 53391
workingset_refault_anon 0
workingset_refault_file 23856735
workingset_activate_anon 0
workingset_activate_file 23845737
workingset_restore_anon 0
workingset_restore_file 18280692
workingset_nodereclaim 1024

$ free -m
               total        used        free      shared  buff/cache   available
Mem:           31837        6752         379          23       24706       24607
Swap:          10239           0       10239

After the patch (with 10G swap on the same disk; similar result using ZRAM):

$ tpcc.py --config=mongodb.config mongodb --duration=900 --warehouses=500 --clients=30
==================================================================
Execution Results after 903 seconds
------------------------------------------------------------------
                 Executed    Time (µs)        Rate
  STOCK_LEVEL    2575        27094953498.8    0.10 txn/s
------------------------------------------------------------------
  TOTAL          2575        27094953498.8    0.10 txn/s

$ cat /proc/vmstat | grep working
workingset_nodes 78249
workingset_refault_anon 10139
workingset_refault_file 23001863
workingset_activate_anon 7238
workingset_activate_file 6718032
workingset_restore_anon 7432
workingset_restore_file 6719406
workingset_nodereclaim 9747

$ free -m
               total        used        free      shared  buff/cache   available
Mem:           31837        7376         320           3       24140       24014
Swap:          10239        1662        8577

The performance is 5x better than before, and the idle anon pages can
now get swapped out as expected. Testing with lower stress also shows an
improvement.

I also checked the benchmark with memtier/memcached and fio, using a
setup similar to the one in commit ac35a4902374 but scaled down to fit
my test environment:

memtier test (16G ramdisk as swap, 2G memcg limit, VM on an EPYC 7K62):
  memcached -u nobody -m 16384 -s /tmp/memcached.socket -a 0766 \
    -t 12 -B binary &
  memtier_benchmark -S /tmp/memcached.socket -P memcache_binary -n allkeys \
    --key-minimum=1 --key-maximum=24000000 --key-pattern=P:P -c 1 \
    -t 12 --ratio 1:0 --pipeline 8 -d 2000 -x 6

fio test (16G ramdisk on /mnt, 4G memcg limit, VM on an EPYC 7K62):
  fio -name=refault --numjobs=14 --directory=/mnt --size=1024m \
    --buffered=1 --ioengine=io_uring --iodepth=128 \
    --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
    --rw=randread --random_distribution=random --norandommap \
    --time_based --ramp_time=5m --runtime=5m --group_reporting

mysql test (15G buffer pool with 16G memcg limit, VM on an EPYC 7K62):
  sysbench /usr/share/sysbench/oltp_read_only.lua \
    --tables=48 --table-size=2000000 --threads=32 --time=1800

Before this patch:
  memtier: 379329.77 op/s
  fio: 5786.8k iops
  mysql: 150190.43 qps

After this patch:
  memtier: 373877.41 op/s
  fio: 5805.5k iops
  mysql: 150220.93 qps

The tests look OK except for a bit of extra overhead from the atomic
operations introduced; there seems to be no LRU accuracy drop.

Signed-off-by: Kairui Song
---
 mm/workingset.c | 78 +++++++++++++++++++++++++++++++++----------------
 1 file changed, 53 insertions(+), 25 deletions(-)

diff --git a/mm/workingset.c b/mm/workingset.c
index ff7587456b7f..1fa336054528 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -175,6 +175,7 @@ MEM_CGROUP_ID_SHIFT)
 #define EVICTION_BITS		(BITS_PER_LONG - (EVICTION_SHIFT))
 #define EVICTION_MASK		(~0UL >> EVICTION_SHIFT)
+#define LRU_GEN_EVICTION_BITS	(EVICTION_BITS - LRU_REFS_WIDTH - LRU_GEN_WIDTH)
 
 /*
  * Eviction timestamps need to be able to cover the full range of
@@ -185,6 +186,7 @@
  * evictions into coarser buckets by shaving off lower timestamp bits.
  */
 static unsigned int bucket_order __read_mostly;
+static unsigned int lru_gen_bucket_order __read_mostly;
 
 static void *pack_shadow(int memcgid, pg_data_t *pgdat, unsigned long eviction,
 			 bool workingset)
@@ -240,7 +242,7 @@ static inline bool lru_refault(struct mem_cgroup *memcg,
 			       int bits, int bucket_order)
 {
 	unsigned long refault, distance;
-	unsigned long workingset, active, inactive, inactive_file, inactive_anon = 0;
+	unsigned long active, inactive_file, inactive_anon = 0;
 
 	eviction <<= bucket_order;
 	refault = atomic_long_read(&lruvec->nonresident_age);
@@ -280,7 +282,7 @@ static inline bool lru_refault(struct mem_cgroup *memcg,
 	 * active pages with one time refaulted page may not be a good idea.
 	 */
 	if (active >= (inactive_anon + inactive_file))
-		return distance < inactive_anon + inactive_file;
+		return distance < (inactive_anon + inactive_file);
 	else
 		return distance < active + (file ? inactive_anon : inactive_file);
 }
@@ -333,10 +335,14 @@ static void *lru_gen_eviction(struct folio *folio)
 	lruvec = mem_cgroup_lruvec(memcg, pgdat);
 	lrugen = &lruvec->lrugen;
 	min_seq = READ_ONCE(lrugen->min_seq[type]);
+	token = (min_seq << LRU_REFS_WIDTH) | max(refs - 1, 0);
+	token <<= LRU_GEN_EVICTION_BITS;
+	token |= lru_eviction(lruvec, LRU_GEN_EVICTION_BITS, lru_gen_bucket_order);
 
 	hist = lru_hist_from_seq(min_seq);
 	atomic_long_add(delta, &lrugen->evicted[hist][type][tier]);
+	workingset_age_nonresident(lruvec, folio_nr_pages(folio));
 
 	return pack_shadow(mem_cgroup_id(memcg), pgdat, token, refs);
 }
@@ -351,44 +357,55 @@ static bool lru_gen_test_recent(struct lruvec *lruvec, bool file,
 	unsigned long min_seq;
 
 	min_seq = READ_ONCE(lruvec->lrugen.min_seq[file]);
+	token >>= LRU_GEN_EVICTION_BITS;
 	return (token >> LRU_REFS_WIDTH) == (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH));
 }
 
 static void lru_gen_refault(struct folio *folio, void *shadow)
 {
 	int memcgid;
-	bool recent;
+	bool refault;
 	bool workingset;
 	unsigned long token;
+	bool recent = false;
+	int refault_tier = 0;
 	int hist, tier, refs;
 	struct lruvec *lruvec;
+	struct mem_cgroup *memcg;
 	struct pglist_data *pgdat;
 	struct lru_gen_folio *lrugen;
 	int type = folio_is_file_lru(folio);
 	int delta = folio_nr_pages(folio);
 
-	rcu_read_lock();
-
 	unpack_shadow(shadow, &memcgid, &pgdat, &token, &workingset);
-	lruvec = mem_cgroup_lruvec(mem_cgroup_from_id(memcgid), pgdat);
-	if (lruvec != folio_lruvec(folio))
-		goto unlock;
+	memcg = mem_cgroup_from_id(memcgid);
+	lruvec = mem_cgroup_lruvec(memcg, pgdat);
+	/* memcg can be NULL, go through lruvec */
+	memcg = lruvec_memcg(lruvec);
 
 	mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + type, delta);
-
-	recent = lru_gen_test_recent(lruvec, type, token);
-	if (!recent)
-		goto unlock;
+	refault = lru_refault(memcg, lruvec, token, type,
+			      LRU_GEN_EVICTION_BITS, lru_gen_bucket_order);
+	if (lruvec == folio_lruvec(folio))
+		recent = lru_gen_test_recent(lruvec, type, token);
+	if (!recent && !refault)
+		return;
 
 	lrugen = &lruvec->lrugen;
-
 	hist = lru_hist_from_seq(READ_ONCE(lrugen->min_seq[type]));
 	/* see the comment in folio_lru_refs() */
+	token >>= LRU_GEN_EVICTION_BITS;
 	refs = (token & (BIT(LRU_REFS_WIDTH) - 1)) + workingset;
 	tier = lru_tier_from_refs(refs);
-
-	atomic_long_add(delta, &lrugen->refaulted[hist][type][tier]);
-	mod_lruvec_state(lruvec, WORKINGSET_ACTIVATE_BASE + type, delta);
+	refault_tier = tier;
+
+	if (refault) {
+		if (refs)
+			folio_set_active(folio);
+		if (refs != BIT(LRU_REFS_WIDTH))
+			refault_tier = lru_tier_from_refs(refs + 1);
+		mod_lruvec_state(lruvec, WORKINGSET_ACTIVATE_BASE + type, delta);
+	}
 
 	/*
 	 * Count the following two cases as stalls:
@@ -397,12 +414,17 @@ static void lru_gen_refault(struct folio *folio, void *shadow)
 	 * 2. For pages accessed multiple times through file descriptors,
 	 *    numbers of accesses might have been out of the range.
 	 */
-	if (lru_gen_in_fault() || refs == BIT(LRU_REFS_WIDTH)) {
+	if (refault || lru_gen_in_fault() || refs == BIT(LRU_REFS_WIDTH)) {
 		folio_set_workingset(folio);
 		mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + type, delta);
 	}
-unlock:
-	rcu_read_unlock();
+
+	if (recent && refault_tier == tier) {
+		atomic_long_add(delta, &lrugen->refaulted[hist][type][tier]);
+	} else {
+		atomic_long_add(delta, &lrugen->avg_total[type][refault_tier]);
+		atomic_long_add(delta, &lrugen->avg_refaulted[type][refault_tier]);
+	}
 }
 
 #else /* !CONFIG_LRU_GEN */
@@ -524,16 +546,15 @@ void workingset_refault(struct folio *folio, void *shadow)
 	bool workingset;
 	long nr;
 
-	if (lru_gen_enabled()) {
-		lru_gen_refault(folio, shadow);
-		return;
-	}
-
 	/* Flush stats (and potentially sleep) before holding RCU read lock */
 	mem_cgroup_flush_stats_ratelimited();
 
-	rcu_read_lock();
+	if (lru_gen_enabled()) {
+		lru_gen_refault(folio, shadow);
+		goto out;
+	}
+
 	/*
 	 * The activation decision for this folio is made at the level
 	 * where the eviction occurred, as that is where the LRU order
@@ -780,6 +801,13 @@ static int __init workingset_init(void)
 	pr_info("workingset: timestamp_bits=%d max_order=%d bucket_order=%u\n",
 		EVICTION_BITS, max_order, bucket_order);
 
+#ifdef CONFIG_LRU_GEN
+	if (max_order > LRU_GEN_EVICTION_BITS)
+		lru_gen_bucket_order = max_order - LRU_GEN_EVICTION_BITS;
+	pr_info("workingset: lru_gen_timestamp_bits=%d lru_gen_bucket_order=%u\n",
+		LRU_GEN_EVICTION_BITS, lru_gen_bucket_order);
+#endif
+
 	ret = prealloc_shrinker(&workingset_shadow_shrinker, "mm-shadow");
 	if (ret)
 		goto err;