From patchwork Wed Aug 18 06:31:04 2021
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 12443393
Date: Wed, 18 Aug 2021 00:31:04 -0600
In-Reply-To: <20210818063107.2696454-1-yuzhao@google.com>
Message-Id: <20210818063107.2696454-9-yuzhao@google.com>
References: <20210818063107.2696454-1-yuzhao@google.com>
X-Mailer: git-send-email 2.33.0.rc1.237.g0d66db33f3-goog
Subject: [PATCH v4 08/11] mm: multigenerational lru: eviction
From: Yu Zhao
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Hillf Danton, page-reclaim@google.com,
 Yu Zhao, Konstantin Kharlamov

The eviction consumes old generations. Given an lruvec, it scans the
pages on lrugen->lists indexed by the anon and file min_seq[2] (modulo
MAX_NR_GENS). It first tries to select a type based on the values of
min_seq[2]; if they are equal, it selects the type that has the lower
refault rate.

The eviction sorts a page according to its updated generation number
if the aging has found this page accessed. It also moves a page to the
next generation if the page is from an upper tier that has a higher
refault rate than the base tier. The eviction increments min_seq[2] of
the selected type when it finds that the lrugen->lists indexed by
min_seq[2] of this type are empty.

With the aging and the eviction in place, implementing page reclaim
becomes quite straightforward:
1) To reduce latency, direct reclaim skips the aging unless both
   min_seq[2] are equal to max_seq-1; then it invokes the eviction.
2) To keep the aging out of the direct reclaim path, kswapd invokes
   the aging if either of min_seq[2] is equal to max_seq-1; then it
   invokes the eviction.

Signed-off-by: Yu Zhao
Tested-by: Konstantin Kharlamov
---
 mm/vmscan.c | 440 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 440 insertions(+)
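As a standalone illustration (not part of the patch), the type-selection
order that isolate_pages() in the diff below implements can be summarized
in one function. select_type() and refault_tiebreak() are made-up names
for this sketch only; the tiebreak stands in for get_type_to_scan(). As
in the patch, type 0 is anon and type 1 is file, and a smaller min_seq
means an older generation.

    static int select_type(const unsigned long min_seq[2], int swappiness,
                           int (*refault_tiebreak)(void))
    {
            if (!swappiness)
                    return 1;               /* swapping disabled: evict file only */
            if (min_seq[0] > min_seq[1])
                    return 1;               /* file has the older generation */
            if (min_seq[0] < min_seq[1])
                    return 0;               /* anon has the older generation */
            if (swappiness == 1)
                    return 1;               /* same generation: file first */
            if (swappiness == 200)
                    return 0;               /* same generation: anon first */
            return refault_tiebreak();      /* equal: lower refault rate wins */
    }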
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 757ba4f415cc..2f1fffbd2d61 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1311,6 +1311,11 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 		if (!sc->may_unmap && page_mapped(page))
 			goto keep_locked;
 
+		/* lru_gen_scan_around() has updated this page? */
+		if (lru_gen_enabled() && !ignore_references &&
+		    page_mapped(page) && PageReferenced(page))
+			goto keep_locked;
+
 		may_enter_fs = (sc->gfp_mask & __GFP_FS) ||
 			(PageSwapCache(page) && (sc->gfp_mask & __GFP_IO));
 
@@ -2447,6 +2452,9 @@ static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc)
 	unsigned long file;
 	struct lruvec *target_lruvec;
 
+	if (lru_gen_enabled())
+		return;
+
 	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
 
 	/*
@@ -4087,6 +4095,426 @@ void lru_gen_scan_around(struct page_vma_mapped_walk *pvmw)
 
 	unlock_page_memcg(pvmw->page);
 }
 
+/******************************************************************************
+ *                          the eviction
+ ******************************************************************************/
+
+static bool sort_page(struct page *page, struct lruvec *lruvec, int tier_to_isolate)
+{
+	bool success;
+	int gen = page_lru_gen(page);
+	int type = page_is_file_lru(page);
+	int zone = page_zonenum(page);
+	int tier = page_lru_tier(page);
+	struct lrugen *lrugen = &lruvec->evictable;
+
+	VM_BUG_ON_PAGE(gen < 0, page);
+	VM_BUG_ON_PAGE(tier_to_isolate < 0, page);
+
+	/* a lazy-free page that has been written into? */
+	if (type && PageDirty(page) && PageAnon(page)) {
+		success = lru_gen_del_page(page, lruvec, false);
+		VM_BUG_ON_PAGE(!success, page);
+		SetPageSwapBacked(page);
+		add_page_to_lru_list_tail(page, lruvec);
+		return true;
+	}
+
+	/* page_update_gen() has updated this page? */
+	if (gen != lru_gen_from_seq(lrugen->min_seq[type])) {
+		list_move(&page->lru, &lrugen->lists[gen][type][zone]);
+		return true;
+	}
+
+	/* protect this page if its tier has a higher refault rate */
+	if (tier_to_isolate < tier) {
+		int hist = lru_hist_from_seq(gen);
+
+		page_inc_gen(page, lruvec, false);
+		WRITE_ONCE(lrugen->protected[hist][type][tier - 1],
+			   lrugen->protected[hist][type][tier - 1] + thp_nr_pages(page));
+		inc_lruvec_state(lruvec, WORKINGSET_ACTIVATE_BASE + type);
+		return true;
+	}
+
+	/* mark this page for reclaim if it's pending writeback */
+	if (PageWriteback(page) || (type && PageDirty(page))) {
+		page_inc_gen(page, lruvec, true);
+		return true;
+	}
+
+	return false;
+}
+
+static bool isolate_page(struct page *page, struct lruvec *lruvec, struct scan_control *sc)
+{
+	bool success;
+
+	if (!sc->may_unmap && page_mapped(page))
+		return false;
+
+	if (!(sc->may_writepage && (sc->gfp_mask & __GFP_IO)) &&
+	    (PageDirty(page) || (PageAnon(page) && !PageSwapCache(page))))
+		return false;
+
+	if (!get_page_unless_zero(page))
+		return false;
+
+	if (!TestClearPageLRU(page)) {
+		put_page(page);
+		return false;
+	}
+
+	success = lru_gen_del_page(page, lruvec, true);
+	VM_BUG_ON_PAGE(!success, page);
+
+	return true;
+}
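The ladder in sort_page() above can be restated as a standalone
classifier. The types and names below are hypothetical, kept only to
show the order in which the four "sorted" cases shield a page from
isolation:

    #include <stdbool.h>

    enum sort_action {
            RECLAIM_LAZYFREE,  /* lazy-free anon written into: requeue at the tail */
            KEEP_UPDATED_GEN,  /* the aging refreshed its generation: move it there */
            KEEP_PROTECTED,    /* hotter upper tier: promote one generation */
            KEEP_WRITEBACK,    /* dirty or under writeback: promote and mark */
            TRY_TO_ISOLATE     /* none of the above: eviction candidate */
    };

    struct page_state {        /* decoded from page->flags in the real code */
            int gen, tier;
            bool is_file, dirty, anon, writeback;
    };

    static enum sort_action classify(const struct page_state *p,
                                     int min_gen, int tier_to_isolate)
    {
            if (p->is_file && p->dirty && p->anon)
                    return RECLAIM_LAZYFREE;
            if (p->gen != min_gen)
                    return KEEP_UPDATED_GEN;
            if (tier_to_isolate < p->tier)
                    return KEEP_PROTECTED;
            if (p->writeback || (p->is_file && p->dirty))
                    return KEEP_WRITEBACK;
            return TRY_TO_ISOLATE;
    }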
+
+static int scan_pages(struct lruvec *lruvec, struct scan_control *sc, long *nr_to_scan,
+		      int type, int tier, struct list_head *list)
+{
+	bool success;
+	int gen, zone;
+	enum vm_event_item item;
+	int sorted = 0;
+	int scanned = 0;
+	int isolated = 0;
+	int remaining = MAX_BATCH_SIZE;
+	struct lrugen *lrugen = &lruvec->evictable;
+	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
+
+	VM_BUG_ON(!list_empty(list));
+
+	if (get_nr_gens(lruvec, type) == MIN_NR_GENS)
+		return -ENOENT;
+
+	gen = lru_gen_from_seq(lrugen->min_seq[type]);
+
+	for (zone = sc->reclaim_idx; zone >= 0; zone--) {
+		LIST_HEAD(moved);
+		int skipped = 0;
+		struct list_head *head = &lrugen->lists[gen][type][zone];
+
+		while (!list_empty(head)) {
+			struct page *page = lru_to_page(head);
+			int delta = thp_nr_pages(page);
+
+			VM_BUG_ON_PAGE(PageTail(page), page);
+			VM_BUG_ON_PAGE(PageUnevictable(page), page);
+			VM_BUG_ON_PAGE(PageActive(page), page);
+			VM_BUG_ON_PAGE(page_is_file_lru(page) != type, page);
+			VM_BUG_ON_PAGE(page_zonenum(page) != zone, page);
+
+			prefetchw_prev_lru_page(page, head, flags);
+
+			scanned += delta;
+
+			if (sort_page(page, lruvec, tier))
+				sorted += delta;
+			else if (isolate_page(page, lruvec, sc)) {
+				list_add(&page->lru, list);
+				isolated += delta;
+			} else {
+				list_move(&page->lru, &moved);
+				skipped += delta;
+			}
+
+			if (!--remaining)
+				break;
+
+			if (max(isolated, skipped) >= SWAP_CLUSTER_MAX)
+				break;
+		}
+
+		list_splice(&moved, head);
+		__count_zid_vm_events(PGSCAN_SKIP, zone, skipped);
+
+		if (!remaining || isolated >= SWAP_CLUSTER_MAX)
+			break;
+	}
+
+	success = try_to_inc_min_seq(lruvec, type);
+
+	*nr_to_scan -= scanned;
+
+	item = current_is_kswapd() ? PGSCAN_KSWAPD : PGSCAN_DIRECT;
+	if (!cgroup_reclaim(sc)) {
+		__count_vm_events(item, isolated);
+		__count_vm_events(PGREFILL, sorted);
+	}
+	__count_memcg_events(memcg, item, isolated);
+	__count_memcg_events(memcg, PGREFILL, sorted);
+	__count_vm_events(PGSCAN_ANON + type, isolated);
+
+	if (isolated)
+		return isolated;
+	/*
+	 * We may have trouble finding eligible pages due to reclaim_idx,
+	 * may_unmap and may_writepage. The following check makes sure we won't
+	 * be stuck if we aren't making enough progress.
+	 */
+	return !remaining || success || *nr_to_scan <= 0 ? 0 : -ENOENT;
+}
+
+static int get_tier_to_isolate(struct lruvec *lruvec, int type)
+{
+	int tier;
+	struct controller_pos sp, pv;
+
+	/*
+	 * Ideally we don't want to evict upper tiers that have higher refault
+	 * rates. However, we need to leave a margin for the fluctuations in
+	 * refault rates. So we use a larger gain factor to make sure upper
+	 * tiers are indeed more active. We choose 2 because the lowest upper
+	 * tier would have twice the refault rate of the base tier, according
+	 * to their numbers of accesses.
+	 */
+	read_controller_pos(&sp, lruvec, type, 0, 1);
+	for (tier = 1; tier < MAX_NR_TIERS; tier++) {
+		read_controller_pos(&pv, lruvec, type, tier, 2);
+		if (!positive_ctrl_err(&sp, &pv))
+			break;
+	}
+
+	return tier - 1;
+}
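read_controller_pos() and positive_ctrl_err() are introduced by an
earlier patch in this series. Assuming each position boils down to a
refaulted count, a total count and a gain, the loop above reduces to
the cross-multiplied rate test sketched here; the real helper may also
guard against zero totals and short-circuit on small sample sizes:

    struct pos { unsigned long refaulted, total, gain; };

    /*
     * Allow evicting the challenger (pv) unless its refault rate exceeds
     * the baseline (sp) by more than the ratio of their gains: with
     * sp.gain == 1 and pv.gain == 2, an upper tier stays protected only
     * while it refaults at more than twice the base tier's rate.
     */
    static bool ctrl_err_positive(const struct pos *sp, const struct pos *pv)
    {
            return pv->refaulted * sp->total * sp->gain <=
                   sp->refaulted * pv->total * pv->gain;
    }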
+
+static int get_type_to_scan(struct lruvec *lruvec, int swappiness, int *tier_to_isolate)
+{
+	int type, tier;
+	struct controller_pos sp, pv;
+	int gain[ANON_AND_FILE] = { swappiness, 200 - swappiness };
+
+	/*
+	 * Compare the refault rates between the base tiers of anon and file to
+	 * determine which type to evict. We also need to compare the refault
+	 * rates of the upper tiers of the selected type with that of the base
+	 * tier of the other type to determine which tier of the selected type
+	 * to evict.
+	 */
+	read_controller_pos(&sp, lruvec, 0, 0, gain[0]);
+	read_controller_pos(&pv, lruvec, 1, 0, gain[1]);
+	type = positive_ctrl_err(&sp, &pv);
+
+	read_controller_pos(&sp, lruvec, !type, 0, gain[!type]);
+	for (tier = 1; tier < MAX_NR_TIERS; tier++) {
+		read_controller_pos(&pv, lruvec, type, tier, gain[type]);
+		if (!positive_ctrl_err(&sp, &pv))
+			break;
+	}
+
+	*tier_to_isolate = tier - 1;
+
+	return type;
+}
+
+static int isolate_pages(struct lruvec *lruvec, struct scan_control *sc, int swappiness,
+			 long *nr_to_scan, int *type_to_scan, struct list_head *list)
+{
+	int i;
+	int type;
+	int isolated;
+	int tier = -1;
+	DEFINE_MAX_SEQ(lruvec);
+	DEFINE_MIN_SEQ(lruvec);
+
+	VM_BUG_ON(!seq_is_valid(lruvec));
+
+	if (get_hi_wmark(max_seq, min_seq, swappiness) == MIN_NR_GENS)
+		return 0;
+	/*
+	 * Try to select a type based on generations and swappiness, and if that
+	 * fails, fall back to get_type_to_scan(). When anon and file are both
+	 * available from the same generation, swappiness 200 is interpreted as
+	 * anon first and swappiness 1 is interpreted as file first.
+	 */
+	if (!swappiness)
+		type = 1;
+	else if (min_seq[0] > min_seq[1])
+		type = 1;
+	else if (min_seq[0] < min_seq[1])
+		type = 0;
+	else if (swappiness == 1)
+		type = 1;
+	else if (swappiness == 200)
+		type = 0;
+	else
+		type = get_type_to_scan(lruvec, swappiness, &tier);
+
+	if (tier == -1)
+		tier = get_tier_to_isolate(lruvec, type);
+
+	for (i = !swappiness; i < ANON_AND_FILE; i++) {
+		isolated = scan_pages(lruvec, sc, nr_to_scan, type, tier, list);
+		if (isolated >= 0)
+			break;
+
+		type = !type;
+		tier = get_tier_to_isolate(lruvec, type);
+	}
+
+	if (isolated < 0)
+		isolated = *nr_to_scan = 0;
+
+	*type_to_scan = type;
+
+	return isolated;
+}
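A worked example of the gain vector in get_type_to_scan() above, under
the same simplified rate model as the previous sketch: swappiness splits
the 0..200 scale into an anon gain and a file gain, so a higher
swappiness makes file harder to pick for eviction relative to anon. The
numbers here are invented for illustration:

    #include <stdio.h>

    int main(void)
    {
            int swappiness = 60;                            /* a common default */
            unsigned long gain_anon = swappiness;           /* gain[0] == 60 */
            unsigned long gain_file = 200 - swappiness;     /* gain[1] == 140 */

            /*
             * With anon as the baseline (sp) and file as the challenger (pv),
             * file is selected for eviction while
             * file_rate <= anon_rate * gain_file / gain_anon.
             */
            printf("file evicted while file_rate <= %.2f * anon_rate\n",
                   (double)gain_file / gain_anon);
            return 0;
    }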
+
+/* Main function used by the foreground, the background and the user-triggered eviction. */
+static bool evict_pages(struct lruvec *lruvec, struct scan_control *sc, int swappiness,
+			long *nr_to_scan)
+{
+	int type;
+	int isolated;
+	int reclaimed;
+	LIST_HEAD(list);
+	struct page *page;
+	enum vm_event_item item;
+	struct reclaim_stat stat;
+	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
+	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
+
+	spin_lock_irq(&lruvec->lru_lock);
+
+	isolated = isolate_pages(lruvec, sc, swappiness, nr_to_scan, &type, &list);
+	VM_BUG_ON(list_empty(&list) == !!isolated);
+
+	if (isolated)
+		__mod_node_page_state(pgdat, NR_ISOLATED_ANON + type, isolated);
+
+	spin_unlock_irq(&lruvec->lru_lock);
+
+	if (!isolated)
+		goto done;
+
+	reclaimed = shrink_page_list(&list, pgdat, sc, &stat, false);
+	/*
+	 * We need to prevent rejected pages from being added back to the same
+	 * lists they were isolated from. Otherwise we may risk looping on them
+	 * forever.
+	 */
+	list_for_each_entry(page, &list, lru) {
+		if (!page_evictable(page))
+			continue;
+
+		if (!PageReclaim(page) || !(PageDirty(page) || PageWriteback(page)))
+			SetPageActive(page);
+
+		ClearPageReferenced(page);
+		ClearPageWorkingset(page);
+	}
+
+	spin_lock_irq(&lruvec->lru_lock);
+
+	move_pages_to_lru(lruvec, &list);
+
+	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + type, -isolated);
+
+	item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
+	if (!cgroup_reclaim(sc))
+		__count_vm_events(item, reclaimed);
+	__count_memcg_events(memcg, item, reclaimed);
+	__count_vm_events(PGSTEAL_ANON + type, reclaimed);
+
+	spin_unlock_irq(&lruvec->lru_lock);
+
+	mem_cgroup_uncharge_list(&list);
+	free_unref_page_list(&list);
+
+	sc->nr_reclaimed += reclaimed;
+done:
+	return *nr_to_scan > 0 && sc->nr_reclaimed < sc->nr_to_reclaim;
+}
+
+static long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc, int swappiness)
+{
+	int gen, type, zone;
+	int nr_gens;
+	long nr_to_scan = 0;
+	struct lrugen *lrugen = &lruvec->evictable;
+	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
+	DEFINE_MAX_SEQ(lruvec);
+	DEFINE_MIN_SEQ(lruvec);
+
+	for (type = !swappiness; type < ANON_AND_FILE; type++) {
+		unsigned long seq;
+
+		for (seq = min_seq[type]; seq <= max_seq; seq++) {
+			gen = lru_gen_from_seq(seq);
+
+			for (zone = 0; zone <= sc->reclaim_idx; zone++)
+				nr_to_scan += READ_ONCE(lrugen->sizes[gen][type][zone]);
+		}
+	}
+
+	if (nr_to_scan <= 0)
+		return 0;
+
+	nr_gens = get_hi_wmark(max_seq, min_seq, swappiness);
+
+	if (current_is_kswapd()) {
+		gen = lru_gen_from_seq(max_seq - nr_gens + 1);
+		if (time_is_before_eq_jiffies(READ_ONCE(lrugen->timestamps[gen]) +
+					      READ_ONCE(lru_gen_min_ttl)))
+			sc->file_is_tiny = 0;
+
+		/* leave the work to lru_gen_age_node() */
+		if (nr_gens == MIN_NR_GENS)
+			return 0;
+
+		if (nr_to_scan >= sc->nr_to_reclaim)
+			sc->force_deactivate = 0;
+	}
+
+	nr_to_scan = max(nr_to_scan >> sc->priority, (long)!mem_cgroup_online(memcg));
+	if (!nr_to_scan || nr_gens > MIN_NR_GENS)
+		return nr_to_scan;
+
+	/* move onto other memcgs if we haven't tried them all yet */
+	if (!mem_cgroup_disabled() && !sc->force_deactivate) {
+		sc->skipped_deactivate = 1;
+		return 0;
+	}
+
+	return try_to_inc_max_seq(lruvec, max_seq, sc, swappiness) ? nr_to_scan : 0;
+}
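A concrete instance of the priority scaling at the bottom of
get_nr_to_scan() above. DEF_PRIORITY is 12 in the kernel; the lruvec
size is invented for the example:

    #include <stdio.h>

    int main(void)
    {
            long size = 4L << 20;   /* 4M pages, i.e. 16 GiB of 4 KiB pages */
            int priority = 12;      /* DEF_PRIORITY */

            /* nr_to_scan = size >> priority: 1/4096th of the pages per pass */
            printf("%ld pages\n", size >> priority);        /* prints 1024 */
            return 0;
    }

Each time reclaim fails to make enough progress, sc->priority drops by
one and the scan batch doubles.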
+
+static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
+{
+	struct blk_plug plug;
+	long scanned = 0;
+	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
+
+	lru_add_drain();
+
+	blk_start_plug(&plug);
+
+	while (true) {
+		long nr_to_scan;
+		int swappiness = sc->may_swap ? get_swappiness(memcg) : 0;
+
+		nr_to_scan = get_nr_to_scan(lruvec, sc, swappiness) - scanned;
+		if (nr_to_scan <= 0)
+			break;
+
+		scanned += nr_to_scan;
+
+		if (!evict_pages(lruvec, sc, swappiness, &nr_to_scan))
+			break;
+
+		scanned -= nr_to_scan;
+
+		if (mem_cgroup_below_min(memcg) ||
+		    (mem_cgroup_below_low(memcg) && !sc->memcg_low_reclaim))
+			break;
+
+		cond_resched();
+	}
+
+	blk_finish_plug(&plug);
+}
+
 /******************************************************************************
  *                          state change
  ******************************************************************************/
@@ -4358,6 +4786,10 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
 {
 }
 
+static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
+{
+}
+
 #endif /* CONFIG_LRU_GEN */
 
 static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
@@ -4371,6 +4803,11 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 	struct blk_plug plug;
 	bool scan_adjusted;
 
+	if (lru_gen_enabled()) {
+		lru_gen_shrink_lruvec(lruvec, sc);
+		return;
+	}
+
 	get_scan_count(lruvec, sc, nr);
 
 	/* Record the original scan target for proportional adjustments later */
@@ -4837,6 +5274,9 @@ static void snapshot_refaults(struct mem_cgroup *target_memcg, pg_data_t *pgdat)
 	struct lruvec *target_lruvec;
 	unsigned long refaults;
 
+	if (lru_gen_enabled())
+		return;
+
 	target_lruvec = mem_cgroup_lruvec(target_memcg, pgdat);
 	refaults = lruvec_page_state(target_lruvec, WORKINGSET_ACTIVATE_ANON);
 	target_lruvec->refaults[0] = refaults;
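Finally, the scanned bookkeeping in lru_gen_shrink_lruvec() is easy to
misread: evict_pages() (via scan_pages()) decrements nr_to_scan by the
amount actually scanned, so the leftover has to be credited back. A toy
model with invented numbers:

    #include <stdio.h>

    static void fake_evict(long *nr_to_scan)    /* stands in for evict_pages() */
    {
            long batch = *nr_to_scan < 64 ? *nr_to_scan : 64;

            *nr_to_scan -= batch;       /* scan_pages() does this in the patch */
    }

    int main(void)
    {
            long scanned = 0;
            long budget = 160;          /* stands in for get_nr_to_scan() */

            while (1) {
                    long nr_to_scan = budget - scanned;

                    if (nr_to_scan <= 0)
                            break;
                    scanned += nr_to_scan;      /* optimistically count it all */
                    fake_evict(&nr_to_scan);
                    scanned -= nr_to_scan;      /* credit back the unscanned part */
            }
            printf("scanned %ld of %ld\n", scanned, budget);    /* 160 of 160 */
            return 0;
    }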