From patchwork Sun Sep 18 08:00:00 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 12979343
Date: Sun, 18 Sep 2022 02:00:00 -0600
In-Reply-To: <20220918080010.2920238-1-yuzhao@google.com>
Message-Id: <20220918080010.2920238-4-yuzhao@google.com>
References: <20220918080010.2920238-1-yuzhao@google.com>
X-Mailer: git-send-email 2.37.3.968.ga6b4b080e4-goog
Subject: [PATCH mm-unstable v15 03/14] mm/vmscan.c: refactor shrink_node()
From: Yu Zhao
To: Andrew Morton
Cc: Andi Kleen, Aneesh Kumar, Catalin Marinas, Dave Hansen, Hillf Danton,
 Jens Axboe, Johannes Weiner, Jonathan Corbet, Linus Torvalds,
 Matthew Wilcox, Mel Gorman, Michael Larabel, Michal Hocko, Mike Rapoport,
 Peter Zijlstra, Tejun Heo, Vlastimil Babka, Will Deacon,
 linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
 page-reclaim@google.com, Yu Zhao, Barry Song, Miaohe Lin, Brian Geffon,
 Jan Alexander Steffens, Oleksandr Natalenko, Steven Barrett,
 Suleiman Souhlal, Daniel Byrne, Donald Carr, Holger Hoffstätte,
 Konstantin Kharlamov, Shuang Zhai, Sofia Trinh, Vaibhav Jain

Refactor shrink_node() to improve readability for the upcoming changes
to mm/vmscan.c: move the scan-balance heuristics that shrink_node()
recomputes on every "again:" pass into a new helper,
prepare_scan_count().
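The shape of the move, as a compilable userspace toy -- the struct
members and function bodies below are stand-ins, not the kernel's; only
the calling structure mirrors the patch:

	/* Toy model: the per-iteration heuristics leave the retry loop. */
	#include <stdio.h>

	struct scan_control {		/* stand-in for the kernel's */
		int priority;
		int cache_trim_mode;
		unsigned long nr_reclaimed;
	};

	/* Was ~100 lines inlined in shrink_node(); this body is a stub. */
	static void prepare_scan_count(struct scan_control *sc)
	{
		sc->cache_trim_mode = sc->priority > 4;
	}

	static void shrink_node(struct scan_control *sc)
	{
		int rounds = 0;
	again:
		prepare_scan_count(sc);	/* one call replaces the inline block */
		sc->nr_reclaimed += 32;	/* pretend shrink_node_memcgs() ran */
		if (++rounds < 2)	/* stub for the real retry condition */
			goto again;
	}

	int main(void)
	{
		struct scan_control sc = { .priority = 10 };

		shrink_node(&sc);
		printf("reclaimed %lu, cache_trim_mode=%d\n",
		       sc.nr_reclaimed, sc.cache_trim_mode);
		return 0;
	}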
Signed-off-by: Yu Zhao
Reviewed-by: Barry Song
Reviewed-by: Miaohe Lin
Acked-by: Brian Geffon
Acked-by: Jan Alexander Steffens (heftig)
Acked-by: Oleksandr Natalenko
Acked-by: Steven Barrett
Acked-by: Suleiman Souhlal
Tested-by: Daniel Byrne
Tested-by: Donald Carr
Tested-by: Holger Hoffstätte
Tested-by: Konstantin Kharlamov
Tested-by: Shuang Zhai
Tested-by: Sofia Trinh
Tested-by: Vaibhav Jain
---
 mm/vmscan.c | 198 +++++++++++++++++++++++++++-------------------------
 1 file changed, 104 insertions(+), 94 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 992ba6a0bf10..0869cee13a90 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2728,6 +2728,109 @@ enum scan_balance {
 	SCAN_FILE,
 };
 
+static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc)
+{
+	unsigned long file;
+	struct lruvec *target_lruvec;
+
+	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
+
+	/*
+	 * Flush the memory cgroup stats, so that we read accurate per-memcg
+	 * lruvec stats for heuristics.
+	 */
+	mem_cgroup_flush_stats();
+
+	/*
+	 * Determine the scan balance between anon and file LRUs.
+	 */
+	spin_lock_irq(&target_lruvec->lru_lock);
+	sc->anon_cost = target_lruvec->anon_cost;
+	sc->file_cost = target_lruvec->file_cost;
+	spin_unlock_irq(&target_lruvec->lru_lock);
+
+	/*
+	 * Target desirable inactive:active list ratios for the anon
+	 * and file LRU lists.
+	 */
+	if (!sc->force_deactivate) {
+		unsigned long refaults;
+
+		/*
+		 * When refaults are being observed, it means a new
+		 * workingset is being established. Deactivate to get
+		 * rid of any stale active pages quickly.
+		 */
+		refaults = lruvec_page_state(target_lruvec,
+				WORKINGSET_ACTIVATE_ANON);
+		if (refaults != target_lruvec->refaults[WORKINGSET_ANON] ||
+			inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
+			sc->may_deactivate |= DEACTIVATE_ANON;
+		else
+			sc->may_deactivate &= ~DEACTIVATE_ANON;
+
+		refaults = lruvec_page_state(target_lruvec,
+				WORKINGSET_ACTIVATE_FILE);
+		if (refaults != target_lruvec->refaults[WORKINGSET_FILE] ||
+			inactive_is_low(target_lruvec, LRU_INACTIVE_FILE))
+			sc->may_deactivate |= DEACTIVATE_FILE;
+		else
+			sc->may_deactivate &= ~DEACTIVATE_FILE;
+	} else
+		sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE;
+
+	/*
+	 * If we have plenty of inactive file pages that aren't
+	 * thrashing, try to reclaim those first before touching
+	 * anonymous pages.
+	 */
+	file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
+	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
+		sc->cache_trim_mode = 1;
+	else
+		sc->cache_trim_mode = 0;
+
+	/*
+	 * Prevent the reclaimer from falling into the cache trap: as
+	 * cache pages start out inactive, every cache fault will tip
+	 * the scan balance towards the file LRU. And as the file LRU
+	 * shrinks, so does the window for rotation from references.
+	 * This means we have a runaway feedback loop where a tiny
+	 * thrashing file LRU becomes infinitely more attractive than
+	 * anon pages. Try to detect this based on file LRU size.
+	 */
+	if (!cgroup_reclaim(sc)) {
+		unsigned long total_high_wmark = 0;
+		unsigned long free, anon;
+		int z;
+
+		free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
+		file = node_page_state(pgdat, NR_ACTIVE_FILE) +
+			   node_page_state(pgdat, NR_INACTIVE_FILE);
+
+		for (z = 0; z < MAX_NR_ZONES; z++) {
+			struct zone *zone = &pgdat->node_zones[z];
+
+			if (!managed_zone(zone))
+				continue;
+
+			total_high_wmark += high_wmark_pages(zone);
+		}
+
+		/*
+		 * Consider anon: if that's low too, this isn't a
+		 * runaway file reclaim problem, but rather just
+		 * extreme pressure. Reclaim as per usual then.
+		 */
+		anon = node_page_state(pgdat, NR_INACTIVE_ANON);
+
+		sc->file_is_tiny =
+			file + free <= total_high_wmark &&
+			!(sc->may_deactivate & DEACTIVATE_ANON) &&
+			anon >> sc->priority;
+	}
+}
+
 /*
  * Determine how aggressively the anon and file LRU lists should be
  * scanned.
@@ -3195,109 +3298,16 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	unsigned long nr_reclaimed, nr_scanned;
 	struct lruvec *target_lruvec;
 	bool reclaimable = false;
-	unsigned long file;
 
 	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
 again:
-	/*
-	 * Flush the memory cgroup stats, so that we read accurate per-memcg
-	 * lruvec stats for heuristics.
-	 */
-	mem_cgroup_flush_stats();
-
 	memset(&sc->nr, 0, sizeof(sc->nr));
 
 	nr_reclaimed = sc->nr_reclaimed;
 	nr_scanned = sc->nr_scanned;
 
-	/*
-	 * Determine the scan balance between anon and file LRUs.
-	 */
-	spin_lock_irq(&target_lruvec->lru_lock);
-	sc->anon_cost = target_lruvec->anon_cost;
-	sc->file_cost = target_lruvec->file_cost;
-	spin_unlock_irq(&target_lruvec->lru_lock);
-
-	/*
-	 * Target desirable inactive:active list ratios for the anon
-	 * and file LRU lists.
-	 */
-	if (!sc->force_deactivate) {
-		unsigned long refaults;
-
-		/*
-		 * When refaults are being observed, it means a new
-		 * workingset is being established. Deactivate to get
-		 * rid of any stale active pages quickly.
-		 */
-		refaults = lruvec_page_state(target_lruvec,
-				WORKINGSET_ACTIVATE_ANON);
-		if (refaults != target_lruvec->refaults[WORKINGSET_ANON] ||
-			inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
-			sc->may_deactivate |= DEACTIVATE_ANON;
-		else
-			sc->may_deactivate &= ~DEACTIVATE_ANON;
-
-		refaults = lruvec_page_state(target_lruvec,
-				WORKINGSET_ACTIVATE_FILE);
-		if (refaults != target_lruvec->refaults[WORKINGSET_FILE] ||
-			inactive_is_low(target_lruvec, LRU_INACTIVE_FILE))
-			sc->may_deactivate |= DEACTIVATE_FILE;
-		else
-			sc->may_deactivate &= ~DEACTIVATE_FILE;
-	} else
-		sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE;
-
-	/*
-	 * If we have plenty of inactive file pages that aren't
-	 * thrashing, try to reclaim those first before touching
-	 * anonymous pages.
-	 */
-	file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
-	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
-		sc->cache_trim_mode = 1;
-	else
-		sc->cache_trim_mode = 0;
-
-	/*
-	 * Prevent the reclaimer from falling into the cache trap: as
-	 * cache pages start out inactive, every cache fault will tip
-	 * the scan balance towards the file LRU. And as the file LRU
-	 * shrinks, so does the window for rotation from references.
-	 * This means we have a runaway feedback loop where a tiny
-	 * thrashing file LRU becomes infinitely more attractive than
-	 * anon pages. Try to detect this based on file LRU size.
-	 */
-	if (!cgroup_reclaim(sc)) {
-		unsigned long total_high_wmark = 0;
-		unsigned long free, anon;
-		int z;
-
-		free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
-		file = node_page_state(pgdat, NR_ACTIVE_FILE) +
-			   node_page_state(pgdat, NR_INACTIVE_FILE);
-
-		for (z = 0; z < MAX_NR_ZONES; z++) {
-			struct zone *zone = &pgdat->node_zones[z];
-			if (!managed_zone(zone))
-				continue;
-
-			total_high_wmark += high_wmark_pages(zone);
-		}
-
-		/*
-		 * Consider anon: if that's low too, this isn't a
-		 * runaway file reclaim problem, but rather just
-		 * extreme pressure. Reclaim as per usual then.
-		 */
-		anon = node_page_state(pgdat, NR_INACTIVE_ANON);
-
-		sc->file_is_tiny =
-			file + free <= total_high_wmark &&
-			!(sc->may_deactivate & DEACTIVATE_ANON) &&
-			anon >> sc->priority;
-	}
+	prepare_scan_count(pgdat, sc);
 
 	shrink_node_memcgs(pgdat, sc);
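For reference, the two heuristics that move into prepare_scan_count()
can be exercised outside the kernel. A standalone sketch -- the
DEACTIVATE_* values and the sample page counts below are made up, and
only the two expressions mirror the patch:

	#include <stdbool.h>
	#include <stdio.h>

	#define DEACTIVATE_FILE	(1 << 0)	/* made-up encodings */
	#define DEACTIVATE_ANON	(1 << 1)

	int main(void)
	{
		/* Sample inputs, in pages -- purely illustrative. */
		unsigned long inactive_file = 4096;	/* NR_INACTIVE_FILE */
		unsigned long active_file = 1024;	/* NR_ACTIVE_FILE */
		unsigned long inactive_anon = 8192;	/* NR_INACTIVE_ANON */
		unsigned long nr_free = 2048;		/* NR_FREE_PAGES */
		unsigned long total_high_wmark = 16384;	/* zone watermark sum */
		unsigned int may_deactivate = 0;	/* nothing is thrashing */
		int priority = 10;	/* reclaim priority; lower scans harder */

		/*
		 * cache_trim_mode: "file >> priority" asks whether the
		 * inactive file list is still sizable relative to scan
		 * intensity; if so, and file pages are not thrashing,
		 * trim cache before touching anon.
		 */
		bool cache_trim_mode = (inactive_file >> priority) &&
				       !(may_deactivate & DEACTIVATE_FILE);

		/*
		 * file_is_tiny: the whole file LRU plus free memory fits
		 * under the high watermarks while inactive anon is still
		 * sizable -- the "cache trap" the comment above describes.
		 */
		unsigned long file = active_file + inactive_file;
		bool file_is_tiny = file + nr_free <= total_high_wmark &&
				    !(may_deactivate & DEACTIVATE_ANON) &&
				    (inactive_anon >> priority);

		printf("cache_trim_mode=%d file_is_tiny=%d\n",
		       cache_trim_mode, file_is_tiny);
		return 0;
	}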