From patchwork Tue Jan 5 22:58:17 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12000477
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com,
 david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com,
 akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [v3 PATCH 11/11] mm: vmscan: shrink deferred objects proportional to priority
Date: Tue, 5 Jan 2021 14:58:17 -0800
Message-Id: <20210105225817.1036378-12-shy828301@gmail.com>
In-Reply-To: <20210105225817.1036378-1-shy828301@gmail.com>
References: <20210105225817.1036378-1-shy828301@gmail.com>

The number of deferred objects might wind up to an absurd number, and
when a shrink that can make progress finally comes along it empties the
entire cache because nr far exceeds freeable. This is undesirable for
sustaining the workingset. So shrink the deferred objects in proportion
to the reclaim priority, and cap nr_deferred at twice the number of
cache items.

The idea is borrowed from Dave Chinner's patch:
https://lore.kernel.org/linux-xfs/20191031234618.15403-13-david@fromorbit.com/

Tested with kernel build and a vfs-metadata-heavy workload; no
regression has been spotted so far.

Signed-off-by: Yang Shi
---
 mm/vmscan.c | 40 +++++-----------------------------------
 1 file changed, 5 insertions(+), 35 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 71056057d26d..6832f1d24d2b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -671,7 +671,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 */
 	nr = count_nr_deferred(shrinker, shrinkctl);
 
-	total_scan = nr;
 	if (shrinker->seeks) {
 		delta = freeable >> priority;
 		delta *= 4;
@@ -685,37 +684,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		delta = freeable / 2;
 	}
 
+	total_scan = nr >> priority;
 	total_scan += delta;
-	if (total_scan < 0) {
-		pr_err("shrink_slab: %pS negative objects to delete nr=%ld\n",
-		       shrinker->scan_objects, total_scan);
-		total_scan = freeable;
-		next_deferred = nr;
-	} else
-		next_deferred = total_scan;
-
-	/*
-	 * We need to avoid excessive windup on filesystem shrinkers
-	 * due to large numbers of GFP_NOFS allocations causing the
-	 * shrinkers to return -1 all the time. This results in a large
-	 * nr being built up so when a shrink that can do some work
-	 * comes along it empties the entire cache due to nr >>>
-	 * freeable. This is bad for sustaining a working set in
-	 * memory.
-	 *
-	 * Hence only allow the shrinker to scan the entire cache when
-	 * a large delta change is calculated directly.
-	 */
-	if (delta < freeable / 4)
-		total_scan = min(total_scan, freeable / 2);
-
-	/*
-	 * Avoid risking looping forever due to too large nr value:
-	 * never try to free more than twice the estimate number of
-	 * freeable entries.
-	 */
-	if (total_scan > freeable * 2)
-		total_scan = freeable * 2;
+	total_scan = min(total_scan, (2 * freeable));
 
 	trace_mm_shrink_slab_start(shrinker, shrinkctl, nr, freeable, delta,
 				   total_scan, priority);
@@ -754,10 +725,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		cond_resched();
 	}
 
-	if (next_deferred >= scanned)
-		next_deferred -= scanned;
-	else
-		next_deferred = 0;
+	next_deferred = max_t(long, (nr - scanned), 0) + total_scan;
+	next_deferred = min(next_deferred, (2 * freeable));
+
 	/*
 	 * move the unused scan count back into the shrinker in a
 	 * manner that handles concurrent updates.
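
For reference, the net effect of the new arithmetic can be reproduced
with a small standalone harness. This is a sketch, not kernel code: the
MIN/MAX macros and the demo values (freeable, nr, priority, scanned)
are assumptions made up for illustration; only the discount and the
two caps mirror the patch.

/*
 * Standalone sketch of the deferred-work arithmetic above, built
 * around a hypothetical windup scenario: nr is 10x freeable.
 */
#include <stdio.h>

#define MIN(a, b)	((a) < (b) ? (a) : (b))
#define MAX(a, b)	((a) > (b) ? (a) : (b))

int main(void)
{
	long freeable = 10000;	/* objects the shrinker could free */
	long nr = 100000;	/* deferred scan count built up so far */
	int priority = 12;	/* DEF_PRIORITY, i.e. light pressure */
	long scanned = 256;	/* objects this invocation scanned */

	/* delta as in the patch, for seeks == DEFAULT_SEEKS (2) */
	long delta = (freeable >> priority) * 4 / 2;

	/* new: deferred work is discounted by priority, not taken whole */
	long total_scan = (nr >> priority) + delta;
	total_scan = MIN(total_scan, 2 * freeable);

	/* new: leftover deferral carries forward, capped at 2 * freeable */
	long next_deferred = MAX(nr - scanned, 0) + total_scan;
	next_deferred = MIN(next_deferred, 2 * freeable);

	/* prints: total_scan=28 next_deferred=20000 */
	printf("total_scan=%ld next_deferred=%ld\n",
	       total_scan, next_deferred);
	return 0;
}

With these numbers the old code carried essentially the whole
nr + delta - scanned backlog forward uncapped, so nr kept growing under
GFP_NOFS pressure until some pass could empty the cache; the new cap
bounds next_deferred at 2 * freeable (here 20000) while a light-pressure
pass scans only 28 objects.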