From patchwork Mon Dec 14 22:37:22 2020
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 11973303
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com,
	david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com,
	akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [v2 PATCH 9/9] mm: vmscan: shrink deferred objects proportional to priority
Date: Mon, 14 Dec 2020 14:37:22 -0800
Message-Id: <20201214223722.232537-10-shy828301@gmail.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201214223722.232537-1-shy828301@gmail.com>
References: <20201214223722.232537-1-shy828301@gmail.com>

The number of deferred objects might wind up to an absurd value, which
results in the entire slab cache being reclaimed when a shrink finally
makes progress. That is undesirable for sustaining the working set. So
shrink deferred objects proportionally to the reclaim priority and cap
nr_deferred at twice the number of freeable cache items.

Signed-off-by: Yang Shi
---
 mm/vmscan.c | 40 +++++-----------------------------------
 1 file changed, 5 insertions(+), 35 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 693a41e89969..58f4a383f0df 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -525,7 +525,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 */
 	nr = count_nr_deferred(shrinker, shrinkctl);
 
-	total_scan = nr;
 	if (shrinker->seeks) {
 		delta = freeable >> priority;
 		delta *= 4;
@@ -539,37 +538,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		delta = freeable / 2;
 	}
 
+	total_scan = nr >> priority;
 	total_scan += delta;
-	if (total_scan < 0) {
-		pr_err("shrink_slab: %pS negative objects to delete nr=%ld\n",
-		       shrinker->scan_objects, total_scan);
-		total_scan = freeable;
-		next_deferred = nr;
-	} else
-		next_deferred = total_scan;
-
-	/*
-	 * We need to avoid excessive windup on filesystem shrinkers
-	 * due to large numbers of GFP_NOFS allocations causing the
-	 * shrinkers to return -1 all the time. This results in a large
-	 * nr being built up so when a shrink that can do some work
-	 * comes along it empties the entire cache due to nr >>>
-	 * freeable. This is bad for sustaining a working set in
-	 * memory.
-	 *
-	 * Hence only allow the shrinker to scan the entire cache when
-	 * a large delta change is calculated directly.
-	 */
-	if (delta < freeable / 4)
-		total_scan = min(total_scan, freeable / 2);
-
-	/*
-	 * Avoid risking looping forever due to too large nr value:
-	 * never try to free more than twice the estimate number of
-	 * freeable entries.
-	 */
-	if (total_scan > freeable * 2)
-		total_scan = freeable * 2;
+	total_scan = min(total_scan, (2 * freeable));
 
 	trace_mm_shrink_slab_start(shrinker, shrinkctl, nr,
 				   freeable, delta, total_scan, priority);
@@ -608,10 +579,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		cond_resched();
 	}
 
-	if (next_deferred >= scanned)
-		next_deferred -= scanned;
-	else
-		next_deferred = 0;
+	next_deferred = max_t(long, (nr - scanned), 0) + total_scan;
+	next_deferred = min(next_deferred, (2 * freeable));
+
 	/*
 	 * move the unused scan count back into the shrinker in a
 	 * manner that handles concurrent updates.
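
For illustration only, below is a minimal userspace sketch of the arithmetic
after this patch. It is not kernel code: the helper names, the sample
seeks/freeable/scanned values, and the use of plain division instead of
do_div() are made up for the example. It shows that a huge deferred count
(nr) now contributes only nr >> priority to one round of scanning, and that
both total_scan and the carried-over next_deferred are capped at twice the
number of freeable objects:

#include <stdio.h>

/* Plain C stand-ins for the kernel's min()/max_t() helpers. */
static long min_long(long a, long b) { return a < b ? a : b; }
static long max_long(long a, long b) { return a > b ? a : b; }

/*
 * One round of the deferred-object accounting sketched above.
 * priority follows the kernel convention: it starts at 12 and
 * drops toward 0 as reclaim pressure increases.
 */
static void shrink_round(long freeable, long nr, int priority, int seeks,
			 long scanned)
{
	long delta, total_scan, next_deferred;

	if (seeks)
		delta = (freeable >> priority) * 4 / seeks;
	else
		delta = freeable / 2;

	/* Deferred work is applied proportionally to priority ... */
	total_scan = (nr >> priority) + delta;
	/* ... and never exceeds twice the freeable objects. */
	total_scan = min_long(total_scan, 2 * freeable);

	/* Carry forward the unscanned remainder, with the same cap. */
	next_deferred = max_long(nr - scanned, 0) + total_scan;
	next_deferred = min_long(next_deferred, 2 * freeable);

	printf("prio=%2d nr=%8ld -> total_scan=%6ld next_deferred=%6ld\n",
	       priority, nr, total_scan, next_deferred);
}

int main(void)
{
	/* An absurdly wound-up nr against a 10k-object cache. */
	shrink_round(10000, 1000000, 12, 2, 128);	/* light pressure */
	shrink_round(10000, 1000000, 2, 2, 8192);	/* heavy pressure */
	return 0;
}

In both calls next_deferred comes out capped at 20000 (twice freeable),
whereas the removed code would have carried essentially the whole
million-object windup forward to the next shrink.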