From patchwork Wed Feb 17 00:13:22 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090793
From: Yang Shi <shy828301@gmail.com>
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com,
 david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com,
 akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [v8 PATCH 13/13] mm: vmscan: shrink deferred objects proportional to priority
Date: Tue, 16 Feb 2021 16:13:22 -0800
Message-Id: <20210217001322.2226796-14-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

The number of deferred objects might wind up to an absurd number, which
results in the slab caches being clamped when the deferred work finally
runs. This is undesirable for sustaining the working set. So shrink the
deferred objects proportionally to the reclaim priority and cap
nr_deferred to twice the number of freeable cache items.

The idea is borrowed from Dave Chinner's patch:
https://lore.kernel.org/linux-xfs/20191031234618.15403-13-david@fromorbit.com/

Tested with a kernel build and a VFS-metadata-heavy workload in our
production environment; no regression has been spotted so far.

Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/vmscan.c | 46 +++++++++++-----------------------------------
 1 file changed, 11 insertions(+), 35 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4247a3568585..b3bdc3ba8edc 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -661,7 +661,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 */
 	nr = xchg_nr_deferred(shrinker, shrinkctl);
 
-	total_scan = nr;
 	if (shrinker->seeks) {
 		delta = freeable >> priority;
 		delta *= 4;
@@ -675,37 +674,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		delta = freeable / 2;
 	}
 
+	total_scan = nr >> priority;
 	total_scan += delta;
-	if (total_scan < 0) {
-		pr_err("shrink_slab: %pS negative objects to delete nr=%ld\n",
-		       shrinker->scan_objects, total_scan);
-		total_scan = freeable;
-		next_deferred = nr;
-	} else
-		next_deferred = total_scan;
-
-	/*
-	 * We need to avoid excessive windup on filesystem shrinkers
-	 * due to large numbers of GFP_NOFS allocations causing the
-	 * shrinkers to return -1 all the time. This results in a large
-	 * nr being built up so when a shrink that can do some work
-	 * comes along it empties the entire cache due to nr >>>
-	 * freeable. This is bad for sustaining a working set in
-	 * memory.
-	 *
-	 * Hence only allow the shrinker to scan the entire cache when
-	 * a large delta change is calculated directly.
-	 */
-	if (delta < freeable / 4)
-		total_scan = min(total_scan, freeable / 2);
-
-	/*
-	 * Avoid risking looping forever due to too large nr value:
-	 * never try to free more than twice the estimate number of
-	 * freeable entries.
-	 */
-	if (total_scan > freeable * 2)
-		total_scan = freeable * 2;
+	total_scan = min(total_scan, (2 * freeable));
 
 	trace_mm_shrink_slab_start(shrinker, shrinkctl, nr, freeable, delta,
 				   total_scan, priority);
@@ -744,10 +715,15 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		cond_resched();
 	}
 
-	if (next_deferred >= scanned)
-		next_deferred -= scanned;
-	else
-		next_deferred = 0;
+	/*
+	 * The deferred work is increased by any new work (delta) that wasn't
+	 * done, decreased by old deferred work that was done now.
+	 *
+	 * And it is capped to two times of the freeable items.
+	 */
+	next_deferred = max_t(long, (nr + delta - scanned), 0);
+	next_deferred = min(next_deferred, (2 * freeable));
+
 	/*
 	 * move the unused scan count back into the shrinker in a
 	 * manner that handles concurrent updates.
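
For readers without the kernel tree at hand, the arithmetic this patch
introduces can be illustrated with a minimal standalone sketch. This is
not the kernel code: the names nr, delta, freeable, priority and scanned
mirror the variables in do_shrink_slab(), the sample values are invented,
and delta is simplified by assuming shrinker->seeks == DEFAULT_SEEKS (2).

#include <stdio.h>

static long lmin(long a, long b) { return a < b ? a : b; }
static long lmax(long a, long b) { return a > b ? a : b; }

int main(void)
{
	long freeable = 10000;  /* objects the shrinker reports as freeable */
	long nr = 90000;        /* wound-up deferred work from prior passes */
	long scanned = 2000;    /* objects actually scanned in this pass */
	int priority = 4;       /* reclaim priority; lower means more pressure */

	/* new work: (freeable >> priority) * 4 / seeks, with seeks == 2 */
	long delta = (freeable >> priority) * 2;

	/* deferred work is now scaled down by priority rather than taken
	 * wholesale, then capped to twice the freeable objects */
	long total_scan = (nr >> priority) + delta;
	total_scan = lmin(total_scan, 2 * freeable);

	/* carry forward only the work that was not done, capped again */
	long next_deferred = lmax(nr + delta - scanned, 0);
	next_deferred = lmin(next_deferred, 2 * freeable);

	/* prints total_scan=6875 next_deferred=20000; the old code would
	 * have carried next_deferred = 89250 forward uncapped */
	printf("total_scan=%ld next_deferred=%ld\n", total_scan, next_deferred);
	return 0;
}

The cap is what prevents the windup described in the changelog: no matter
how large nr grows, at most two full passes over the cache are ever owed.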