From patchwork Wed Mar 10 17:46:00 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12128827
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com,
    david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com,
    akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [v9 PATCH 10/13] mm: vmscan: use per memcg nr_deferred of shrinker
Date: Wed, 10 Mar 2021 09:46:00 -0800
Message-Id: <20210310174603.5093-11-shy828301@gmail.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210310174603.5093-1-shy828301@gmail.com>
References: <20210310174603.5093-1-shy828301@gmail.com>

Use the per-memcg nr_deferred for memcg-aware shrinkers.  The shrinker's own
nr_deferred is still used in the following cases:
    1. Non memcg aware shrinkers
    2. !CONFIG_MEMCG
    3. memcg is disabled by boot parameter
Acked-by: Roman Gushchin
Acked-by: Kirill Tkhai
Reviewed-by: Shakeel Butt
Signed-off-by: Yang Shi
---
 mm/vmscan.c | 78 ++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 66 insertions(+), 12 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index ae82afe6cec6..326f0e0c4356 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -374,6 +374,24 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker)
 	idr_remove(&shrinker_idr, id);
 }
 
+static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker,
+				   struct mem_cgroup *memcg)
+{
+	struct shrinker_info *info;
+
+	info = shrinker_info_protected(memcg, nid);
+	return atomic_long_xchg(&info->nr_deferred[shrinker->id], 0);
+}
+
+static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
+				  struct mem_cgroup *memcg)
+{
+	struct shrinker_info *info;
+
+	info = shrinker_info_protected(memcg, nid);
+	return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]);
+}
+
 static bool cgroup_reclaim(struct scan_control *sc)
 {
 	return sc->target_mem_cgroup;
@@ -412,6 +430,18 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker)
 {
 }
 
+static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker,
+				   struct mem_cgroup *memcg)
+{
+	return 0;
+}
+
+static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
+				  struct mem_cgroup *memcg)
+{
+	return 0;
+}
+
 static bool cgroup_reclaim(struct scan_control *sc)
 {
 	return false;
@@ -423,6 +453,39 @@ static bool writeback_throttling_sane(struct scan_control *sc)
 }
 #endif
 
+static long xchg_nr_deferred(struct shrinker *shrinker,
+			     struct shrink_control *sc)
+{
+	int nid = sc->nid;
+
+	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
+		nid = 0;
+
+	if (sc->memcg &&
+	    (shrinker->flags & SHRINKER_MEMCG_AWARE))
+		return xchg_nr_deferred_memcg(nid, shrinker,
+					      sc->memcg);
+
+	return atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
+}
+
+
+static long add_nr_deferred(long nr, struct shrinker *shrinker,
+			    struct shrink_control *sc)
+{
+	int nid = sc->nid;
+
+	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
+		nid = 0;
+
+	if (sc->memcg &&
+	    (shrinker->flags & SHRINKER_MEMCG_AWARE))
+		return add_nr_deferred_memcg(nr, nid, shrinker,
+					     sc->memcg);
+
+	return atomic_long_add_return(nr, &shrinker->nr_deferred[nid]);
+}
+
 /*
  * This misses isolated pages which are not accounted for to save counters.
  * As the data only determines if reclaim or compaction continues, it is
@@ -559,14 +622,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	long freeable;
 	long nr;
 	long new_nr;
-	int nid = shrinkctl->nid;
 	long batch_size = shrinker->batch ? shrinker->batch
 					  : SHRINK_BATCH;
 	long scanned = 0, next_deferred;
 
-	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
-		nid = 0;
-
 	freeable = shrinker->count_objects(shrinker, shrinkctl);
 	if (freeable == 0 || freeable == SHRINK_EMPTY)
 		return freeable;
@@ -576,7 +635,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 * and zero it so that other concurrent shrinker invocations
 	 * don't also do this scanning work.
 	 */
-	nr = atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
+	nr = xchg_nr_deferred(shrinker, shrinkctl);
 
 	total_scan = nr;
 	if (shrinker->seeks) {
@@ -667,14 +726,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		next_deferred = 0;
 	/*
 	 * move the unused scan count back into the shrinker in a
-	 * manner that handles concurrent updates. If we exhausted the
-	 * scan, there is no need to do an update.
+	 * manner that handles concurrent updates.
 	 */
-	if (next_deferred > 0)
-		new_nr = atomic_long_add_return(next_deferred,
-						&shrinker->nr_deferred[nid]);
-	else
-		new_nr = atomic_long_read(&shrinker->nr_deferred[nid]);
+	new_nr = add_nr_deferred(next_deferred, shrinker, shrinkctl);
 
 	trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan);
 	return freed;
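
A note for readers (not part of the patch itself): the sketch below shows the kind of
shrinker that ends up on the new per-memcg nr_deferred path. The foo_* names and the
stubbed-out bodies are hypothetical and only illustrate the dispatch rule described in
the changelog: a shrinker gets per-memcg deferred accounting only when it sets
SHRINKER_MEMCG_AWARE and the shrink_control carries a memcg; in every other case
do_shrink_slab() keeps using shrinker->nr_deferred[nid] exactly as before.

#include <linux/shrinker.h>
#include <linux/memcontrol.h>

/* Count reclaimable "foo" objects charged to sc->memcg (stub). */
static unsigned long foo_count_objects(struct shrinker *shrink,
				       struct shrink_control *sc)
{
	return 0;	/* returning 0 (or SHRINK_EMPTY) skips the scan */
}

/* Free up to sc->nr_to_scan objects and return how many were freed (stub). */
static unsigned long foo_scan_objects(struct shrinker *shrink,
				      struct shrink_control *sc)
{
	return SHRINK_STOP;
}

static struct shrinker foo_shrinker = {
	.count_objects	= foo_count_objects,
	.scan_objects	= foo_scan_objects,
	.seeks		= DEFAULT_SEEKS,
	/*
	 * SHRINKER_MEMCG_AWARE is what routes this shrinker's deferred
	 * count through xchg_nr_deferred_memcg()/add_nr_deferred_memcg()
	 * when sc->memcg is set; SHRINKER_NUMA_AWARE keeps the accounting
	 * per node as before.
	 */
	.flags		= SHRINKER_MEMCG_AWARE | SHRINKER_NUMA_AWARE,
};

/* register_shrinker(&foo_shrinker) would be called from the owner's init path. */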