From patchwork Tue Jan 5 22:58:07 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12000457
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 01/11] mm: vmscan: use nid from shrink_control for tracepoint
Date: Tue, 5 Jan 2021 14:58:07 -0800
Message-Id: <20210105225817.1036378-2-shy828301@gmail.com>
In-Reply-To: <20210105225817.1036378-1-shy828301@gmail.com>
References: <20210105225817.1036378-1-shy828301@gmail.com>

The tracepoint's nid should show which node the shrink happens on. The start
tracepoint uses the nid from shrinkctl, but the local nid may be reset to 0
before the end tracepoint fires if the shrinker is not NUMA aware, so the
tracing log can show a shrink starting on one node but ending on another, which
is confusing. The following patch will also stop using nid directly in
do_shrink_slab(), so this change helps clean up the code.
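To make the divergence described above easier to see, here is a small
self-contained sketch (illustration only; do_shrink_slab_sketch() and the flag
value are simplified stand-ins, not the kernel implementation):

  /*
   * Simplified illustration: a non-NUMA-aware shrinker has its local nid
   * forced to 0 after the start event fires, so only shrinkctl->nid stays
   * stable for the end event.
   */
  #include <stdio.h>

  #define SHRINKER_NUMA_AWARE (1 << 0)   /* hypothetical flag value */

  struct shrink_control { int nid; };

  static void do_shrink_slab_sketch(struct shrink_control *sc, unsigned int flags)
  {
          int nid = sc->nid;              /* what the start tracepoint sees */

          if (!(flags & SHRINKER_NUMA_AWARE))
                  nid = 0;                /* local copy gets clobbered */

          printf("start nid=%d, end nid=%d, stable shrinkctl->nid=%d\n",
                 sc->nid, nid, sc->nid);
  }

  int main(void)
  {
          struct shrink_control sc = { .nid = 1 };

          do_shrink_slab_sketch(&sc, 0);  /* a shrinker that is not NUMA aware */
          return 0;
  }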
Signed-off-by: Yang Shi --- mm/vmscan.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 257cba79a96d..cb24ef952efc 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -535,7 +535,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, else new_nr = atomic_long_read(&shrinker->nr_deferred[nid]); - trace_mm_shrink_slab_end(shrinker, nid, freed, nr, new_nr, total_scan); + trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan); return freed; }

From patchwork Tue Jan 5 22:58:08 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12000459
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 02/11] mm: vmscan: consolidate shrinker_maps handling code
Date: Tue, 5 Jan 2021 14:58:08 -0800
Message-Id: <20210105225817.1036378-3-shy828301@gmail.com>
In-Reply-To: <20210105225817.1036378-1-shy828301@gmail.com>
References: <20210105225817.1036378-1-shy828301@gmail.com>

The shrinker map management is not really memcg specific; it is just allocation
and assignment of a structure, and the only memcg-specific bit is that the map
is stored in a memcg structure. So move the shrinker_maps handling code into
vmscan.c for tighter integration with the shrinker code. There is no functional
change.
Signed-off-by: Yang Shi --- include/linux/memcontrol.h | 4 +- mm/memcontrol.c | 124 ------------------------------------ mm/vmscan.c | 126 +++++++++++++++++++++++++++++++++++++ 3 files changed, 128 insertions(+), 126 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index d827bd7f3bfe..d128d2842f22 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1581,8 +1581,8 @@ static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg) return false; } -extern int memcg_expand_shrinker_maps(int new_id); - +extern int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg); +extern void memcg_free_shrinker_maps(struct mem_cgroup *memcg); extern void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id); #else diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 605f671203ef..817dde366258 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -397,130 +397,6 @@ DEFINE_STATIC_KEY_FALSE(memcg_kmem_enabled_key); EXPORT_SYMBOL(memcg_kmem_enabled_key); #endif -static int memcg_shrinker_map_size; -static DEFINE_MUTEX(memcg_shrinker_map_mutex); - -static void memcg_free_shrinker_map_rcu(struct rcu_head *head) -{ - kvfree(container_of(head, struct memcg_shrinker_map, rcu)); -} - -static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg, - int size, int old_size) -{ - struct memcg_shrinker_map *new, *old; - int nid; - - lockdep_assert_held(&memcg_shrinker_map_mutex); - - for_each_node(nid) { - old = rcu_dereference_protected( - mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true); - /* Not yet online memcg */ - if (!old) - return 0; - - new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, nid); - if (!new) - return -ENOMEM; - - /* Set all old bits, clear all new bits */ - memset(new->map, (int)0xff, old_size); - memset((void *)new->map + old_size, 0, size - old_size); - - rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, new); - call_rcu(&old->rcu, memcg_free_shrinker_map_rcu); - } - - return 0; -} - -static void memcg_free_shrinker_maps(struct mem_cgroup *memcg) -{ - struct mem_cgroup_per_node *pn; - struct memcg_shrinker_map *map; - int nid; - - if (mem_cgroup_is_root(memcg)) - return; - - for_each_node(nid) { - pn = mem_cgroup_nodeinfo(memcg, nid); - map = rcu_dereference_protected(pn->shrinker_map, true); - if (map) - kvfree(map); - rcu_assign_pointer(pn->shrinker_map, NULL); - } -} - -static int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg) -{ - struct memcg_shrinker_map *map; - int nid, size, ret = 0; - - if (mem_cgroup_is_root(memcg)) - return 0; - - mutex_lock(&memcg_shrinker_map_mutex); - size = memcg_shrinker_map_size; - for_each_node(nid) { - map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid); - if (!map) { - memcg_free_shrinker_maps(memcg); - ret = -ENOMEM; - break; - } - rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map); - } - mutex_unlock(&memcg_shrinker_map_mutex); - - return ret; -} - -int memcg_expand_shrinker_maps(int new_id) -{ - int size, old_size, ret = 0; - struct mem_cgroup *memcg; - - size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long); - old_size = memcg_shrinker_map_size; - if (size <= old_size) - return 0; - - mutex_lock(&memcg_shrinker_map_mutex); - if (!root_mem_cgroup) - goto unlock; - - for_each_mem_cgroup(memcg) { - if (mem_cgroup_is_root(memcg)) - continue; - ret = memcg_expand_one_shrinker_map(memcg, size, old_size); - if (ret) { - mem_cgroup_iter_break(NULL, memcg); - goto unlock; - } - } -unlock: - if (!ret) - 
memcg_shrinker_map_size = size; - mutex_unlock(&memcg_shrinker_map_mutex); - return ret; -} - -void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id) -{ - if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) { - struct memcg_shrinker_map *map; - - rcu_read_lock(); - map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map); - /* Pairs with smp mb in shrink_slab() */ - smp_mb__before_atomic(); - set_bit(shrinker_id, map->map); - rcu_read_unlock(); - } -} - /** * mem_cgroup_css_from_page - css of the memcg associated with a page * @page: page of interest diff --git a/mm/vmscan.c b/mm/vmscan.c index cb24ef952efc..9db7b4d6d0ae 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -185,6 +185,132 @@ static LIST_HEAD(shrinker_list); static DECLARE_RWSEM(shrinker_rwsem); #ifdef CONFIG_MEMCG + +static int memcg_shrinker_map_size; +static DEFINE_MUTEX(memcg_shrinker_map_mutex); + +static void memcg_free_shrinker_map_rcu(struct rcu_head *head) +{ + kvfree(container_of(head, struct memcg_shrinker_map, rcu)); +} + +static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg, + int size, int old_size) +{ + struct memcg_shrinker_map *new, *old; + int nid; + + lockdep_assert_held(&memcg_shrinker_map_mutex); + + for_each_node(nid) { + old = rcu_dereference_protected( + mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true); + /* Not yet online memcg */ + if (!old) + return 0; + + new = kvmalloc(sizeof(*new) + size, GFP_KERNEL); + if (!new) + return -ENOMEM; + + /* Set all old bits, clear all new bits */ + memset(new->map, (int)0xff, old_size); + memset((void *)new->map + old_size, 0, size - old_size); + + rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, new); + call_rcu(&old->rcu, memcg_free_shrinker_map_rcu); + } + + return 0; +} + +void memcg_free_shrinker_maps(struct mem_cgroup *memcg) +{ + struct mem_cgroup_per_node *pn; + struct memcg_shrinker_map *map; + int nid; + + if (mem_cgroup_is_root(memcg)) + return; + + for_each_node(nid) { + pn = mem_cgroup_nodeinfo(memcg, nid); + map = rcu_dereference_protected(pn->shrinker_map, true); + if (map) + kvfree(map); + rcu_assign_pointer(pn->shrinker_map, NULL); + } +} + +int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg) +{ + struct memcg_shrinker_map *map; + int nid, size, ret = 0; + + if (mem_cgroup_is_root(memcg)) + return 0; + + mutex_lock(&memcg_shrinker_map_mutex); + size = memcg_shrinker_map_size; + for_each_node(nid) { + map = kvzalloc(sizeof(*map) + size, GFP_KERNEL); + if (!map) { + memcg_free_shrinker_maps(memcg); + ret = -ENOMEM; + break; + } + rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map); + } + mutex_unlock(&memcg_shrinker_map_mutex); + + return ret; +} + +static int memcg_expand_shrinker_maps(int new_id) +{ + int size, old_size, ret = 0; + struct mem_cgroup *memcg; + + size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long); + old_size = memcg_shrinker_map_size; + if (size <= old_size) + return 0; + + mutex_lock(&memcg_shrinker_map_mutex); + if (!root_mem_cgroup) + goto unlock; + + memcg = mem_cgroup_iter(NULL, NULL, NULL); + do { + if (mem_cgroup_is_root(memcg)) + continue; + ret = memcg_expand_one_shrinker_map(memcg, size, old_size); + if (ret) { + mem_cgroup_iter_break(NULL, memcg); + goto unlock; + } + } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL); +unlock: + if (!ret) + memcg_shrinker_map_size = size; + mutex_unlock(&memcg_shrinker_map_mutex); + return ret; +} + +void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id) +{ + if 
(shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) { + struct memcg_shrinker_map *map; + + rcu_read_lock(); + map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map); + /* Pairs with smp mb in shrink_slab() */ + smp_mb__before_atomic(); + set_bit(shrinker_id, map->map); + rcu_read_unlock(); + } +} + /* * We allow subsystems to populate their shrinker-related * LRU lists before register_shrinker_prepared() is called

From patchwork Tue Jan 5 22:58:09 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12000461
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 03/11] mm: vmscan: use shrinker_rwsem to protect shrinker_maps allocation
Date: Tue, 5 Jan 2021 14:58:09 -0800
Message-Id: <20210105225817.1036378-4-shy828301@gmail.com>
In-Reply-To: <20210105225817.1036378-1-shy828301@gmail.com>
References: <20210105225817.1036378-1-shy828301@gmail.com>

Since memcg_shrinker_map_size can only be changed while shrinker_rwsem is held
exclusively, the read side can be protected by holding the read lock, so a
dedicated mutex is superfluous. This should not exacerbate contention on
shrinker_rwsem since only one read-side critical section is added.
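To illustrate the locking rule this relies on, here is a small userspace sketch
(pthreads stand in for the kernel rwsem; all names here are made up for the
illustration): the size is only ever written with the write lock held, so a
reader holding the read lock observes a stable value and needs no extra mutex.

  #include <pthread.h>
  #include <stdio.h>

  static pthread_rwlock_t shrinker_lock = PTHREAD_RWLOCK_INITIALIZER;
  static int map_size;

  /* Writer side: the only path that changes map_size, always exclusive. */
  static void expand_maps(int new_size)
  {
          pthread_rwlock_wrlock(&shrinker_lock);
          if (new_size > map_size)
                  map_size = new_size;
          pthread_rwlock_unlock(&shrinker_lock);
  }

  /* Reader side: the shared lock is enough to observe a stable map_size. */
  static int alloc_maps(void)
  {
          int size;

          pthread_rwlock_rdlock(&shrinker_lock);
          size = map_size;
          pthread_rwlock_unlock(&shrinker_lock);
          return size;
  }

  int main(void)
  {
          expand_maps(16);
          printf("allocating per-node maps of %d bytes\n", alloc_maps());
          return 0;
  }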
Signed-off-by: Yang Shi --- mm/vmscan.c | 16 ++++++---------- 1 file changed, 6 insertions(+), 10 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 9db7b4d6d0ae..ddb9f972f856 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -187,7 +187,6 @@ static DECLARE_RWSEM(shrinker_rwsem); #ifdef CONFIG_MEMCG static int memcg_shrinker_map_size; -static DEFINE_MUTEX(memcg_shrinker_map_mutex); static void memcg_free_shrinker_map_rcu(struct rcu_head *head) { @@ -200,8 +199,6 @@ static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg, struct memcg_shrinker_map *new, *old; int nid; - lockdep_assert_held(&memcg_shrinker_map_mutex); - for_each_node(nid) { old = rcu_dereference_protected( mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true); @@ -250,7 +247,7 @@ int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg) if (mem_cgroup_is_root(memcg)) return 0; - mutex_lock(&memcg_shrinker_map_mutex); + down_read(&shrinker_rwsem); size = memcg_shrinker_map_size; for_each_node(nid) { map = kvzalloc(sizeof(*map) + size, GFP_KERNEL); @@ -261,7 +258,7 @@ int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg) } rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map); } - mutex_unlock(&memcg_shrinker_map_mutex); + up_read(&shrinker_rwsem); return ret; } @@ -276,9 +273,8 @@ static int memcg_expand_shrinker_maps(int new_id) if (size <= old_size) return 0; - mutex_lock(&memcg_shrinker_map_mutex); if (!root_mem_cgroup) - goto unlock; + goto out; memcg = mem_cgroup_iter(NULL, NULL, NULL); do { @@ -287,13 +283,13 @@ static int memcg_expand_shrinker_maps(int new_id) ret = memcg_expand_one_shrinker_map(memcg, size, old_size); if (ret) { mem_cgroup_iter_break(NULL, memcg); - goto unlock; + goto out; } } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL); -unlock: +out: if (!ret) memcg_shrinker_map_size = size; - mutex_unlock(&memcg_shrinker_map_mutex); + return ret; }

From patchwork Tue Jan 5 22:58:10 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12000463
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 04/11] mm: vmscan: remove memcg_shrinker_map_size
Date: Tue, 5 Jan 2021 14:58:10 -0800
Message-Id: <20210105225817.1036378-5-shy828301@gmail.com>
In-Reply-To: <20210105225817.1036378-1-shy828301@gmail.com>
References: <20210105225817.1036378-1-shy828301@gmail.com>

Both memcg_shrinker_map_size and shrinker_nr_max are maintained, but the map
size can be calculated from shrinker_nr_max, so it is unnecessary to keep both.
Remove memcg_shrinker_map_size since shrinker_nr_max is also used for iterating
the bitmap. (A worked example of the size calculation follows the diff below.)

Signed-off-by: Yang Shi --- mm/vmscan.c | 12 ++++-------- 1 file changed, 4 insertions(+), 8 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index ddb9f972f856..8da765a85569 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -185,8 +185,7 @@ static LIST_HEAD(shrinker_list); static DECLARE_RWSEM(shrinker_rwsem); #ifdef CONFIG_MEMCG - -static int memcg_shrinker_map_size; +static int shrinker_nr_max; static void memcg_free_shrinker_map_rcu(struct rcu_head *head) { @@ -248,7 +247,7 @@ int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg) return 0; down_read(&shrinker_rwsem); - size = memcg_shrinker_map_size; + size = DIV_ROUND_UP(shrinker_nr_max, BITS_PER_LONG) * sizeof(unsigned long); for_each_node(nid) { map = kvzalloc(sizeof(*map) + size, GFP_KERNEL); if (!map) { @@ -269,7 +268,7 @@ static int memcg_expand_shrinker_maps(int new_id) struct mem_cgroup *memcg; size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long); - old_size = memcg_shrinker_map_size; + old_size = DIV_ROUND_UP(shrinker_nr_max, BITS_PER_LONG) * sizeof(unsigned long); if (size <= old_size) return 0; @@ -286,10 +285,8 @@ static int memcg_expand_shrinker_maps(int new_id) goto out; } } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL); -out: - if (!ret) - memcg_shrinker_map_size = size; +out: return ret; } @@ -321,7 +318,6 @@ void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id) #define SHRINKER_REGISTERING ((struct shrinker *)~0UL) static DEFINE_IDR(shrinker_idr); -static int shrinker_nr_max; static int prealloc_memcg_shrinker(struct shrinker *shrinker) {
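For reference, a worked example of the bitmap size calculation used above in
place of memcg_shrinker_map_size (a standalone sketch; the shrinker count and
the 64-bit unsigned long are assumptions made for the illustration):

  #include <stdio.h>

  #define BITS_PER_LONG (8 * sizeof(unsigned long))
  #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

  int main(void)
  {
          int shrinker_nr_max = 65;       /* hypothetical number of shrinker IDs */
          size_t size = DIV_ROUND_UP(shrinker_nr_max, BITS_PER_LONG) *
                        sizeof(unsigned long);

          /* 65 IDs need two longs, i.e. 16 bytes of bitmap on a 64-bit build. */
          printf("map size for %d shrinker IDs: %zu bytes\n",
                 shrinker_nr_max, size);
          return 0;
  }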
From patchwork Tue Jan 5 22:58:11 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12000465
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 05/11] mm: vmscan: use a new flag to indicate shrinker is registered
Date: Tue, 5 Jan 2021 14:58:11 -0800
Message-Id: <20210105225817.1036378-6-shy828301@gmail.com>
In-Reply-To: <20210105225817.1036378-1-shy828301@gmail.com>
References: <20210105225817.1036378-1-shy828301@gmail.com>

Currently a registered shrinker is indicated by a non-NULL
shrinker->nr_deferred. This approach is fine while nr_deferred is kept at the
shrinker level, but the following patches will move the nr_deferred of
MEMCG_AWARE shrinkers to the memcg level, so their shrinker->nr_deferred will
always be NULL. That would prevent those shrinkers from unregistering
correctly. (A short standalone sketch of the new registration flag handling
follows the diff below.)

Signed-off-by: Yang Shi --- include/linux/shrinker.h | 7 ++++--- mm/vmscan.c | 13 +++++++++---- 2 files changed, 13 insertions(+), 7 deletions(-) diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h index 0f80123650e2..1eac79ce57d4 100644 --- a/include/linux/shrinker.h +++ b/include/linux/shrinker.h @@ -79,13 +79,14 @@ struct shrinker { #define DEFAULT_SEEKS 2 /* A good number if you don't know better. */ /* Flags */ -#define SHRINKER_NUMA_AWARE (1 << 0) -#define SHRINKER_MEMCG_AWARE (1 << 1) +#define SHRINKER_REGISTERED (1 << 0) +#define SHRINKER_NUMA_AWARE (1 << 1) +#define SHRINKER_MEMCG_AWARE (1 << 2) /* * It just makes sense when the shrinker is also MEMCG_AWARE for now, * non-MEMCG_AWARE shrinker should not have this flag set.
*/ -#define SHRINKER_NONSLAB (1 << 2) +#define SHRINKER_NONSLAB (1 << 3) extern int prealloc_shrinker(struct shrinker *shrinker); extern void register_shrinker_prepared(struct shrinker *shrinker); diff --git a/mm/vmscan.c b/mm/vmscan.c index 8da765a85569..9761c7c27412 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -494,6 +494,7 @@ void register_shrinker_prepared(struct shrinker *shrinker) if (shrinker->flags & SHRINKER_MEMCG_AWARE) idr_replace(&shrinker_idr, shrinker, shrinker->id); #endif + shrinker->flags |= SHRINKER_REGISTERED; up_write(&shrinker_rwsem); } @@ -513,13 +514,17 @@ EXPORT_SYMBOL(register_shrinker); */ void unregister_shrinker(struct shrinker *shrinker) { - if (!shrinker->nr_deferred) - return; - if (shrinker->flags & SHRINKER_MEMCG_AWARE) - unregister_memcg_shrinker(shrinker); down_write(&shrinker_rwsem); + if (!(shrinker->flags & SHRINKER_REGISTERED)) { + up_write(&shrinker_rwsem); + return; + } list_del(&shrinker->list); + shrinker->flags &= ~SHRINKER_REGISTERED; up_write(&shrinker_rwsem); + + if (shrinker->flags & SHRINKER_MEMCG_AWARE) + unregister_memcg_shrinker(shrinker); kfree(shrinker->nr_deferred); shrinker->nr_deferred = NULL; }
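The registration state change introduced by this patch can be summarized with
a small standalone sketch (illustration only; the helpers below mimic, but are
not, the kernel's register/unregister paths, while the flag values match the
ones defined in the patch):

  #include <stdio.h>

  #define SHRINKER_REGISTERED  (1 << 0)
  #define SHRINKER_NUMA_AWARE  (1 << 1)
  #define SHRINKER_MEMCG_AWARE (1 << 2)
  #define SHRINKER_NONSLAB     (1 << 3)

  struct shrinker { unsigned int flags; };

  static void register_sketch(struct shrinker *s)
  {
          s->flags |= SHRINKER_REGISTERED;
  }

  static void unregister_sketch(struct shrinker *s)
  {
          /*
           * A shrinker that never completed registration is simply ignored,
           * instead of relying on nr_deferred being non-NULL.
           */
          if (!(s->flags & SHRINKER_REGISTERED))
                  return;
          s->flags &= ~SHRINKER_REGISTERED;
  }

  int main(void)
  {
          struct shrinker s = { .flags = SHRINKER_MEMCG_AWARE };

          unregister_sketch(&s);          /* safe no-op: never registered */
          register_sketch(&s);
          unregister_sketch(&s);
          printf("final flags: %#x\n", s.flags);
          return 0;
  }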
From patchwork Tue Jan 5 22:58:12 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12000467
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 06/11] mm: memcontrol: rename shrinker_map to shrinker_info
Date: Tue, 5 Jan 2021 14:58:12 -0800
Message-Id: <20210105225817.1036378-7-shy828301@gmail.com>
In-Reply-To: <20210105225817.1036378-1-shy828301@gmail.com>
References: <20210105225817.1036378-1-shy828301@gmail.com>

The following patch is going to add nr_deferred into shrinker_map; with that
change shrinker_map will no longer hold only the bitmap, so rename it to a more
general name. This should make the patch that adds nr_deferred cleaner and more
readable, and make review easier.

Signed-off-by: Yang Shi --- include/linux/memcontrol.h | 8 ++--- mm/memcontrol.c | 6 ++-- mm/vmscan.c | 66 +++++++++++++++++++------------------- 3 files changed, 40 insertions(+), 40 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index d128d2842f22..e05bbe8277cc 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -96,7 +96,7 @@ struct lruvec_stat { * Bitmap of shrinker::id corresponding to memcg-aware shrinkers, * which have elements charged to this memcg.
*/ -struct memcg_shrinker_map { +struct memcg_shrinker_info { struct rcu_head rcu; unsigned long map[]; }; @@ -118,7 +118,7 @@ struct mem_cgroup_per_node { struct mem_cgroup_reclaim_iter iter; - struct memcg_shrinker_map __rcu *shrinker_map; + struct memcg_shrinker_info __rcu *shrinker_info; struct rb_node tree_node; /* RB tree node */ unsigned long usage_in_excess;/* Set to the value by which */ @@ -1581,8 +1581,8 @@ static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg) return false; } -extern int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg); -extern void memcg_free_shrinker_maps(struct mem_cgroup *memcg); +extern int memcg_alloc_shrinker_info(struct mem_cgroup *memcg); +extern void memcg_free_shrinker_info(struct mem_cgroup *memcg); extern void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id); #else diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 817dde366258..126f1fd550c8 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -5248,11 +5248,11 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css) struct mem_cgroup *memcg = mem_cgroup_from_css(css); /* - * A memcg must be visible for memcg_expand_shrinker_maps() + * A memcg must be visible for memcg_expand_shrinker_info() * by the time the maps are allocated. So, we allocate maps * here, when for_each_mem_cgroup() can't skip it. */ - if (memcg_alloc_shrinker_maps(memcg)) { + if (memcg_alloc_shrinker_info(memcg)) { mem_cgroup_id_remove(memcg); return -ENOMEM; } @@ -5316,7 +5316,7 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css) vmpressure_cleanup(&memcg->vmpressure); cancel_work_sync(&memcg->high_work); mem_cgroup_remove_from_trees(memcg); - memcg_free_shrinker_maps(memcg); + memcg_free_shrinker_info(memcg); memcg_free_kmem(memcg); mem_cgroup_free(memcg); } diff --git a/mm/vmscan.c b/mm/vmscan.c index 9761c7c27412..0033659abf9e 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -187,20 +187,20 @@ static DECLARE_RWSEM(shrinker_rwsem); #ifdef CONFIG_MEMCG static int shrinker_nr_max; -static void memcg_free_shrinker_map_rcu(struct rcu_head *head) +static void memcg_free_shrinker_info_rcu(struct rcu_head *head) { - kvfree(container_of(head, struct memcg_shrinker_map, rcu)); + kvfree(container_of(head, struct memcg_shrinker_info, rcu)); } -static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg, - int size, int old_size) +static int memcg_expand_one_shrinker_info(struct mem_cgroup *memcg, + int size, int old_size) { - struct memcg_shrinker_map *new, *old; + struct memcg_shrinker_info *new, *old; int nid; for_each_node(nid) { old = rcu_dereference_protected( - mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true); + mem_cgroup_nodeinfo(memcg, nid)->shrinker_info, true); /* Not yet online memcg */ if (!old) return 0; @@ -213,17 +213,17 @@ static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg, memset(new->map, (int)0xff, old_size); memset((void *)new->map + old_size, 0, size - old_size); - rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, new); - call_rcu(&old->rcu, memcg_free_shrinker_map_rcu); + rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, new); + call_rcu(&old->rcu, memcg_free_shrinker_info_rcu); } return 0; } -void memcg_free_shrinker_maps(struct mem_cgroup *memcg) +void memcg_free_shrinker_info(struct mem_cgroup *memcg) { struct mem_cgroup_per_node *pn; - struct memcg_shrinker_map *map; + struct memcg_shrinker_info *info; int nid; if (mem_cgroup_is_root(memcg)) @@ -231,16 +231,16 @@ void 
memcg_free_shrinker_maps(struct mem_cgroup *memcg) for_each_node(nid) { pn = mem_cgroup_nodeinfo(memcg, nid); - map = rcu_dereference_protected(pn->shrinker_map, true); - if (map) - kvfree(map); - rcu_assign_pointer(pn->shrinker_map, NULL); + info = rcu_dereference_protected(pn->shrinker_info, true); + if (info) + kvfree(info); + rcu_assign_pointer(pn->shrinker_info, NULL); } } -int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg) +int memcg_alloc_shrinker_info(struct mem_cgroup *memcg) { - struct memcg_shrinker_map *map; + struct memcg_shrinker_info *info; int nid, size, ret = 0; if (mem_cgroup_is_root(memcg)) @@ -249,20 +249,20 @@ int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg) down_read(&shrinker_rwsem); size = DIV_ROUND_UP(shrinker_nr_max, BITS_PER_LONG) * sizeof(unsigned long); for_each_node(nid) { - map = kvzalloc(sizeof(*map) + size, GFP_KERNEL); - if (!map) { - memcg_free_shrinker_maps(memcg); + info = kvzalloc(sizeof(*info) + size, GFP_KERNEL); + if (!info) { + memcg_free_shrinker_info(memcg); ret = -ENOMEM; break; } - rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map); + rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info); } up_read(&shrinker_rwsem); return ret; } -static int memcg_expand_shrinker_maps(int new_id) +static int memcg_expand_shrinker_info(int new_id) { int size, old_size, ret = 0; struct mem_cgroup *memcg; @@ -279,7 +279,7 @@ static int memcg_expand_shrinker_maps(int new_id) do { if (mem_cgroup_is_root(memcg)) continue; - ret = memcg_expand_one_shrinker_map(memcg, size, old_size); + ret = memcg_expand_one_shrinker_info(memcg, size, old_size); if (ret) { mem_cgroup_iter_break(NULL, memcg); goto out; @@ -293,13 +293,13 @@ static int memcg_expand_shrinker_maps(int new_id) void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id) { if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) { - struct memcg_shrinker_map *map; + struct memcg_shrinker_info *info; rcu_read_lock(); - map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map); + info = rcu_dereference(memcg->nodeinfo[nid]->shrinker_info); /* Pairs with smp mb in shrink_slab() */ smp_mb__before_atomic(); - set_bit(shrinker_id, map->map); + set_bit(shrinker_id, info->map); rcu_read_unlock(); } } @@ -330,7 +330,7 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker) goto unlock; if (id >= shrinker_nr_max) { - if (memcg_expand_shrinker_maps(id)) { + if (memcg_expand_shrinker_info(id)) { idr_remove(&shrinker_idr, id); goto unlock; } @@ -666,7 +666,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg, int priority) { - struct memcg_shrinker_map *map; + struct memcg_shrinker_info *info; unsigned long ret, freed = 0; int i; @@ -676,12 +676,12 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, if (!down_read_trylock(&shrinker_rwsem)) return 0; - map = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_map, - true); - if (unlikely(!map)) + info = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info, + true); + if (unlikely(!info)) goto unlock; - for_each_set_bit(i, map->map, shrinker_nr_max) { + for_each_set_bit(i, info->map, shrinker_nr_max) { struct shrink_control sc = { .gfp_mask = gfp_mask, .nid = nid, @@ -692,7 +692,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, shrinker = idr_find(&shrinker_idr, i); if (unlikely(!shrinker || shrinker == SHRINKER_REGISTERING)) { if (!shrinker) - 
clear_bit(i, map->map); + clear_bit(i, info->map); continue; } @@ -703,7 +703,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, ret = do_shrink_slab(&sc, shrinker, priority); if (ret == SHRINK_EMPTY) { - clear_bit(i, map->map); + clear_bit(i, info->map); /* * After the shrinker reported that it had no objects to * free, but before we cleared the corresponding bit in

From patchwork Tue Jan 5 22:58:13 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12000469
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 07/11] mm: vmscan: add per memcg shrinker nr_deferred
Date: Tue, 5 Jan 2021 14:58:13 -0800
Message-Id: <20210105225817.1036378-8-shy828301@gmail.com>
In-Reply-To: <20210105225817.1036378-1-shy828301@gmail.com>
References: <20210105225817.1036378-1-shy828301@gmail.com>

Currently the number of deferred objects is kept per shrinker, but some slabs, for example the vfs inode/dentry caches, are per memcg. This results in poor isolation among memcgs. Deferred objects are typically generated by __GFP_NOFS allocations, so one memcg with excessive __GFP_NOFS allocations may blow up its deferred count, and other innocent memcgs may then suffer from over-shrinking, excessive reclaim latency, etc.

For example, two workloads run in memcgA and memcgB respectively, and the workload in B is vfs heavy. If the workload in A generates excessive deferred objects, B's vfs cache might be hit heavily (half of its caches dropped) by B's limit reclaim or by global reclaim. We observed such a hit in our production environment, which was running a vfs heavy workload, as shown in the tracing log below:

<...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start: super_cache_scan+0x0/0x1a0 ffff9a83046f3458: nid: 1 objects to shrink 3641681686040 gfp_flags GFP_HIGHUSER_MOVABLE|__GFP_ZERO pgs_scanned 1 lru_pgs 15721 cache items 246404277 delta 31345 total_scan 123202138
<...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end: super_cache_scan+0x0/0x1a0 ffff9a83046f3458: nid: 1 unused scan count 3641681686040 new scan count 3641798379189 total_scan 602 last shrinker return val 123186855

The vfs cache to page cache ratio was 10:1 on this machine, and half of the caches were dropped. This also caused a significant amount of page cache to be dropped because of inode eviction.

Making nr_deferred per memcg for memcg aware shrinkers solves the unfairness and brings better isolation.
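To make that concrete, here is a minimal user-space sketch of the idea, keeping a deferred counter per memcg, per node and per shrinker id instead of one counter per shrinker; the struct, sizes and helper names below are simplified assumptions, not the kernel code:

#include <stdatomic.h>
#include <stdio.h>

#define NR_NODES     2
#define NR_SHRINKERS 4

/* Deferred counters kept per memcg, per node, per shrinker id. */
struct memcg_sketch {
	atomic_long nr_deferred[NR_NODES][NR_SHRINKERS];
};

static struct memcg_sketch memcg_a, memcg_b;

/* Record deferred work for one shrinker inside one memcg on one node. */
static void add_deferred(struct memcg_sketch *memcg, int nid, int id, long nr)
{
	atomic_fetch_add(&memcg->nr_deferred[nid][id], nr);
}

/* Grab and clear the deferred work, as a shrink pass would. */
static long take_deferred(struct memcg_sketch *memcg, int nid, int id)
{
	return atomic_exchange(&memcg->nr_deferred[nid][id], 0);
}

int main(void)
{
	/* memcg A blows up its own deferred count for shrinker 1 on node 0... */
	add_deferred(&memcg_a, 0, 1, 1000000);
	/* ...while memcg B's count for the same shrinker stays isolated. */
	add_deferred(&memcg_b, 0, 1, 10);

	printf("A defers %ld, B defers %ld\n",
	       take_deferred(&memcg_a, 0, 1), take_deferred(&memcg_b, 0, 1));
	return 0;
}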
When memcg is not enabled (!CONFIG_MEMCG or memcg disabled), the shrinker's nr_deferred would be used. And non memcg aware shrinkers use shrinker's nr_deferred all the time. Signed-off-by: Yang Shi --- include/linux/memcontrol.h | 7 +++--- mm/vmscan.c | 49 +++++++++++++++++++++++++------------- 2 files changed, 37 insertions(+), 19 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index e05bbe8277cc..5599082df623 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -93,12 +93,13 @@ struct lruvec_stat { }; /* - * Bitmap of shrinker::id corresponding to memcg-aware shrinkers, - * which have elements charged to this memcg. + * Bitmap and deferred work of shrinker::id corresponding to memcg-aware + * shrinkers, which have elements charged to this memcg. */ struct memcg_shrinker_info { struct rcu_head rcu; - unsigned long map[]; + unsigned long *map; + atomic_long_t *nr_deferred; }; /* diff --git a/mm/vmscan.c b/mm/vmscan.c index 0033659abf9e..72259253e414 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -193,10 +193,12 @@ static void memcg_free_shrinker_info_rcu(struct rcu_head *head) } static int memcg_expand_one_shrinker_info(struct mem_cgroup *memcg, - int size, int old_size) + int m_size, int d_size, + int old_m_size, int old_d_size) { struct memcg_shrinker_info *new, *old; int nid; + int size = m_size + d_size; for_each_node(nid) { old = rcu_dereference_protected( @@ -209,9 +211,18 @@ static int memcg_expand_one_shrinker_info(struct mem_cgroup *memcg, if (!new) return -ENOMEM; - /* Set all old bits, clear all new bits */ - memset(new->map, (int)0xff, old_size); - memset((void *)new->map + old_size, 0, size - old_size); + new->map = (unsigned long *)((unsigned long)new + sizeof(*new)); + new->nr_deferred = (atomic_long_t *)((unsigned long)new + + sizeof(*new) + m_size); + + /* map: set all old bits, clear all new bits */ + memset(new->map, (int)0xff, old_m_size); + memset((void *)new->map + old_m_size, 0, m_size - old_m_size); + /* nr_deferred: copy old values, clear all new values */ + memcpy((void *)new->nr_deferred, (void *)old->nr_deferred, + old_d_size); + memset((void *)new->nr_deferred + old_d_size, 0, + d_size - old_d_size); rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, new); call_rcu(&old->rcu, memcg_free_shrinker_info_rcu); @@ -226,9 +237,6 @@ void memcg_free_shrinker_info(struct mem_cgroup *memcg) struct memcg_shrinker_info *info; int nid; - if (mem_cgroup_is_root(memcg)) - return; - for_each_node(nid) { pn = mem_cgroup_nodeinfo(memcg, nid); info = rcu_dereference_protected(pn->shrinker_info, true); @@ -242,12 +250,13 @@ int memcg_alloc_shrinker_info(struct mem_cgroup *memcg) { struct memcg_shrinker_info *info; int nid, size, ret = 0; - - if (mem_cgroup_is_root(memcg)) - return 0; + int m_size, d_size = 0; down_read(&shrinker_rwsem); - size = DIV_ROUND_UP(shrinker_nr_max, BITS_PER_LONG) * sizeof(unsigned long); + m_size = DIV_ROUND_UP(shrinker_nr_max, BITS_PER_LONG) * sizeof(unsigned long); + d_size = shrinker_nr_max * sizeof(atomic_long_t); + size = m_size + d_size; + for_each_node(nid) { info = kvzalloc(sizeof(*info) + size, GFP_KERNEL); if (!info) { @@ -255,6 +264,9 @@ int memcg_alloc_shrinker_info(struct mem_cgroup *memcg) ret = -ENOMEM; break; } + info->map = (unsigned long *)((unsigned long)info + sizeof(*info)); + info->nr_deferred = (atomic_long_t *)((unsigned long)info + + sizeof(*info) + m_size); rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info); } up_read(&shrinker_rwsem); @@ -265,10 +277,16 @@ 
int memcg_alloc_shrinker_info(struct mem_cgroup *memcg) static int memcg_expand_shrinker_info(int new_id) { int size, old_size, ret = 0; + int m_size, d_size = 0; + int old_m_size, old_d_size = 0; struct mem_cgroup *memcg; - size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long); - old_size = DIV_ROUND_UP(shrinker_nr_max, BITS_PER_LONG) * sizeof(unsigned long); + m_size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long); + d_size = (new_id + 1) * sizeof(atomic_long_t); + size = m_size + d_size; + old_m_size = DIV_ROUND_UP(shrinker_nr_max, BITS_PER_LONG) * sizeof(unsigned long); + old_d_size = shrinker_nr_max * sizeof(atomic_long_t); + old_size = old_m_size + old_d_size; if (size <= old_size) return 0; @@ -277,9 +295,8 @@ static int memcg_expand_shrinker_info(int new_id) memcg = mem_cgroup_iter(NULL, NULL, NULL); do { - if (mem_cgroup_is_root(memcg)) - continue; - ret = memcg_expand_one_shrinker_info(memcg, size, old_size); + ret = memcg_expand_one_shrinker_info(memcg, m_size, d_size, + old_m_size, old_d_size); if (ret) { mem_cgroup_iter_break(NULL, memcg); goto out;
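The expand path above packs the shrinker bitmap and the nr_deferred counters into a single per-node allocation and carries the old contents over when a new shrinker id forces a resize. Below is a stand-alone user-space sketch of that grow-and-copy step, with simplified types and made-up sizes rather than the kernel helpers:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* One allocation: a small header, then the bitmap words, then the counters. */
struct info_sketch {
	unsigned long *map;      /* points just past the header          */
	long *nr_deferred;       /* points just past the bitmap portion  */
};

static struct info_sketch *alloc_info(int map_words, int nr_counters)
{
	size_t m_size = map_words * sizeof(unsigned long);
	size_t d_size = nr_counters * sizeof(long);
	struct info_sketch *info = calloc(1, sizeof(*info) + m_size + d_size);

	if (!info)
		return NULL;
	info->map = (unsigned long *)(info + 1);
	info->nr_deferred = (long *)((char *)(info + 1) + m_size);
	return info;
}

/* Grow the allocation: keep the old bitmap bits and old deferred values. */
static struct info_sketch *expand_info(struct info_sketch *old,
				       int old_words, int old_counters,
				       int new_words, int new_counters)
{
	struct info_sketch *grown = alloc_info(new_words, new_counters);

	if (!grown)
		return NULL;
	memcpy(grown->map, old->map, old_words * sizeof(unsigned long));
	memcpy(grown->nr_deferred, old->nr_deferred, old_counters * sizeof(long));
	free(old);
	return grown;
}

int main(void)
{
	struct info_sketch *info = alloc_info(1, 4);

	info->map[0] = 0x5;          /* shrinkers 0 and 2 have objects    */
	info->nr_deferred[2] = 128;  /* shrinker 2 carries deferred work  */

	info = expand_info(info, 1, 4, 2, 8);
	printf("map[0]=%#lx nr_deferred[2]=%ld nr_deferred[7]=%ld\n",
	       info->map[0], info->nr_deferred[2], info->nr_deferred[7]);
	free(info);
	return 0;
}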
From patchwork Tue Jan 5 22:58:14 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12000471
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 08/11] mm: vmscan: use per memcg nr_deferred of shrinker
Date: Tue, 5 Jan 2021 14:58:14 -0800
Message-Id: <20210105225817.1036378-9-shy828301@gmail.com>
In-Reply-To: <20210105225817.1036378-1-shy828301@gmail.com>
References: <20210105225817.1036378-1-shy828301@gmail.com>

Use per memcg's nr_deferred for memcg aware shrinkers. The shrinker's nr_deferred will be used in the following cases:
1. Non memcg aware shrinkers
2. !CONFIG_MEMCG
3.
memcg is disabled by boot parameter Signed-off-by: Yang Shi --- mm/vmscan.c | 81 +++++++++++++++++++++++++++++++++++++++++++++-------- 1 file changed, 69 insertions(+), 12 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 72259253e414..f20ed8e928c2 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -372,6 +372,27 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker) up_write(&shrinker_rwsem); } +static long count_nr_deferred_memcg(int nid, struct shrinker *shrinker, + struct mem_cgroup *memcg) +{ + struct memcg_shrinker_info *info; + + info = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info, + true); + return atomic_long_xchg(&info->nr_deferred[shrinker->id], 0); +} + +static long set_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker, + struct mem_cgroup *memcg) +{ + struct memcg_shrinker_info *info; + + info = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info, + true); + + return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]); +} + static bool cgroup_reclaim(struct scan_control *sc) { return sc->target_mem_cgroup; @@ -410,6 +431,18 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker) { } +static long count_nr_deferred_memcg(int nid, struct shrinker *shrinker, + struct mem_cgroup *memcg) +{ + return 0; +} + +static long set_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker, + struct mem_cgroup *memcg) +{ + return 0; +} + static bool cgroup_reclaim(struct scan_control *sc) { return false; @@ -421,6 +454,39 @@ static bool writeback_throttling_sane(struct scan_control *sc) } #endif +static long count_nr_deferred(struct shrinker *shrinker, + struct shrink_control *sc) +{ + int nid = sc->nid; + + if (!(shrinker->flags & SHRINKER_NUMA_AWARE)) + nid = 0; + + if (sc->memcg && + (shrinker->flags & SHRINKER_MEMCG_AWARE)) + return count_nr_deferred_memcg(nid, shrinker, + sc->memcg); + + return atomic_long_xchg(&shrinker->nr_deferred[nid], 0); +} + + +static long set_nr_deferred(long nr, struct shrinker *shrinker, + struct shrink_control *sc) +{ + int nid = sc->nid; + + if (!(shrinker->flags & SHRINKER_NUMA_AWARE)) + nid = 0; + + if (sc->memcg && + (shrinker->flags & SHRINKER_MEMCG_AWARE)) + return set_nr_deferred_memcg(nr, nid, shrinker, + sc->memcg); + + return atomic_long_add_return(nr, &shrinker->nr_deferred[nid]); +} + /* * This misses isolated pages which are not accounted for to save counters. * As the data only determines if reclaim or compaction continues, it is @@ -558,14 +624,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, long freeable; long nr; long new_nr; - int nid = shrinkctl->nid; long batch_size = shrinker->batch ? shrinker->batch : SHRINK_BATCH; long scanned = 0, next_deferred; - if (!(shrinker->flags & SHRINKER_NUMA_AWARE)) - nid = 0; - freeable = shrinker->count_objects(shrinker, shrinkctl); if (freeable == 0 || freeable == SHRINK_EMPTY) return freeable; @@ -575,7 +637,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, * and zero it so that other concurrent shrinker invocations * don't also do this scanning work. */ - nr = atomic_long_xchg(&shrinker->nr_deferred[nid], 0); + nr = count_nr_deferred(shrinker, shrinkctl); total_scan = nr; if (shrinker->seeks) { @@ -666,14 +728,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, next_deferred = 0; /* * move the unused scan count back into the shrinker in a - * manner that handles concurrent updates. If we exhausted the - * scan, there is no need to do an update. 
+ * manner that handles concurrent updates. */ - if (next_deferred > 0) - new_nr = atomic_long_add_return(next_deferred, - &shrinker->nr_deferred[nid]); - else - new_nr = atomic_long_read(&shrinker->nr_deferred[nid]); + new_nr = set_nr_deferred(next_deferred, shrinker, shrinkctl); trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan); return freed;
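The count_nr_deferred()/set_nr_deferred() helpers added above pick the per memcg counters only when the reclaim targets a memcg and the shrinker is SHRINKER_MEMCG_AWARE; everything else keeps using the shrinker's own array. A compact user-space sketch of that dispatch, with flag names and types that are simplified assumptions rather than the kernel's:

#include <stdatomic.h>
#include <stdio.h>

#define MEMCG_AWARE 0x1
#define NUMA_AWARE  0x2
#define NR_NODES    2

struct shrinker_sketch {
	unsigned flags;
	atomic_long nr_deferred[NR_NODES];   /* global fallback store */
};

struct memcg_sketch {
	atomic_long nr_deferred[NR_NODES];   /* per-memcg store       */
};

/* Take and clear the deferred count from whichever store applies. */
static long count_deferred(struct shrinker_sketch *s, struct memcg_sketch *memcg, int nid)
{
	if (!(s->flags & NUMA_AWARE))
		nid = 0;
	if (memcg && (s->flags & MEMCG_AWARE))
		return atomic_exchange(&memcg->nr_deferred[nid], 0);
	return atomic_exchange(&s->nr_deferred[nid], 0);
}

/* Put unused work back into the same store, returning the new total. */
static long set_deferred(long nr, struct shrinker_sketch *s, struct memcg_sketch *memcg, int nid)
{
	if (!(s->flags & NUMA_AWARE))
		nid = 0;
	if (memcg && (s->flags & MEMCG_AWARE))
		return atomic_fetch_add(&memcg->nr_deferred[nid], nr) + nr;
	return atomic_fetch_add(&s->nr_deferred[nid], nr) + nr;
}

int main(void)
{
	static struct shrinker_sketch s = { .flags = MEMCG_AWARE };
	static struct memcg_sketch m;

	set_deferred(100, &s, &m, 1);   /* memcg-targeted reclaim path */
	set_deferred(7, &s, NULL, 1);   /* global reclaim path         */
	printf("memcg: %ld, global: %ld\n",
	       count_deferred(&s, &m, 1), count_deferred(&s, NULL, 1));
	return 0;
}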
From patchwork Tue Jan 5 22:58:15 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12000473
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 09/11] mm: vmscan: don't need allocate shrinker->nr_deferred for memcg aware shrinkers
Date: Tue, 5 Jan 2021 14:58:15 -0800
Message-Id: <20210105225817.1036378-10-shy828301@gmail.com>
In-Reply-To: <20210105225817.1036378-1-shy828301@gmail.com>
References: <20210105225817.1036378-1-shy828301@gmail.com>

Now nr_deferred is available at the per memcg level for memcg aware shrinkers, so there is no need to allocate shrinker->nr_deferred for such shrinkers anymore. prealloc_memcg_shrinker() returns -ENOSYS if !CONFIG_MEMCG or if memcg is disabled by the kernel command line; the shrinker's SHRINKER_MEMCG_AWARE flag is then cleared. This makes the implementation of this patch simpler.
Signed-off-by: Yang Shi --- mm/vmscan.c | 33 ++++++++++++++++++--------------- 1 file changed, 18 insertions(+), 15 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index f20ed8e928c2..d9795fb0f1c5 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -340,6 +340,9 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker) { int id, ret = -ENOMEM; + if (mem_cgroup_disabled()) + return -ENOSYS; + down_write(&shrinker_rwsem); /* This may call shrinker, so it must use down_read_trylock() */ id = idr_alloc(&shrinker_idr, SHRINKER_REGISTERING, 0, 0, GFP_KERNEL); @@ -424,7 +427,7 @@ static bool writeback_throttling_sane(struct scan_control *sc) #else static int prealloc_memcg_shrinker(struct shrinker *shrinker) { - return 0; + return -ENOSYS; } static void unregister_memcg_shrinker(struct shrinker *shrinker) @@ -535,8 +538,20 @@ unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone */ int prealloc_shrinker(struct shrinker *shrinker) { - unsigned int size = sizeof(*shrinker->nr_deferred); + unsigned int size; + int err; + + if (shrinker->flags & SHRINKER_MEMCG_AWARE) { + err = prealloc_memcg_shrinker(shrinker); + if (!err) + return 0; + if (err != -ENOSYS) + return err; + + shrinker->flags &= ~SHRINKER_MEMCG_AWARE; + } + size = sizeof(*shrinker->nr_deferred); if (shrinker->flags & SHRINKER_NUMA_AWARE) size *= nr_node_ids; @@ -544,26 +559,14 @@ int prealloc_shrinker(struct shrinker *shrinker) if (!shrinker->nr_deferred) return -ENOMEM; - if (shrinker->flags & SHRINKER_MEMCG_AWARE) { - if (prealloc_memcg_shrinker(shrinker)) - goto free_deferred; - } return 0; - -free_deferred: - kfree(shrinker->nr_deferred); - shrinker->nr_deferred = NULL; - return -ENOMEM; } void free_prealloced_shrinker(struct shrinker *shrinker) { - if (!shrinker->nr_deferred) - return; - if (shrinker->flags & SHRINKER_MEMCG_AWARE) - unregister_memcg_shrinker(shrinker); + return unregister_memcg_shrinker(shrinker); kfree(shrinker->nr_deferred); shrinker->nr_deferred = NULL;
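The fallback flow in the diff above is: try the memcg path first, treat -ENOSYS as "no memcg support here", clear the memcg-aware flag, and only then allocate the plain nr_deferred array. A small user-space sketch of that flow, assuming a simplified shrinker struct and errno-style returns rather than the kernel API:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define MEMCG_AWARE 0x1
#define NR_NODES    2

static int memcg_enabled;        /* flip to 1 to model memcg being available */

struct shrinker_sketch {
	unsigned flags;
	long *nr_deferred;
};

/* Models prealloc_memcg_shrinker(): -ENOSYS means no memcg support. */
static int prealloc_memcg(void)
{
	if (!memcg_enabled)
		return -ENOSYS;
	/* a real implementation would reserve a shrinker id here */
	return 0;
}

static int prealloc(struct shrinker_sketch *s)
{
	if (s->flags & MEMCG_AWARE) {
		int err = prealloc_memcg();

		if (!err)
			return 0;          /* per-memcg storage will be used   */
		if (err != -ENOSYS)
			return err;        /* a real failure, e.g. -ENOMEM     */
		s->flags &= ~MEMCG_AWARE;  /* degrade to the plain global path */
	}
	s->nr_deferred = calloc(NR_NODES, sizeof(long));
	return s->nr_deferred ? 0 : -ENOMEM;
}

int main(void)
{
	struct shrinker_sketch s = { .flags = MEMCG_AWARE };

	if (prealloc(&s))
		return 1;
	printf("memcg aware: %s, nr_deferred %sallocated\n",
	       (s.flags & MEMCG_AWARE) ? "yes" : "no",
	       s.nr_deferred ? "" : "not ");
	free(s.nr_deferred);
	return 0;
}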
From patchwork Tue Jan 5 22:58:16 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12000475
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 10/11] mm: memcontrol: reparent nr_deferred when memcg offline
Date: Tue, 5 Jan 2021 14:58:16 -0800
Message-Id: <20210105225817.1036378-11-shy828301@gmail.com>
In-Reply-To: <20210105225817.1036378-1-shy828301@gmail.com>
References: <20210105225817.1036378-1-shy828301@gmail.com>

Now that the shrinker's nr_deferred is per memcg for memcg aware shrinkers, add the child memcg's nr_deferred to the parent's corresponding nr_deferred when the memcg goes offline.

Signed-off-by: Yang Shi --- include/linux/memcontrol.h | 1 + mm/memcontrol.c | 1 + mm/vmscan.c | 29 +++++++++++++++++++++++++++++ 3 files changed, 31 insertions(+) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 5599082df623..d1e52e916cc2 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1586,6 +1586,7 @@ extern int memcg_alloc_shrinker_info(struct mem_cgroup *memcg); extern void memcg_free_shrinker_info(struct mem_cgroup *memcg); extern void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id); +extern void memcg_reparent_shrinker_deferred(struct mem_cgroup *memcg); #else #define mem_cgroup_sockets_enabled 0 static inline void mem_cgroup_sk_alloc(struct sock *sk) { }; diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 126f1fd550c8..19e555675582 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -5284,6 +5284,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css) page_counter_set_low(&memcg->memory, 0); memcg_offline_kmem(memcg); + memcg_reparent_shrinker_deferred(memcg); wb_memcg_offline(memcg); drain_all_stock(memcg); diff --git a/mm/vmscan.c b/mm/vmscan.c index d9795fb0f1c5..71056057d26d 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -396,6 +396,35 @@ static long set_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker, return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]); } +void memcg_reparent_shrinker_deferred(struct mem_cgroup *memcg) +{ + int i, nid; + long nr; + struct mem_cgroup *parent; + struct memcg_shrinker_info *child_info, *parent_info; + + parent = parent_mem_cgroup(memcg); + if (!parent) + parent = root_mem_cgroup; + + /* Prevent from concurrent shrinker_info expand */ + down_read(&shrinker_rwsem); + for_each_node(nid) { + child_info = rcu_dereference_protected( + memcg->nodeinfo[nid]->shrinker_info, + true); + parent_info = rcu_dereference_protected( + parent->nodeinfo[nid]->shrinker_info, + true); + for (i = 0; i < shrinker_nr_max; i++) { + nr = atomic_long_read(&child_info->nr_deferred[i]); + atomic_long_add(nr, + &parent_info->nr_deferred[i]); + } + } + up_read(&shrinker_rwsem); +} + static bool cgroup_reclaim(struct scan_control *sc) { return sc->target_mem_cgroup;
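memcg_reparent_shrinker_deferred() above folds every child counter into the parent's slot for the same shrinker id on the same node. A tiny user-space sketch of that step, with assumed array sizes and simplified types:

#include <stdatomic.h>
#include <stdio.h>

#define NR_NODES     2
#define NR_SHRINKERS 3

struct memcg_sketch {
	atomic_long nr_deferred[NR_NODES][NR_SHRINKERS];
};

/* Fold a dying child's deferred work into its parent, per node and id. */
static void reparent_deferred(struct memcg_sketch *parent, struct memcg_sketch *child)
{
	for (int nid = 0; nid < NR_NODES; nid++)
		for (int i = 0; i < NR_SHRINKERS; i++) {
			long nr = atomic_load(&child->nr_deferred[nid][i]);
			atomic_fetch_add(&parent->nr_deferred[nid][i], nr);
		}
}

int main(void)
{
	static struct memcg_sketch parent, child;

	atomic_store(&child.nr_deferred[1][2], 40);
	atomic_store(&parent.nr_deferred[1][2], 2);
	reparent_deferred(&parent, &child);
	printf("parent nid=1 id=2: %ld\n",
	       (long)atomic_load(&parent.nr_deferred[1][2]));
	return 0;
}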
From patchwork Tue Jan 5 22:58:17 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12000477
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 11/11] mm: vmscan: shrink deferred objects proportional to priority
Date: Tue, 5 Jan 2021 14:58:17 -0800
Message-Id: <20210105225817.1036378-12-shy828301@gmail.com>
In-Reply-To: <20210105225817.1036378-1-shy828301@gmail.com>
References: <20210105225817.1036378-1-shy828301@gmail.com>

The number of deferred objects can wind up at an absurd value, which results in clamping of slab objects. That is undesirable for sustaining the working set. So shrink deferred objects proportionally to the reclaim priority and cap nr_deferred at twice the number of cache items.

The idea is borrowed from Dave Chinner's patch:
https://lore.kernel.org/linux-xfs/20191031234618.15403-13-david@fromorbit.com/

Tested with a kernel build and a vfs metadata heavy workload; no regression has been spotted so far.

Signed-off-by: Yang Shi --- mm/vmscan.c | 40 +++++----------------------------------- 1 file changed, 5 insertions(+), 35 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 71056057d26d..6832f1d24d2b 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -671,7 +671,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, */ nr = count_nr_deferred(shrinker, shrinkctl); - total_scan = nr; if (shrinker->seeks) { delta = freeable >> priority; delta *= 4; @@ -685,37 +684,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, delta = freeable / 2; } + total_scan = nr >> priority; total_scan += delta; - if (total_scan < 0) { - pr_err("shrink_slab: %pS negative objects to delete nr=%ld\n", - shrinker->scan_objects, total_scan); - total_scan = freeable; - next_deferred = nr; - } else - next_deferred = total_scan; - - /* - * We need to avoid excessive windup on filesystem shrinkers - * due to large numbers of GFP_NOFS allocations causing the - * shrinkers to return -1 all the time. This results in a large - * nr being built up so when a shrink that can do some work - * comes along it empties the entire cache due to nr >>> - * freeable. This is bad for sustaining a working set in - * memory. - * - * Hence only allow the shrinker to scan the entire cache when - * a large delta change is calculated directly. - */ - if (delta < freeable / 4) - total_scan = min(total_scan, freeable / 2); - - /* - * Avoid risking looping forever due to too large nr value: - * never try to free more than twice the estimate number of - * freeable entries.
- */ - if (total_scan > freeable * 2) - total_scan = freeable * 2; + total_scan = min(total_scan, (2 * freeable)); trace_mm_shrink_slab_start(shrinker, shrinkctl, nr, freeable, delta, total_scan, priority); @@ -754,10 +725,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, cond_resched(); } - if (next_deferred >= scanned) - next_deferred -= scanned; - else - next_deferred = 0; + next_deferred = max_t(long, (nr - scanned), 0) + total_scan; + next_deferred = min(next_deferred, (2 * freeable)); + /* * move the unused scan count back into the shrinker in a * manner that handles concurrent updates.
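With the windup logic removed above, deferred work feeds the scan target only in proportion to the reclaim priority, and both total_scan and next_deferred are capped at twice the freeable objects. A stand-alone sketch of the arithmetic under assumed numbers (not the kernel function itself):

#include <stdio.h>

static long min_long(long a, long b) { return a < b ? a : b; }
static long max_long(long a, long b) { return a > b ? a : b; }

/*
 * Mirrors the new do_shrink_slab() arithmetic under assumed inputs:
 *   delta      = 4 * (freeable >> priority) / seeks
 *   total_scan = (deferred >> priority) + delta, capped at 2 * freeable
 *   next_def   = max(deferred - scanned, 0) + total_scan, same cap
 */
static void shrink_math(long freeable, long deferred, int priority, long scanned)
{
	int seeks = 2;                                   /* DEFAULT_SEEKS-like value */
	long delta = 4 * (freeable >> priority) / seeks;
	long total_scan = min_long((deferred >> priority) + delta, 2 * freeable);
	long next = min_long(max_long(deferred - scanned, 0) + total_scan,
			     2 * freeable);

	printf("freeable=%ld deferred=%ld prio=%d -> total_scan=%ld next_deferred=%ld\n",
	       freeable, deferred, priority, total_scan, next);
}

int main(void)
{
	/* A huge deferred backlog no longer empties the cache at low scan priority... */
	shrink_math(1000, 1000000, 12, 200);
	/* ...while higher priority still drives scanning up to the 2 * freeable cap. */
	shrink_math(1000, 1000000, 2, 200);
	return 0;
}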