From patchwork Mon Dec 14 22:37:14 2020
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v2 PATCH 1/9] mm: vmscan: use nid from shrink_control for tracepoint
Date: Mon, 14 Dec 2020 14:37:14 -0800
Message-Id: <20201214223722.232537-2-shy828301@gmail.com>
In-Reply-To: <20201214223722.232537-1-shy828301@gmail.com>
References: <20201214223722.232537-1-shy828301@gmail.com>

The tracepoint's nid should show which node the shrink happens on. The start
tracepoint uses the nid from shrinkctl, but that nid may be reset to 0 before
the end tracepoint fires if the shrinker is not NUMA aware, so the tracing log
may show a shrink starting on one node but ending on another, which is
confusing. The following patch will also stop using nid directly in
do_shrink_slab(), so this change helps clean up the code as well.
Signed-off-by: Yang Shi
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7b4e31eac2cf..48c06c48b97e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -537,7 +537,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	else
 		new_nr = atomic_long_read(&shrinker->nr_deferred[nid]);
 
-	trace_mm_shrink_slab_end(shrinker, nid, freed, nr, new_nr, total_scan);
+	trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan);
 
 	return freed;
 }

From patchwork Mon Dec 14 22:37:15 2020
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v2 PATCH 2/9] mm: memcontrol: use shrinker_rwsem to protect shrinker_maps allocation
Date: Mon, 14 Dec 2020 14:37:15 -0800
Message-Id: <20201214223722.232537-3-shy828301@gmail.com>
In-Reply-To: <20201214223722.232537-1-shy828301@gmail.com>
References: <20201214223722.232537-1-shy828301@gmail.com>

Since memcg_shrinker_map_size can only be changed while holding shrinker_rwsem
exclusively, the read side can be protected by holding the read lock, which
makes a dedicated mutex superfluous.
This should not exacerbate contention on shrinker_rwsem, since only one
read-side critical section is added.

Signed-off-by: Yang Shi
Acked-by: Johannes Weiner
---
 mm/internal.h   |  1 +
 mm/memcontrol.c | 17 +++++++----------
 mm/vmscan.c     |  2 +-
 3 files changed, 9 insertions(+), 11 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index c43ccdddb0f6..10c79d199aaa 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -108,6 +108,7 @@ extern unsigned long highest_memmap_pfn;
 /*
  * in mm/vmscan.c:
  */
+extern struct rw_semaphore shrinker_rwsem;
 extern int isolate_lru_page(struct page *page);
 extern void putback_lru_page(struct page *page);
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 29459a6ce1c7..ed942734235f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -394,8 +394,8 @@ DEFINE_STATIC_KEY_FALSE(memcg_kmem_enabled_key);
 EXPORT_SYMBOL(memcg_kmem_enabled_key);
 #endif
 
+/* It can only be changed while holding shrinker_rwsem exclusively */
 static int memcg_shrinker_map_size;
-static DEFINE_MUTEX(memcg_shrinker_map_mutex);
 
 static void memcg_free_shrinker_map_rcu(struct rcu_head *head)
 {
@@ -408,8 +408,6 @@ static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg,
 	struct memcg_shrinker_map *new, *old;
 	int nid;
 
-	lockdep_assert_held(&memcg_shrinker_map_mutex);
-
 	for_each_node(nid) {
 		old = rcu_dereference_protected(
 			mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true);
@@ -458,7 +456,7 @@ static int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg)
 	if (mem_cgroup_is_root(memcg))
 		return 0;
 
-	mutex_lock(&memcg_shrinker_map_mutex);
+	down_read(&shrinker_rwsem);
 	size = memcg_shrinker_map_size;
 	for_each_node(nid) {
 		map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid);
@@ -469,7 +467,7 @@ static int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg)
 		}
 		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map);
 	}
-	mutex_unlock(&memcg_shrinker_map_mutex);
+	up_read(&shrinker_rwsem);
 
 	return ret;
 }
@@ -484,9 +482,8 @@ int memcg_expand_shrinker_maps(int new_id)
 	if (size <= old_size)
 		return 0;
 
-	mutex_lock(&memcg_shrinker_map_mutex);
 	if (!root_mem_cgroup)
-		goto unlock;
+		goto out;
 
 	for_each_mem_cgroup(memcg) {
 		if (mem_cgroup_is_root(memcg))
@@ -494,13 +491,13 @@ int memcg_expand_shrinker_maps(int new_id)
 		ret = memcg_expand_one_shrinker_map(memcg, size, old_size);
 		if (ret) {
 			mem_cgroup_iter_break(NULL, memcg);
-			goto unlock;
+			goto out;
 		}
 	}
-unlock:
+out:
 	if (!ret)
 		memcg_shrinker_map_size = size;
-	mutex_unlock(&memcg_shrinker_map_mutex);
+
 	return ret;
 }
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 48c06c48b97e..912c044301dd 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -184,7 +184,7 @@ static void set_task_reclaim_state(struct task_struct *task,
 }
 
 static LIST_HEAD(shrinker_list);
-static DECLARE_RWSEM(shrinker_rwsem);
+DECLARE_RWSEM(shrinker_rwsem);
 
 #ifdef CONFIG_MEMCG

From patchwork Mon Dec 14 22:37:16 2020
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v2 PATCH 3/9] mm: vmscan: guarantee shrink_slab_memcg() sees valid shrinker_maps for online memcg
Date: Mon, 14 Dec 2020 14:37:16 -0800
Message-Id: <20201214223722.232537-4-shy828301@gmail.com>
In-Reply-To: <20201214223722.232537-1-shy828301@gmail.com>
References: <20201214223722.232537-1-shy828301@gmail.com>

shrink_slab_memcg() races with mem_cgroup_css_online(). Seeing the CSS_ONLINE
flag in shrink_slab_memcg()->mem_cgroup_online() does not guarantee that we
will also see memcg->nodeinfo[nid]->shrinker_map != NULL. This may occur
because of store/load reordering on non-x86 processors:

	CPU A			CPU B
	store shrinker_map	load CSS_ONLINE
	store CSS_ONLINE	load shrinker_map

The required ordering can be guaranteed by an smp_wmb()/smp_rmb() pair. The
same barrier pair will also guarantee the ordering between shrinker_deferred
and CSS_ONLINE for the following patches.
Signed-off-by: Yang Shi
---
 mm/memcontrol.c | 7 +++++++
 mm/vmscan.c     | 8 +++++---
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ed942734235f..3d4ddbb84a01 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5406,6 +5406,13 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 		return -ENOMEM;
 	}
 
+	/*
+	 * Barrier for CSS_ONLINE, so that shrink_slab_memcg() sees shrinker_maps
+	 * and shrinker_deferred before CSS_ONLINE. It pairs with the read barrier
+	 * in shrink_slab_memcg().
+	 */
+	smp_wmb();
+
 	/* Online state pins memcg ID, memcg ID pins CSS */
 	refcount_set(&memcg->id.ref, 1);
 	css_get(css);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 912c044301dd..9b31b9c419ec 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -552,13 +552,15 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 	if (!mem_cgroup_online(memcg))
 		return 0;
 
+	/* Pairs with the write barrier in mem_cgroup_css_online() */
+	smp_rmb();
+
 	if (!down_read_trylock(&shrinker_rwsem))
 		return 0;
 
+	/* Once memcg is online it can't be NULL */
 	map = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_map,
 					true);
-	if (unlikely(!map))
-		goto unlock;
 
 	for_each_set_bit(i, map->map, shrinker_nr_max) {
 		struct shrink_control sc = {
@@ -612,7 +614,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 			break;
 		}
 	}
-unlock:
+
 	up_read(&shrinker_rwsem);
 	return freed;
 }

From patchwork Mon Dec 14 22:37:17 2020
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v2 PATCH 4/9] mm: vmscan: use a new flag to indicate shrinker is registered
Date: Mon, 14 Dec 2020 14:37:17 -0800
Message-Id: <20201214223722.232537-5-shy828301@gmail.com>
In-Reply-To: <20201214223722.232537-1-shy828301@gmail.com>
References: <20201214223722.232537-1-shy828301@gmail.com>

Currently a registered shrinker is indicated by a non-NULL
shrinker->nr_deferred. This works while nr_deferred lives at the shrinker
level, but the following patches will move the nr_deferred of MEMCG_AWARE
shrinkers to the memcg level, so their shrinker->nr_deferred will always be
NULL, which would prevent those shrinkers from unregistering correctly.
Introduce a new flag to indicate whether a shrinker has been registered.

Signed-off-by: Yang Shi
---
 include/linux/shrinker.h |  7 ++++---
 mm/vmscan.c              | 13 +++++++++----
 2 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 0f80123650e2..1eac79ce57d4 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -79,13 +79,14 @@ struct shrinker {
 #define DEFAULT_SEEKS 2 /* A good number if you don't know better. */
 
 /* Flags */
-#define SHRINKER_NUMA_AWARE	(1 << 0)
-#define SHRINKER_MEMCG_AWARE	(1 << 1)
+#define SHRINKER_REGISTERED	(1 << 0)
+#define SHRINKER_NUMA_AWARE	(1 << 1)
+#define SHRINKER_MEMCG_AWARE	(1 << 2)
 /*
  * It just makes sense when the shrinker is also MEMCG_AWARE for now,
  * non-MEMCG_AWARE shrinker should not have this flag set.
  */
-#define SHRINKER_NONSLAB	(1 << 2)
+#define SHRINKER_NONSLAB	(1 << 3)
 
 extern int prealloc_shrinker(struct shrinker *shrinker);
 extern void register_shrinker_prepared(struct shrinker *shrinker);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9b31b9c419ec..16c9d2aeeb26 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -378,6 +378,7 @@ void register_shrinker_prepared(struct shrinker *shrinker)
 	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
 		idr_replace(&shrinker_idr, shrinker, shrinker->id);
 #endif
+	shrinker->flags |= SHRINKER_REGISTERED;
 	up_write(&shrinker_rwsem);
 }
 
@@ -397,13 +398,17 @@ EXPORT_SYMBOL(register_shrinker);
  */
 void unregister_shrinker(struct shrinker *shrinker)
 {
-	if (!shrinker->nr_deferred)
-		return;
-	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
-		unregister_memcg_shrinker(shrinker);
 	down_write(&shrinker_rwsem);
+	if (!(shrinker->flags & SHRINKER_REGISTERED)) {
+		up_write(&shrinker_rwsem);
+		return;
+	}
 	list_del(&shrinker->list);
+	shrinker->flags &= ~SHRINKER_REGISTERED;
 	up_write(&shrinker_rwsem);
+
+	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
+		unregister_memcg_shrinker(shrinker);
 	kfree(shrinker->nr_deferred);
 	shrinker->nr_deferred = NULL;
 }

From patchwork Mon Dec 14 22:37:18 2020
[73.93.239.127]) by smtp.gmail.com with ESMTPSA id d4sm20610758pfo.127.2020.12.14.14.37.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Dec 2020 14:37:51 -0800 (PST) From: Yang Shi To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [v2 PATCH 5/9] mm: memcontrol: add per memcg shrinker nr_deferred Date: Mon, 14 Dec 2020 14:37:18 -0800 Message-Id: <20201214223722.232537-6-shy828301@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201214223722.232537-1-shy828301@gmail.com> References: <20201214223722.232537-1-shy828301@gmail.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Currently the number of deferred objects are per shrinker, but some slabs, for example, vfs inode/dentry cache are per memcg, this would result in poor isolation among memcgs. The deferred objects typically are generated by __GFP_NOFS allocations, one memcg with excessive __GFP_NOFS allocations may blow up deferred objects, then other innocent memcgs may suffer from over shrink, excessive reclaim latency, etc. For example, two workloads run in memcgA and memcgB respectively, workload in B is vfs heavy workload. Workload in A generates excessive deferred objects, then B's vfs cache might be hit heavily (drop half of caches) by B's limit reclaim or global reclaim. We observed this hit in our production environment which was running vfs heavy workload shown as the below tracing log: <...>-409454 [016] .... 
28286961.747146: mm_shrink_slab_start: super_cache_scan+0x0/0x1a0
ffff9a83046f3458: nid: 1 objects to shrink 3641681686040 gfp_flags
GFP_HIGHUSER_MOVABLE|__GFP_ZERO pgs_scanned 1 lru_pgs 15721
cache items 246404277 delta 31345 total_scan 123202138

<...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end:
super_cache_scan+0x0/0x1a0 ffff9a83046f3458: nid: 1 unused scan count
3641681686040 new scan count 3641798379189 total_scan 602
last shrinker return val 123186855

The vfs cache to page cache ratio was 10:1 on this machine, and half of the
caches were dropped. This also caused a significant amount of page cache to
be dropped due to inode eviction.

Making nr_deferred per memcg for memcg aware shrinkers solves the unfairness
and brings better isolation. When memcg is not enabled (!CONFIG_MEMCG, or
memcg disabled), the shrinker's own nr_deferred is used; non memcg aware
shrinkers use the shrinker's nr_deferred all the time.

Signed-off-by: Yang Shi 
---
 include/linux/memcontrol.h |   9 +++
 mm/memcontrol.c            | 110 ++++++++++++++++++++++++++++++++++++-
 mm/vmscan.c                |   4 ++
 3 files changed, 120 insertions(+), 3 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 922a7f600465..1b343b268359 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -92,6 +92,13 @@ struct lruvec_stat {
 	long count[NR_VM_NODE_STAT_ITEMS];
 };
+
+/* Shrinker::id indexed nr_deferred of memcg-aware shrinkers. */
+struct memcg_shrinker_deferred {
+	struct rcu_head rcu;
+	atomic_long_t nr_deferred[];
+};
+
 /*
  * Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
  * which have elements charged to this memcg.
@@ -119,6 +126,7 @@ struct mem_cgroup_per_node {
 	struct mem_cgroup_reclaim_iter	iter;
 
 	struct memcg_shrinker_map __rcu	*shrinker_map;
+	struct memcg_shrinker_deferred __rcu	*shrinker_deferred;
 
 	struct rb_node		tree_node;	/* RB tree node */
 	unsigned long		usage_in_excess;/* Set to the value by which */
@@ -1489,6 +1497,7 @@ static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
 }
 
 extern int memcg_expand_shrinker_maps(int new_id);
+extern int memcg_expand_shrinker_deferred(int new_id);
 
 extern void memcg_set_shrinker_bit(struct mem_cgroup *memcg,
 				   int nid, int shrinker_id);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3d4ddbb84a01..321d1818ce3d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -394,14 +394,20 @@ DEFINE_STATIC_KEY_FALSE(memcg_kmem_enabled_key);
 EXPORT_SYMBOL(memcg_kmem_enabled_key);
 #endif
 
-/* It is only can be changed with holding shrinker_rwsem exclusively */
+/* They are only can be changed with holding shrinker_rwsem exclusively */
 static int memcg_shrinker_map_size;
+static int memcg_shrinker_deferred_size;
 
 static void memcg_free_shrinker_map_rcu(struct rcu_head *head)
 {
 	kvfree(container_of(head, struct memcg_shrinker_map, rcu));
 }
 
+static void memcg_free_shrinker_deferred_rcu(struct rcu_head *head)
+{
+	kvfree(container_of(head, struct memcg_shrinker_deferred, rcu));
+}
+
 static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg,
 					 int size, int old_size)
 {
@@ -430,6 +436,34 @@ static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg,
 	return 0;
 }
 
+static int memcg_expand_one_shrinker_deferred(struct mem_cgroup *memcg,
+					      int size, int old_size)
+{
+	struct memcg_shrinker_deferred *new, *old;
+	int nid;
+
+	for_each_node(nid) {
+		old = rcu_dereference_protected(
+			mem_cgroup_nodeinfo(memcg, nid)->shrinker_deferred, true);
+		/* Not yet online memcg */
+		if (!old)
+			return 0;
+
+		new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, nid);
+		if (!new)
+			return -ENOMEM;
+
+		/* Copy all old values, and clear all new ones */
+		memcpy((void *)new->nr_deferred, (void *)old->nr_deferred, old_size);
+		memset((void *)new->nr_deferred + old_size, 0, size - old_size);
+
+		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_deferred, new);
+		call_rcu(&old->rcu, memcg_free_shrinker_deferred_rcu);
+	}
+
+	return 0;
+}
+
 static void memcg_free_shrinker_maps(struct mem_cgroup *memcg)
 {
 	struct mem_cgroup_per_node *pn;
@@ -448,6 +482,21 @@ static void memcg_free_shrinker_maps(struct mem_cgroup *memcg)
 	}
 }
 
+static void memcg_free_shrinker_deferred(struct mem_cgroup *memcg)
+{
+	struct mem_cgroup_per_node *pn;
+	struct memcg_shrinker_deferred *deferred;
+	int nid;
+
+	for_each_node(nid) {
+		pn = mem_cgroup_nodeinfo(memcg, nid);
+		deferred = rcu_dereference_protected(pn->shrinker_deferred, true);
+		if (deferred)
+			kvfree(deferred);
+		rcu_assign_pointer(pn->shrinker_deferred, NULL);
+	}
+}
+
 static int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg)
 {
 	struct memcg_shrinker_map *map;
@@ -472,6 +521,27 @@ static int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg)
 	return ret;
 }
 
+static int memcg_alloc_shrinker_deferred(struct mem_cgroup *memcg)
+{
+	struct memcg_shrinker_deferred *deferred;
+	int nid, size, ret = 0;
+
+	down_read(&shrinker_rwsem);
+	size = memcg_shrinker_deferred_size;
+	for_each_node(nid) {
+		deferred = kvzalloc_node(sizeof(*deferred) + size, GFP_KERNEL, nid);
+		if (!deferred) {
+			memcg_free_shrinker_deferred(memcg);
+			ret = -ENOMEM;
+			break;
+		}
+		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_deferred, deferred);
+	}
+	up_read(&shrinker_rwsem);
+
+	return ret;
+}
+
 int memcg_expand_shrinker_maps(int new_id)
 {
 	int size, old_size, ret = 0;
@@ -501,6 +571,33 @@ int memcg_expand_shrinker_maps(int new_id)
 	return ret;
 }
 
+int memcg_expand_shrinker_deferred(int new_id)
+{
+	int size, old_size, ret = 0;
+	struct mem_cgroup *memcg;
+
+	size = (new_id + 1) * sizeof(atomic_long_t);
+	old_size = memcg_shrinker_deferred_size;
+	if (size <= old_size)
+		return 0;
+
+	if (!root_mem_cgroup)
+		goto out;
+
+	for_each_mem_cgroup(memcg) {
+		ret = memcg_expand_one_shrinker_deferred(memcg, size, old_size);
+		if (ret) {
+			mem_cgroup_iter_break(NULL, memcg);
+			goto out;
+		}
+	}
+out:
+	if (!ret)
+		memcg_shrinker_deferred_size = size;
+
+	return ret;
+}
+
 void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
 {
 	if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) {
@@ -5397,8 +5494,8 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
 
 	/*
-	 * A memcg must be visible for memcg_expand_shrinker_maps()
-	 * by the time the maps are allocated. So, we allocate maps
+	 * A memcg must be visible for memcg_expand_shrinker_{maps|deferred}()
+	 * by the time the maps are allocated. So, we allocate maps and deferred
 	 * here, when for_each_mem_cgroup() can't skip it.
 	 */
 	if (memcg_alloc_shrinker_maps(memcg)) {
@@ -5406,6 +5503,12 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 		return -ENOMEM;
 	}
 
+	if (memcg_alloc_shrinker_deferred(memcg)) {
+		memcg_free_shrinker_maps(memcg);
+		mem_cgroup_id_remove(memcg);
+		return -ENOMEM;
+	}
+
 	/*
 	 * Barrier for CSS_ONLINE, so that shrink_slab_memcg() sees shirnker_maps
 	 * and shrinker_deferred before CSS_ONLINE.
It pairs with the read barrier
@@ -5473,6 +5576,7 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
 	cancel_work_sync(&memcg->high_work);
 	mem_cgroup_remove_from_trees(memcg);
 	memcg_free_shrinker_maps(memcg);
+	memcg_free_shrinker_deferred(memcg);
 	memcg_free_kmem(memcg);
 	mem_cgroup_free(memcg);
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 16c9d2aeeb26..bf34167dd67e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -219,6 +219,10 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker)
 			goto unlock;
 		}
 
+		if (memcg_expand_shrinker_deferred(id)) {
+			idr_remove(&shrinker_idr, id);
+			goto unlock;
+		}
 		shrinker_nr_max = id + 1;
 	}
 	shrinker->id = id;

From patchwork Mon Dec 14 22:37:19 2020
From: Yang Shi 
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com,
 david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com,
 akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [v2 PATCH 6/9] mm: vmscan: use per memcg nr_deferred of shrinker
Date: Mon, 14 Dec 2020 14:37:19 -0800
Message-Id: <20201214223722.232537-7-shy828301@gmail.com>
In-Reply-To: <20201214223722.232537-1-shy828301@gmail.com>
References: <20201214223722.232537-1-shy828301@gmail.com>

Use the per memcg nr_deferred for memcg aware shrinkers. The shrinker's own
nr_deferred will still be used in the following cases: 1.
Non memcg aware shrinkers
2. !CONFIG_MEMCG
3. memcg is disabled by boot parameter

Signed-off-by: Yang Shi 
---
 mm/vmscan.c | 94 ++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 83 insertions(+), 11 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index bf34167dd67e..bce8cf44eca2 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -203,6 +203,12 @@ DECLARE_RWSEM(shrinker_rwsem);
 static DEFINE_IDR(shrinker_idr);
 static int shrinker_nr_max;
 
+static inline bool is_deferred_memcg_aware(struct shrinker *shrinker)
+{
+	return (shrinker->flags & SHRINKER_MEMCG_AWARE) &&
+		!mem_cgroup_disabled();
+}
+
 static int prealloc_memcg_shrinker(struct shrinker *shrinker)
 {
 	int id, ret = -ENOMEM;
@@ -271,7 +277,58 @@ static bool writeback_throttling_sane(struct scan_control *sc)
 #endif
 	return false;
 }
+
+static inline long count_nr_deferred(struct shrinker *shrinker,
+				     struct shrink_control *sc)
+{
+	bool per_memcg_deferred = is_deferred_memcg_aware(shrinker) && sc->memcg;
+	struct memcg_shrinker_deferred *deferred;
+	struct mem_cgroup *memcg = sc->memcg;
+	int nid = sc->nid;
+	int id = shrinker->id;
+	long nr;
+
+	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
+		nid = 0;
+
+	if (per_memcg_deferred) {
+		deferred = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_deferred,
+						     true);
+		nr = atomic_long_xchg(&deferred->nr_deferred[id], 0);
+	} else
+		nr = atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
+
+	return nr;
+}
+
+static inline long set_nr_deferred(long nr, struct shrinker *shrinker,
+				   struct shrink_control *sc)
+{
+	bool per_memcg_deferred = is_deferred_memcg_aware(shrinker) && sc->memcg;
+	struct memcg_shrinker_deferred *deferred;
+	struct mem_cgroup *memcg = sc->memcg;
+	int nid = sc->nid;
+	int id = shrinker->id;
+	long new_nr;
+
+	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
+		nid = 0;
+
+	if (per_memcg_deferred) {
+		deferred = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_deferred,
+						     true);
+		new_nr = atomic_long_add_return(nr, &deferred->nr_deferred[id]);
+	} else
+		new_nr = atomic_long_add_return(nr, &shrinker->nr_deferred[nid]);
+
+	return new_nr;
+}
 #else
+static inline bool is_deferred_memcg_aware(struct shrinker *shrinker)
+{
+	return false;
+}
+
 static int prealloc_memcg_shrinker(struct shrinker *shrinker)
 {
 	return 0;
@@ -290,6 +347,29 @@ static bool writeback_throttling_sane(struct scan_control *sc)
 {
 	return true;
 }
+
+static inline long count_nr_deferred(struct shrinker *shrinker,
+				     struct shrink_control *sc)
+{
+	int nid = sc->nid;
+
+	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
+		nid = 0;
+
+	return atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
+}
+
+static inline long set_nr_deferred(long nr, struct shrinker *shrinker,
+				   struct shrink_control *sc)
+{
+	int nid = sc->nid;
+
+	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
+		nid = 0;
+
+	return atomic_long_add_return(nr,
+				      &shrinker->nr_deferred[nid]);
+}
 #endif
 
 /*
@@ -429,13 +509,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	long freeable;
 	long nr;
 	long new_nr;
-	int nid = shrinkctl->nid;
 	long batch_size = shrinker->batch ? shrinker->batch : SHRINK_BATCH;
 	long scanned = 0, next_deferred;
 
-	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
-		nid = 0;
 
 	freeable = shrinker->count_objects(shrinker, shrinkctl);
 	if (freeable == 0 || freeable == SHRINK_EMPTY)
@@ -446,7 +523,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 * and zero it so that other concurrent shrinker invocations
 	 * don't also do this scanning work.
 	 */
-	nr = atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
+	nr = count_nr_deferred(shrinker, shrinkctl);
 
 	total_scan = nr;
 	if (shrinker->seeks) {
@@ -537,14 +614,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		next_deferred = 0;
 
 	/*
 	 * move the unused scan count back into the shrinker in a
-	 * manner that handles concurrent updates. If we exhausted the
-	 * scan, there is no need to do an update.
+	 * manner that handles concurrent updates.
	 */
-	if (next_deferred > 0)
-		new_nr = atomic_long_add_return(next_deferred,
-						&shrinker->nr_deferred[nid]);
-	else
-		new_nr = atomic_long_read(&shrinker->nr_deferred[nid]);
+	new_nr = set_nr_deferred(next_deferred, shrinker, shrinkctl);
 
 	trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr,
 				 total_scan);
 	return freed;

From patchwork Mon Dec 14 22:37:20 2020
From: Yang Shi 
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com,
 david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com,
 akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [v2 PATCH 7/9] mm: vmscan: don't need allocate shrinker->nr_deferred
 for memcg aware shrinkers
Date: Mon, 14 Dec 2020 14:37:20 -0800
Message-Id: <20201214223722.232537-8-shy828301@gmail.com>
In-Reply-To: <20201214223722.232537-1-shy828301@gmail.com>
References: <20201214223722.232537-1-shy828301@gmail.com>

Now that nr_deferred is available at the per memcg level for memcg aware
shrinkers, there is no need to allocate shrinker->nr_deferred for such
shrinkers anymore.
Signed-off-by: Yang Shi 
---
 mm/vmscan.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index bce8cf44eca2..8d5bfd818acd 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -420,7 +420,15 @@ unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone
  */
 int prealloc_shrinker(struct shrinker *shrinker)
 {
-	unsigned int size = sizeof(*shrinker->nr_deferred);
+	unsigned int size;
+
+	if (is_deferred_memcg_aware(shrinker)) {
+		if (prealloc_memcg_shrinker(shrinker))
+			return -ENOMEM;
+		return 0;
+	}
+
+	size = sizeof(*shrinker->nr_deferred);
 
 	if (shrinker->flags & SHRINKER_NUMA_AWARE)
 		size *= nr_node_ids;
@@ -429,26 +437,18 @@ int prealloc_shrinker(struct shrinker *shrinker)
 	if (!shrinker->nr_deferred)
 		return -ENOMEM;
 
-	if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
-		if (prealloc_memcg_shrinker(shrinker))
-			goto free_deferred;
-	}
-
 	return 0;
-
-free_deferred:
-	kfree(shrinker->nr_deferred);
-	shrinker->nr_deferred = NULL;
-	return -ENOMEM;
 }
 
 void free_prealloced_shrinker(struct shrinker *shrinker)
 {
-	if (!shrinker->nr_deferred)
+	if (is_deferred_memcg_aware(shrinker)) {
+		unregister_memcg_shrinker(shrinker);
 		return;
+	}
 
-	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
-		unregister_memcg_shrinker(shrinker);
+	if (!shrinker->nr_deferred)
+		return;
 
 	kfree(shrinker->nr_deferred);
 	shrinker->nr_deferred = NULL;

From patchwork Mon Dec 14 22:37:21 2020
From: Yang Shi 
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com,
 david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com,
 akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [v2 PATCH 8/9] mm: memcontrol: reparent nr_deferred when memcg offline
Date: Mon, 14 Dec 2020 14:37:21 -0800
Message-Id: <20201214223722.232537-9-shy828301@gmail.com>
In-Reply-To: <20201214223722.232537-1-shy828301@gmail.com>
References: <20201214223722.232537-1-shy828301@gmail.com>

Now that the shrinkers' nr_deferred is per memcg for memcg aware shrinkers,
add the child memcg's counts to the parent's corresponding nr_deferred when
the memcg goes offline.

Signed-off-by: Yang Shi 
---
 include/linux/shrinker.h |  4 ++++
 mm/memcontrol.c          | 24 ++++++++++++++++++++++++
 mm/vmscan.c              |  2 +-
 3 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 1eac79ce57d4..85cfc910dde4 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -78,6 +78,10 @@ struct shrinker {
 };
 #define DEFAULT_SEEKS 2 /* A good number if you don't know better.
 */
+
+#ifdef CONFIG_MEMCG
+extern int shrinker_nr_max;
+#endif
+
 /* Flags */
 #define SHRINKER_REGISTERED	(1 << 0)
 #define SHRINKER_NUMA_AWARE	(1 << 1)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 321d1818ce3d..1f191a15bee1 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -59,6 +59,7 @@
 #include
 #include
 #include
+#include
 #include "internal.h"
 #include
 #include
@@ -612,6 +613,28 @@ void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
 	}
 }
 
+static void memcg_reparent_shrinker_deferred(struct mem_cgroup *memcg)
+{
+	int i, nid;
+	long nr;
+	struct mem_cgroup *parent;
+	struct memcg_shrinker_deferred *child_deferred, *parent_deferred;
+
+	parent = parent_mem_cgroup(memcg);
+	if (!parent)
+		parent = root_mem_cgroup;
+
+	for_each_node(nid) {
+		child_deferred = memcg->nodeinfo[nid]->shrinker_deferred;
+		parent_deferred = parent->nodeinfo[nid]->shrinker_deferred;
+		for (i = 0; i < shrinker_nr_max; i++) {
+			nr = atomic_long_read(&child_deferred->nr_deferred[i]);
+			atomic_long_add(nr,
+					&parent_deferred->nr_deferred[i]);
+		}
+	}
+}
+
 /**
  * mem_cgroup_css_from_page - css of the memcg associated with a page
  * @page: page of interest
@@ -5543,6 +5566,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
 	page_counter_set_low(&memcg->memory, 0);
 
 	memcg_offline_kmem(memcg);
+	memcg_reparent_shrinker_deferred(memcg);
 	wb_memcg_offline(memcg);
 
 	drain_all_stock(memcg);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8d5bfd818acd..693a41e89969 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -201,7 +201,7 @@ DECLARE_RWSEM(shrinker_rwsem);
 #define SHRINKER_REGISTERING ((struct shrinker *)~0UL)
 
 static DEFINE_IDR(shrinker_idr);
-static int shrinker_nr_max;
+int shrinker_nr_max;
 
 static inline bool is_deferred_memcg_aware(struct shrinker *shrinker)
 {

From patchwork Mon Dec 14 22:37:22 2020
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 11973303
From: Yang Shi <shy828301@gmail.com>
To: guro@fb.com, ktkhai@virtuozzo.com, shakeelb@google.com,
 david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com,
 akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [v2 PATCH 9/9] mm: vmscan: shrink deferred objects proportional to priority
Date: Mon, 14 Dec 2020 14:37:22 -0800
Message-Id: <20201214223722.232537-10-shy828301@gmail.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201214223722.232537-1-shy828301@gmail.com>
References: <20201214223722.232537-1-shy828301@gmail.com>
MIME-Version: 1.0

The number of deferred objects might wind up at an absurd value, and then
a single shrink that can make progress ends up scanning, and potentially
emptying, the entire cache. That is bad for sustaining the working set.
So shrink deferred objects proportionally to the reclaim priority, and
cap nr_deferred at twice the number of freeable cache items.
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/vmscan.c | 40 +++++-----------------------------------
 1 file changed, 5 insertions(+), 35 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 693a41e89969..58f4a383f0df 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -525,7 +525,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 */
 	nr = count_nr_deferred(shrinker, shrinkctl);
 
-	total_scan = nr;
 	if (shrinker->seeks) {
 		delta = freeable >> priority;
 		delta *= 4;
@@ -539,37 +538,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		delta = freeable / 2;
 	}
 
+	total_scan = nr >> priority;
 	total_scan += delta;
-	if (total_scan < 0) {
-		pr_err("shrink_slab: %pS negative objects to delete nr=%ld\n",
-		       shrinker->scan_objects, total_scan);
-		total_scan = freeable;
-		next_deferred = nr;
-	} else
-		next_deferred = total_scan;
-
-	/*
-	 * We need to avoid excessive windup on filesystem shrinkers
-	 * due to large numbers of GFP_NOFS allocations causing the
-	 * shrinkers to return -1 all the time. This results in a large
-	 * nr being built up so when a shrink that can do some work
-	 * comes along it empties the entire cache due to nr >>>
-	 * freeable. This is bad for sustaining a working set in
-	 * memory.
-	 *
-	 * Hence only allow the shrinker to scan the entire cache when
-	 * a large delta change is calculated directly.
-	 */
-	if (delta < freeable / 4)
-		total_scan = min(total_scan, freeable / 2);
-
-	/*
-	 * Avoid risking looping forever due to too large nr value:
-	 * never try to free more than twice the estimate number of
-	 * freeable entries.
-	 */
-	if (total_scan > freeable * 2)
-		total_scan = freeable * 2;
+	total_scan = min(total_scan, (2 * freeable));
 
 	trace_mm_shrink_slab_start(shrinker, shrinkctl, nr, freeable, delta,
 				   total_scan, priority);
@@ -608,10 +579,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		cond_resched();
 	}
 
-	if (next_deferred >= scanned)
-		next_deferred -= scanned;
-	else
-		next_deferred = 0;
+	next_deferred = max_t(long, (nr - scanned), 0) + total_scan;
+	next_deferred = min(next_deferred, (2 * freeable));
+
 	/*
 	 * move the unused scan count back into the shrinker in a
 	 * manner that handles concurrent updates.