From patchwork Wed Feb 17 00:13:10 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090769
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v8 PATCH 01/13] mm: vmscan: use nid from shrink_control for tracepoint
Date: Tue, 16 Feb 2021 16:13:10 -0800
Message-Id: <20210217001322.2226796-2-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

The tracepoint's nid should show which node the shrink happens on. The start tracepoint uses the nid from shrinkctl, but the nid may be reset to 0 before the end tracepoint if the shrinker is not NUMA aware, so the trace log can show the shrink starting on one node but ending on another, which is confusing. The following patch will also stop using nid directly in do_shrink_slab(), so this change helps clean up the code as well.
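As an illustration of the inconsistency being fixed, here is a minimal standalone sketch (the struct definitions and the numa_aware field are simplified stand-ins, not the kernel's real shrink_control and shrinker): the local nid used for per-node bookkeeping collapses to 0 for non-NUMA-aware shrinkers, so the end event has to report shrinkctl->nid to stay consistent with the start event.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel structures; not the real definitions. */
struct shrink_control { int nid; };
struct shrinker { bool numa_aware; };

static void do_shrink_slab_sketch(struct shrink_control *sc, struct shrinker *s)
{
	int nid = sc->nid;

	/* Non-NUMA-aware shrinkers keep a single counter, indexed by node 0. */
	if (!s->numa_aware)
		nid = 0;

	printf("start event: nid=%d\n", sc->nid); /* start already logs the requested node */
	/* ... 'nid' is only needed to index the per-node nr_deferred bookkeeping ... */
	printf("end event:   nid=%d\n", sc->nid); /* the fix: log sc->nid, not the local nid */
}

int main(void)
{
	struct shrink_control sc = { .nid = 1 };
	struct shrinker s = { .numa_aware = false };

	do_shrink_slab_sketch(&sc, &s); /* both events now report node 1 */
	return 0;
}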
Acked-by: Vlastimil Babka
Acked-by: Kirill Tkhai
Reviewed-by: Shakeel Butt
Acked-by: Roman Gushchin
Signed-off-by: Yang Shi
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index b1b574ad199d..b512dd5e3a1c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -535,7 +535,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	else
 		new_nr = atomic_long_read(&shrinker->nr_deferred[nid]);
 
-	trace_mm_shrink_slab_end(shrinker, nid, freed, nr, new_nr, total_scan);
+	trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan);
 	return freed;
 }

From patchwork Wed Feb 17 00:13:11 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090771
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v8 PATCH 02/13] mm: vmscan: consolidate shrinker_maps handling code
Date: Tue, 16 Feb 2021 16:13:11 -0800
Message-Id: <20210217001322.2226796-3-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

The shrinker map management is not purely memcg specific; it sits at the intersection of memory cgroups and shrinkers. It is the allocation and assignment of a structure, and the only memcg-specific part is that the map is stored in a memcg structure. So move the shrinker_maps handling code into vmscan.c for tighter integration with the shrinker code, and drop the "memcg_" prefix. There is no functional change.
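For readers new to this code, the structure being moved is a per-memcg, per-node bitmap with one bit per shrinker id; a set bit means the memcg may have objects for that shrinker to reclaim. A rough standalone sketch of that idea (fixed-size array and plain bit arithmetic here; the kernel sizes the map from shrinker_nr_max and uses atomic set_bit/clear_bit under RCU):

#include <limits.h>
#include <stdio.h>
#include <string.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Toy stand-in for the per-memcg, per-node shrinker map. */
struct shrinker_map_sketch {
	unsigned long map[4];
};

static void set_shrinker_bit_sketch(struct shrinker_map_sketch *m, int id)
{
	m->map[id / BITS_PER_LONG] |= 1UL << (id % BITS_PER_LONG);
}

static int shrinker_bit_set(const struct shrinker_map_sketch *m, int id)
{
	return !!(m->map[id / BITS_PER_LONG] & (1UL << (id % BITS_PER_LONG)));
}

int main(void)
{
	struct shrinker_map_sketch m;

	memset(&m, 0, sizeof(m));
	set_shrinker_bit_sketch(&m, 3); /* e.g. a list_lru gained its first item */
	printf("shrinker 3 has work: %d\n", shrinker_bit_set(&m, 3));
	printf("shrinker 7 has work: %d\n", shrinker_bit_set(&m, 7));
	return 0;
}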
Acked-by: Vlastimil Babka Acked-by: Kirill Tkhai Acked-by: Roman Gushchin Reviewed-by: Shakeel Butt Signed-off-by: Yang Shi --- include/linux/memcontrol.h | 11 ++-- mm/huge_memory.c | 4 +- mm/list_lru.c | 6 +- mm/memcontrol.c | 129 +----------------------------------- mm/vmscan.c | 131 ++++++++++++++++++++++++++++++++++++- 5 files changed, 141 insertions(+), 140 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index eeb0b52203e9..1739f17e0939 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1581,10 +1581,9 @@ static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg) return false; } -extern int memcg_expand_shrinker_maps(int new_id); - -extern void memcg_set_shrinker_bit(struct mem_cgroup *memcg, - int nid, int shrinker_id); +int alloc_shrinker_maps(struct mem_cgroup *memcg); +void free_shrinker_maps(struct mem_cgroup *memcg); +void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id); #else #define mem_cgroup_sockets_enabled 0 static inline void mem_cgroup_sk_alloc(struct sock *sk) { }; @@ -1594,8 +1593,8 @@ static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg) return false; } -static inline void memcg_set_shrinker_bit(struct mem_cgroup *memcg, - int nid, int shrinker_id) +static inline void set_shrinker_bit(struct mem_cgroup *memcg, + int nid, int shrinker_id) { } #endif diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 91ca9b103ee5..1c2ee6ecd6cf 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2832,8 +2832,8 @@ void deferred_split_huge_page(struct page *page) ds_queue->split_queue_len++; #ifdef CONFIG_MEMCG if (memcg) - memcg_set_shrinker_bit(memcg, page_to_nid(page), - deferred_split_shrinker.id); + set_shrinker_bit(memcg, page_to_nid(page), + deferred_split_shrinker.id); #endif } spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags); diff --git a/mm/list_lru.c b/mm/list_lru.c index fe230081690b..628030fa5f69 100644 --- a/mm/list_lru.c +++ b/mm/list_lru.c @@ -125,8 +125,8 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item) list_add_tail(item, &l->list); /* Set shrinker bit if the first element was added */ if (!l->nr_items++) - memcg_set_shrinker_bit(memcg, nid, - lru_shrinker_id(lru)); + set_shrinker_bit(memcg, nid, + lru_shrinker_id(lru)); nlru->nr_items++; spin_unlock(&nlru->lock); return true; @@ -548,7 +548,7 @@ static void memcg_drain_list_lru_node(struct list_lru *lru, int nid, if (src->nr_items) { dst->nr_items += src->nr_items; - memcg_set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru)); + set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru)); src->nr_items = 0; } diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 1bdb93ee8e72..f5c9a0d2160b 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -397,129 +397,6 @@ DEFINE_STATIC_KEY_FALSE(memcg_kmem_enabled_key); EXPORT_SYMBOL(memcg_kmem_enabled_key); #endif -static int memcg_shrinker_map_size; -static DEFINE_MUTEX(memcg_shrinker_map_mutex); - -static void memcg_free_shrinker_map_rcu(struct rcu_head *head) -{ - kvfree(container_of(head, struct memcg_shrinker_map, rcu)); -} - -static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg, - int size, int old_size) -{ - struct memcg_shrinker_map *new, *old; - int nid; - - lockdep_assert_held(&memcg_shrinker_map_mutex); - - for_each_node(nid) { - old = rcu_dereference_protected( - mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true); - /* Not yet online memcg */ - if (!old) - return 0; - - new = kvmalloc_node(sizeof(*new) 
+ size, GFP_KERNEL, nid); - if (!new) - return -ENOMEM; - - /* Set all old bits, clear all new bits */ - memset(new->map, (int)0xff, old_size); - memset((void *)new->map + old_size, 0, size - old_size); - - rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, new); - call_rcu(&old->rcu, memcg_free_shrinker_map_rcu); - } - - return 0; -} - -static void memcg_free_shrinker_maps(struct mem_cgroup *memcg) -{ - struct mem_cgroup_per_node *pn; - struct memcg_shrinker_map *map; - int nid; - - if (mem_cgroup_is_root(memcg)) - return; - - for_each_node(nid) { - pn = mem_cgroup_nodeinfo(memcg, nid); - map = rcu_dereference_protected(pn->shrinker_map, true); - kvfree(map); - rcu_assign_pointer(pn->shrinker_map, NULL); - } -} - -static int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg) -{ - struct memcg_shrinker_map *map; - int nid, size, ret = 0; - - if (mem_cgroup_is_root(memcg)) - return 0; - - mutex_lock(&memcg_shrinker_map_mutex); - size = memcg_shrinker_map_size; - for_each_node(nid) { - map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid); - if (!map) { - memcg_free_shrinker_maps(memcg); - ret = -ENOMEM; - break; - } - rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map); - } - mutex_unlock(&memcg_shrinker_map_mutex); - - return ret; -} - -int memcg_expand_shrinker_maps(int new_id) -{ - int size, old_size, ret = 0; - struct mem_cgroup *memcg; - - size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long); - old_size = memcg_shrinker_map_size; - if (size <= old_size) - return 0; - - mutex_lock(&memcg_shrinker_map_mutex); - if (!root_mem_cgroup) - goto unlock; - - for_each_mem_cgroup(memcg) { - if (mem_cgroup_is_root(memcg)) - continue; - ret = memcg_expand_one_shrinker_map(memcg, size, old_size); - if (ret) { - mem_cgroup_iter_break(NULL, memcg); - goto unlock; - } - } -unlock: - if (!ret) - memcg_shrinker_map_size = size; - mutex_unlock(&memcg_shrinker_map_mutex); - return ret; -} - -void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id) -{ - if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) { - struct memcg_shrinker_map *map; - - rcu_read_lock(); - map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map); - /* Pairs with smp mb in shrink_slab() */ - smp_mb__before_atomic(); - set_bit(shrinker_id, map->map); - rcu_read_unlock(); - } -} - /** * mem_cgroup_css_from_page - css of the memcg associated with a page * @page: page of interest @@ -5369,11 +5246,11 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css) struct mem_cgroup *memcg = mem_cgroup_from_css(css); /* - * A memcg must be visible for memcg_expand_shrinker_maps() + * A memcg must be visible for expand_shrinker_maps() * by the time the maps are allocated. So, we allocate maps * here, when for_each_mem_cgroup() can't skip it. 
*/ - if (memcg_alloc_shrinker_maps(memcg)) { + if (alloc_shrinker_maps(memcg)) { mem_cgroup_id_remove(memcg); return -ENOMEM; } @@ -5437,7 +5314,7 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css) vmpressure_cleanup(&memcg->vmpressure); cancel_work_sync(&memcg->high_work); mem_cgroup_remove_from_trees(memcg); - memcg_free_shrinker_maps(memcg); + free_shrinker_maps(memcg); memcg_free_kmem(memcg); mem_cgroup_free(memcg); } diff --git a/mm/vmscan.c b/mm/vmscan.c index b512dd5e3a1c..96b08c79f18d 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -185,6 +185,131 @@ static LIST_HEAD(shrinker_list); static DECLARE_RWSEM(shrinker_rwsem); #ifdef CONFIG_MEMCG + +static int memcg_shrinker_map_size; +static DEFINE_MUTEX(memcg_shrinker_map_mutex); + +static void free_shrinker_map_rcu(struct rcu_head *head) +{ + kvfree(container_of(head, struct memcg_shrinker_map, rcu)); +} + +static int expand_one_shrinker_map(struct mem_cgroup *memcg, + int size, int old_size) +{ + struct memcg_shrinker_map *new, *old; + int nid; + + lockdep_assert_held(&memcg_shrinker_map_mutex); + + for_each_node(nid) { + old = rcu_dereference_protected( + mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true); + /* Not yet online memcg */ + if (!old) + return 0; + + new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, nid); + if (!new) + return -ENOMEM; + + /* Set all old bits, clear all new bits */ + memset(new->map, (int)0xff, old_size); + memset((void *)new->map + old_size, 0, size - old_size); + + rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, new); + call_rcu(&old->rcu, free_shrinker_map_rcu); + } + + return 0; +} + +void free_shrinker_maps(struct mem_cgroup *memcg) +{ + struct mem_cgroup_per_node *pn; + struct memcg_shrinker_map *map; + int nid; + + if (mem_cgroup_is_root(memcg)) + return; + + for_each_node(nid) { + pn = mem_cgroup_nodeinfo(memcg, nid); + map = rcu_dereference_protected(pn->shrinker_map, true); + kvfree(map); + rcu_assign_pointer(pn->shrinker_map, NULL); + } +} + +int alloc_shrinker_maps(struct mem_cgroup *memcg) +{ + struct memcg_shrinker_map *map; + int nid, size, ret = 0; + + if (mem_cgroup_is_root(memcg)) + return 0; + + mutex_lock(&memcg_shrinker_map_mutex); + size = memcg_shrinker_map_size; + for_each_node(nid) { + map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid); + if (!map) { + free_shrinker_maps(memcg); + ret = -ENOMEM; + break; + } + rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map); + } + mutex_unlock(&memcg_shrinker_map_mutex); + + return ret; +} + +static int expand_shrinker_maps(int new_id) +{ + int size, old_size, ret = 0; + struct mem_cgroup *memcg; + + size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long); + old_size = memcg_shrinker_map_size; + if (size <= old_size) + return 0; + + mutex_lock(&memcg_shrinker_map_mutex); + if (!root_mem_cgroup) + goto unlock; + + memcg = mem_cgroup_iter(NULL, NULL, NULL); + do { + if (mem_cgroup_is_root(memcg)) + continue; + ret = expand_one_shrinker_map(memcg, size, old_size); + if (ret) { + mem_cgroup_iter_break(NULL, memcg); + goto unlock; + } + } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL); +unlock: + if (!ret) + memcg_shrinker_map_size = size; + mutex_unlock(&memcg_shrinker_map_mutex); + return ret; +} + +void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id) +{ + if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) { + struct memcg_shrinker_map *map; + + rcu_read_lock(); + map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map); + /* Pairs with smp mb in 
shrink_slab() */ + smp_mb__before_atomic(); + set_bit(shrinker_id, map->map); + rcu_read_unlock(); + } +} + /* * We allow subsystems to populate their shrinker-related * LRU lists before register_shrinker_prepared() is called @@ -212,7 +337,7 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker) goto unlock; if (id >= shrinker_nr_max) { - if (memcg_expand_shrinker_maps(id)) { + if (expand_shrinker_maps(id)) { idr_remove(&shrinker_idr, id); goto unlock; } @@ -589,7 +714,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, * case, we invoke the shrinker one more time and reset * the bit if it reports that it is not empty anymore. * The memory barrier here pairs with the barrier in - * memcg_set_shrinker_bit(): + * set_shrinker_bit(): * * list_lru_add() shrink_slab_memcg() * list_add_tail() clear_bit() @@ -601,7 +726,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, if (ret == SHRINK_EMPTY) ret = 0; else - memcg_set_shrinker_bit(memcg, nid, i); + set_shrinker_bit(memcg, nid, i); } freed += ret;

From patchwork Wed Feb 17 00:13:12 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090773
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v8 PATCH 03/13] mm: vmscan: use shrinker_rwsem to protect shrinker_maps allocation
Date: Tue, 16 Feb 2021 16:13:12 -0800
Message-Id: <20210217001322.2226796-4-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

Since memcg_shrinker_map_size can only be changed while shrinker_rwsem is held exclusively, the read side can be protected by holding the read lock, so a dedicated mutex is superfluous. Kirill Tkhai suggested using the write lock because:

* We want the assignment to shrinker_maps to be visible to shrink_slab_memcg().
* shrink_slab_memcg() dereferences the maps with rcu_dereference_protected(); if alloc_shrinker_maps() took only the read lock, that dereference would not actually be protected.
* The read lock would make alloc_shrinker_info() racy against memory allocation failure: alloc_shrinker_info()->free_shrinker_info() could free the memory right after shrink_slab_memcg() dereferenced it. One may argue that shrink_slab_memcg()->mem_cgroup_online() protects against this, but that is not an invariant we want to have to keep in mind in the future, since it hurts modularity.

A test with a heavy paging workload did not show the write lock making things worse.
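To illustrate the locking scheme (a toy userspace model only: a pthread rwlock stands in for shrinker_rwsem and a plain pointer for the per-node shrinker map), allocation and assignment happen under the write lock, while the reclaim path, which already holds the lock for read, can safely dereference the pointer:

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t shrinker_rwsem = PTHREAD_RWLOCK_INITIALIZER;
static int *shrinker_map;	/* stands in for memcg->nodeinfo[nid]->shrinker_map */

static void alloc_shrinker_maps_sketch(int *new_map)
{
	pthread_rwlock_wrlock(&shrinker_rwsem);	/* was a dedicated mutex before this patch */
	shrinker_map = new_map;
	pthread_rwlock_unlock(&shrinker_rwsem);
}

static void shrink_slab_memcg_sketch(void)
{
	pthread_rwlock_rdlock(&shrinker_rwsem);	/* the real reclaim path already holds this for read */
	if (shrinker_map)
		printf("first slot: %d\n", shrinker_map[0]);
	pthread_rwlock_unlock(&shrinker_rwsem);
}

int main(void)
{
	static int map[1] = { 42 };

	alloc_shrinker_maps_sketch(map);
	shrink_slab_memcg_sketch();
	return 0;
}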
Acked-by: Vlastimil Babka
Acked-by: Kirill Tkhai
Acked-by: Roman Gushchin
Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
---
 mm/vmscan.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c index 96b08c79f18d..543af6ec1e02 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -187,7 +187,6 @@ static DECLARE_RWSEM(shrinker_rwsem); #ifdef CONFIG_MEMCG static int memcg_shrinker_map_size; -static DEFINE_MUTEX(memcg_shrinker_map_mutex); static void free_shrinker_map_rcu(struct rcu_head *head) { @@ -200,8 +199,6 @@ static int expand_one_shrinker_map(struct mem_cgroup *memcg, struct memcg_shrinker_map *new, *old; int nid; - lockdep_assert_held(&memcg_shrinker_map_mutex); - for_each_node(nid) { old = rcu_dereference_protected( mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true); @@ -249,7 +246,7 @@ int alloc_shrinker_maps(struct mem_cgroup *memcg) if (mem_cgroup_is_root(memcg)) return 0; - mutex_lock(&memcg_shrinker_map_mutex); + down_write(&shrinker_rwsem); size = memcg_shrinker_map_size; for_each_node(nid) { map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid); @@ -260,7 +257,7 @@ int alloc_shrinker_maps(struct mem_cgroup *memcg) } rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map); } - mutex_unlock(&memcg_shrinker_map_mutex); + up_write(&shrinker_rwsem); return ret; } @@ -275,9 +272,10 @@ static int expand_shrinker_maps(int new_id) if (size <= old_size) return 0; - mutex_lock(&memcg_shrinker_map_mutex); if (!root_mem_cgroup) - goto unlock; + goto out; + + lockdep_assert_held(&shrinker_rwsem); memcg = mem_cgroup_iter(NULL, NULL, NULL); do { @@ -286,13 +284,13 @@ static int expand_shrinker_maps(int new_id) ret = expand_one_shrinker_map(memcg, size, old_size); if (ret) { mem_cgroup_iter_break(NULL, memcg); - goto unlock; + goto out; } } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL); -unlock: +out: if (!ret) memcg_shrinker_map_size = size; - mutex_unlock(&memcg_shrinker_map_mutex); + return ret; }

From patchwork Wed Feb 17 00:13:13 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090775
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v8 PATCH 04/13] mm: vmscan: remove memcg_shrinker_map_size
Date: Tue, 16 Feb 2021 16:13:13 -0800
Message-Id: <20210217001322.2226796-5-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

Both memcg_shrinker_map_size and shrinker_nr_max are maintained, but the map size can be calculated from shrinker_nr_max, so keeping both is unnecessary. Remove memcg_shrinker_map_size, since shrinker_nr_max is also used when iterating the bitmap.
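The relationship the patch relies on is just the bitmap rounding below; a small standalone check of that arithmetic, mirroring the shrinker_map_size() helper the diff introduces (DIV_ROUND_UP and BITS_PER_LONG are defined locally for the example):

#include <limits.h>
#include <stdio.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Same calculation the patch centralizes in shrinker_map_size(). */
static size_t shrinker_map_size(int nr_items)
{
	return DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long);
}

int main(void)
{
	int nr_items[] = { 1, 64, 65, 192 };
	size_t i;

	for (i = 0; i < sizeof(nr_items) / sizeof(nr_items[0]); i++)
		printf("nr_items=%3d -> map bytes=%zu\n",
		       nr_items[i], shrinker_map_size(nr_items[i]));
	return 0;	/* on a 64-bit machine: 8, 8, 16, 24 */
}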
Acked-by: Kirill Tkhai
Acked-by: Roman Gushchin
Acked-by: Vlastimil Babka
Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
---
 mm/vmscan.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c index 543af6ec1e02..2e753c2516fa 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -185,8 +185,12 @@ static LIST_HEAD(shrinker_list); static DECLARE_RWSEM(shrinker_rwsem); #ifdef CONFIG_MEMCG +static int shrinker_nr_max; -static int memcg_shrinker_map_size; +static inline int shrinker_map_size(int nr_items) +{ + return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long)); +} static void free_shrinker_map_rcu(struct rcu_head *head) { @@ -247,7 +251,7 @@ int alloc_shrinker_maps(struct mem_cgroup *memcg) return 0; down_write(&shrinker_rwsem); - size = memcg_shrinker_map_size; + size = shrinker_map_size(shrinker_nr_max); for_each_node(nid) { map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid); if (!map) { @@ -265,12 +269,13 @@ int alloc_shrinker_maps(struct mem_cgroup *memcg) static int expand_shrinker_maps(int new_id) { int size, old_size, ret = 0; + int new_nr_max = new_id + 1; struct mem_cgroup *memcg; - size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long); - old_size = memcg_shrinker_map_size; + size = shrinker_map_size(new_nr_max); + old_size = shrinker_map_size(shrinker_nr_max); if (size <= old_size) - return 0; + goto out; if (!root_mem_cgroup) goto out; @@ -289,7 +294,7 @@ static int expand_shrinker_maps(int new_id) } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL); out: if (!ret) - memcg_shrinker_map_size = size; + shrinker_nr_max = new_nr_max; return ret; } @@ -322,7 +327,6 @@ void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id) #define SHRINKER_REGISTERING ((struct shrinker *)~0UL) static DEFINE_IDR(shrinker_idr); -static int shrinker_nr_max; static int prealloc_memcg_shrinker(struct shrinker *shrinker) { @@ -339,8 +343,6 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker) idr_remove(&shrinker_idr, id); goto unlock; } - - shrinker_nr_max = id + 1; } shrinker->id = id; ret = 0;

From patchwork Wed Feb 17 00:13:14 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090777
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v8 PATCH 05/13] mm: vmscan: use kvfree_rcu instead of call_rcu
Date: Tue, 16 Feb 2021 16:13:14 -0800
Message-Id: <20210217001322.2226796-6-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

Use kvfree_rcu() instead of call_rcu() to free the old shrinker_maps; a dedicated call_rcu() callback is no longer needed.
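Condensed from the diff that follows, the change replaces a hand-written RCU callback with kvfree_rcu(), which frees the object after a grace period without a dedicated callback (kernel-style excerpt for illustration only, not compilable on its own):

/* Before: a dedicated callback plus call_rcu() */
static void free_shrinker_map_rcu(struct rcu_head *head)
{
	kvfree(container_of(head, struct memcg_shrinker_map, rcu));
}
	...
	call_rcu(&old->rcu, free_shrinker_map_rcu);

/* After: one call, no callback to maintain */
	kvfree_rcu(old);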
Signed-off-by: Yang Shi
Acked-by: Roman Gushchin
Acked-by: Kirill Tkhai
Reviewed-by: Shakeel Butt
---
 mm/vmscan.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c index 2e753c2516fa..c2a309acd86b 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -192,11 +192,6 @@ static inline int shrinker_map_size(int nr_items) return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long)); } -static void free_shrinker_map_rcu(struct rcu_head *head) -{ - kvfree(container_of(head, struct memcg_shrinker_map, rcu)); -} - static int expand_one_shrinker_map(struct mem_cgroup *memcg, int size, int old_size) { @@ -219,7 +214,7 @@ static int expand_one_shrinker_map(struct mem_cgroup *memcg, memset((void *)new->map + old_size, 0, size - old_size); rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, new); - call_rcu(&old->rcu, free_shrinker_map_rcu); + kvfree_rcu(old); } return 0;

From patchwork Wed Feb 17 00:13:15 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090779
From: Yang Shi
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v8 PATCH 06/13] mm: memcontrol: rename shrinker_map to shrinker_info
Date: Tue, 16 Feb 2021 16:13:15 -0800
Message-Id: <20210217001322.2226796-7-shy828301@gmail.com>
In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com>
References: <20210217001322.2226796-1-shy828301@gmail.com>

The following patch is going to add nr_deferred into shrinker_map; after that change shrinker_map will no longer contain only the map, so rename it to "memcg_shrinker_info". This should make the patch that adds nr_deferred cleaner and more readable, and make review easier. Also remove the "memcg_" prefix.

Acked-by: Vlastimil Babka
Acked-by: Kirill Tkhai
Acked-by: Roman Gushchin
Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
---
 include/linux/memcontrol.h |  8 +++---
 mm/memcontrol.c            |  6 ++--
 mm/vmscan.c                | 58 +++++++++++++++++++-------------------
 3 files changed, 36 insertions(+), 36 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 1739f17e0939..4c9253896e25 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -96,7 +96,7 @@ struct lruvec_stat { * Bitmap of shrinker::id corresponding to memcg-aware shrinkers, * which have elements charged to this memcg. */ -struct memcg_shrinker_map { +struct shrinker_info { struct rcu_head rcu; unsigned long map[]; }; @@ -118,7 +118,7 @@ struct mem_cgroup_per_node { struct mem_cgroup_reclaim_iter iter; - struct memcg_shrinker_map __rcu *shrinker_map; + struct shrinker_info __rcu *shrinker_info; struct rb_node tree_node; /* RB tree node */ unsigned long usage_in_excess;/* Set to the value by which */ @@ -1581,8 +1581,8 @@ static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg) return false; } -int alloc_shrinker_maps(struct mem_cgroup *memcg); -void free_shrinker_maps(struct mem_cgroup *memcg); +int alloc_shrinker_info(struct mem_cgroup *memcg); +void free_shrinker_info(struct mem_cgroup *memcg); void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id); #else #define mem_cgroup_sockets_enabled 0 diff --git a/mm/memcontrol.c b/mm/memcontrol.c index f5c9a0d2160b..f64ad0d044d9 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -5246,11 +5246,11 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css) struct mem_cgroup *memcg = mem_cgroup_from_css(css); /* - * A memcg must be visible for expand_shrinker_maps() + * A memcg must be visible for expand_shrinker_info() * by the time the maps are allocated.
So, we allocate maps * here, when for_each_mem_cgroup() can't skip it. */ - if (alloc_shrinker_maps(memcg)) { + if (alloc_shrinker_info(memcg)) { mem_cgroup_id_remove(memcg); return -ENOMEM; } @@ -5314,7 +5314,7 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css) vmpressure_cleanup(&memcg->vmpressure); cancel_work_sync(&memcg->high_work); mem_cgroup_remove_from_trees(memcg); - free_shrinker_maps(memcg); + free_shrinker_info(memcg); memcg_free_kmem(memcg); mem_cgroup_free(memcg); } diff --git a/mm/vmscan.c b/mm/vmscan.c index c2a309acd86b..c94861a3ea3e 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -192,15 +192,15 @@ static inline int shrinker_map_size(int nr_items) return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long)); } -static int expand_one_shrinker_map(struct mem_cgroup *memcg, - int size, int old_size) +static int expand_one_shrinker_info(struct mem_cgroup *memcg, + int size, int old_size) { - struct memcg_shrinker_map *new, *old; + struct shrinker_info *new, *old; int nid; for_each_node(nid) { old = rcu_dereference_protected( - mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true); + mem_cgroup_nodeinfo(memcg, nid)->shrinker_info, true); /* Not yet online memcg */ if (!old) return 0; @@ -213,17 +213,17 @@ static int expand_one_shrinker_map(struct mem_cgroup *memcg, memset(new->map, (int)0xff, old_size); memset((void *)new->map + old_size, 0, size - old_size); - rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, new); + rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, new); kvfree_rcu(old); } return 0; } -void free_shrinker_maps(struct mem_cgroup *memcg) +void free_shrinker_info(struct mem_cgroup *memcg) { struct mem_cgroup_per_node *pn; - struct memcg_shrinker_map *map; + struct shrinker_info *info; int nid; if (mem_cgroup_is_root(memcg)) @@ -231,15 +231,15 @@ void free_shrinker_maps(struct mem_cgroup *memcg) for_each_node(nid) { pn = mem_cgroup_nodeinfo(memcg, nid); - map = rcu_dereference_protected(pn->shrinker_map, true); - kvfree(map); - rcu_assign_pointer(pn->shrinker_map, NULL); + info = rcu_dereference_protected(pn->shrinker_info, true); + kvfree(info); + rcu_assign_pointer(pn->shrinker_info, NULL); } } -int alloc_shrinker_maps(struct mem_cgroup *memcg) +int alloc_shrinker_info(struct mem_cgroup *memcg) { - struct memcg_shrinker_map *map; + struct shrinker_info *info; int nid, size, ret = 0; if (mem_cgroup_is_root(memcg)) @@ -248,20 +248,20 @@ int alloc_shrinker_maps(struct mem_cgroup *memcg) down_write(&shrinker_rwsem); size = shrinker_map_size(shrinker_nr_max); for_each_node(nid) { - map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid); - if (!map) { - free_shrinker_maps(memcg); + info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid); + if (!info) { + free_shrinker_info(memcg); ret = -ENOMEM; break; } - rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map); + rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info); } up_write(&shrinker_rwsem); return ret; } -static int expand_shrinker_maps(int new_id) +static int expand_shrinker_info(int new_id) { int size, old_size, ret = 0; int new_nr_max = new_id + 1; @@ -281,7 +281,7 @@ static int expand_shrinker_maps(int new_id) do { if (mem_cgroup_is_root(memcg)) continue; - ret = expand_one_shrinker_map(memcg, size, old_size); + ret = expand_one_shrinker_info(memcg, size, old_size); if (ret) { mem_cgroup_iter_break(NULL, memcg); goto out; @@ -297,13 +297,13 @@ static int expand_shrinker_maps(int new_id) void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int 
shrinker_id) { if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) { - struct memcg_shrinker_map *map; + struct shrinker_info *info; rcu_read_lock(); - map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map); + info = rcu_dereference(memcg->nodeinfo[nid]->shrinker_info); /* Pairs with smp mb in shrink_slab() */ smp_mb__before_atomic(); - set_bit(shrinker_id, map->map); + set_bit(shrinker_id, info->map); rcu_read_unlock(); } } @@ -334,7 +334,7 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker) goto unlock; if (id >= shrinker_nr_max) { - if (expand_shrinker_maps(id)) { + if (expand_shrinker_info(id)) { idr_remove(&shrinker_idr, id); goto unlock; } @@ -663,7 +663,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg, int priority) { - struct memcg_shrinker_map *map; + struct shrinker_info *info; unsigned long ret, freed = 0; int i; @@ -673,12 +673,12 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, if (!down_read_trylock(&shrinker_rwsem)) return 0; - map = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_map, - true); - if (unlikely(!map)) + info = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info, + true); + if (unlikely(!info)) goto unlock; - for_each_set_bit(i, map->map, shrinker_nr_max) { + for_each_set_bit(i, info->map, shrinker_nr_max) { struct shrink_control sc = { .gfp_mask = gfp_mask, .nid = nid, @@ -689,7 +689,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, shrinker = idr_find(&shrinker_idr, i); if (unlikely(!shrinker || shrinker == SHRINKER_REGISTERING)) { if (!shrinker) - clear_bit(i, map->map); + clear_bit(i, info->map); continue; } @@ -700,7 +700,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, ret = do_shrink_slab(&sc, shrinker, priority); if (ret == SHRINK_EMPTY) { - clear_bit(i, map->map); + clear_bit(i, info->map); /* * After the shrinker reported that it had no objects to * free, but before we cleared the corresponding bit in

From patchwork Wed Feb 17 00:13:16 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12090781
[73.93.239.127]) by smtp.gmail.com with ESMTPSA id y12sm99220pjc.56.2021.02.16.16.13.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 16 Feb 2021 16:13:44 -0800 (PST) From: Yang Shi To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [v8 PATCH 07/13] mm: vmscan: add shrinker_info_protected() helper Date: Tue, 16 Feb 2021 16:13:16 -0800 Message-Id: <20210217001322.2226796-8-shy828301@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com> References: <20210217001322.2226796-1-shy828301@gmail.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The shrinker_info is dereferenced in a couple of places via rcu_dereference_protected with different calling conventions, for example, using mem_cgroup_nodeinfo helper or dereferencing memcg->nodeinfo[nid]->shrinker_info. And the later patch will add more dereference places. So extract the dereference into a helper to make the code more readable. No functional change. Acked-by: Roman Gushchin Acked-by: Kirill Tkhai Acked-by: Vlastimil Babka Signed-off-by: Yang Shi Reviewed-by: Shakeel Butt --- mm/vmscan.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index c94861a3ea3e..fe6e25f46b55 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -192,6 +192,13 @@ static inline int shrinker_map_size(int nr_items) return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long)); } +static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg, + int nid) +{ + return rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info, + lockdep_is_held(&shrinker_rwsem)); +} + static int expand_one_shrinker_info(struct mem_cgroup *memcg, int size, int old_size) { @@ -199,8 +206,7 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg, int nid; for_each_node(nid) { - old = rcu_dereference_protected( - mem_cgroup_nodeinfo(memcg, nid)->shrinker_info, true); + old = shrinker_info_protected(memcg, nid); /* Not yet online memcg */ if (!old) return 0; @@ -231,7 +237,7 @@ void free_shrinker_info(struct mem_cgroup *memcg) for_each_node(nid) { pn = mem_cgroup_nodeinfo(memcg, nid); - info = rcu_dereference_protected(pn->shrinker_info, true); + info = shrinker_info_protected(memcg, nid); kvfree(info); rcu_assign_pointer(pn->shrinker_info, NULL); } @@ -673,8 +679,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, if (!down_read_trylock(&shrinker_rwsem)) return 0; - info = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info, - true); + info = shrinker_info_protected(memcg, nid); if (unlikely(!info)) goto unlock; From patchwork Wed Feb 17 00:13:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yang Shi X-Patchwork-Id: 12090783 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.5 required=3.0 tests=BAYES_00, DKIM_ADSP_CUSTOM_MED,DKIM_INVALID,DKIM_SIGNED,FREEMAIL_FORGED_FROMDOMAIN, FREEMAIL_FROM,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, 
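What shrinker_info_protected() buys is a single spot where the rcu_dereference_protected() call and its locking rule (shrinker_rwsem held) live, instead of repeating ", true" at each caller. A loose userspace analogue of the pattern, one accessor that asserts its lock, is sketched below; pthread_rwlock_t and the rwsem_held flag stand in for the rwsem and lockdep_is_held(), and the struct contents are invented.

#include <assert.h>
#include <pthread.h>
#include <stdio.h>

struct shrinker_info { int nr_items; };

static pthread_rwlock_t shrinker_rwsem = PTHREAD_RWLOCK_INITIALIZER;
static struct shrinker_info *info_ptr;
static int rwsem_held;          /* crude single-threaded stand-in for lockdep */

/* Every dereference goes through one helper that states its locking rule. */
static struct shrinker_info *info_protected(void)
{
        assert(rwsem_held);
        return info_ptr;
}

int main(void)
{
        static struct shrinker_info the_info = { .nr_items = 64 };

        info_ptr = &the_info;

        pthread_rwlock_rdlock(&shrinker_rwsem);
        rwsem_held = 1;
        printf("nr_items = %d\n", info_protected()->nr_items);
        rwsem_held = 0;
        pthread_rwlock_unlock(&shrinker_rwsem);
        return 0;
}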
[73.93.239.127]) by smtp.gmail.com with ESMTPSA id y12sm99220pjc.56.2021.02.16.16.13.45 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 16 Feb 2021 16:13:46 -0800 (PST) From: Yang Shi To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [v8 PATCH 08/13] mm: vmscan: use a new flag to indicate shrinker is registered Date: Tue, 16 Feb 2021 16:13:17 -0800 Message-Id: <20210217001322.2226796-9-shy828301@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com> References: <20210217001322.2226796-1-shy828301@gmail.com> MIME-Version: 1.0 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: BA149E0001AF X-Stat-Signature: ycz7aogpnqhxjnbothptyp53a69hqtfw Received-SPF: none (gmail.com>: No applicable sender policy available) receiver=imf13; identity=mailfrom; envelope-from=""; helo=mail-pg1-f180.google.com; client-ip=209.85.215.180 X-HE-DKIM-Result: pass/pass X-HE-Tag: 1613520826-116131 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Currently registered shrinker is indicated by non-NULL shrinker->nr_deferred. This approach is fine with nr_deferred at the shrinker level, but the following patches will move MEMCG_AWARE shrinkers' nr_deferred to memcg level, so their shrinker->nr_deferred would always be NULL. This would prevent the shrinkers from unregistering correctly. Remove SHRINKER_REGISTERING since we could check if shrinker is registered successfully by the new flag. Acked-by: Kirill Tkhai Acked-by: Vlastimil Babka Signed-off-by: Yang Shi Acked-by: Roman Gushchin Reviewed-by: Shakeel Butt --- include/linux/shrinker.h | 7 ++++--- mm/vmscan.c | 40 +++++++++++++++------------------------- 2 files changed, 19 insertions(+), 28 deletions(-) diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h index 0f80123650e2..1eac79ce57d4 100644 --- a/include/linux/shrinker.h +++ b/include/linux/shrinker.h @@ -79,13 +79,14 @@ struct shrinker { #define DEFAULT_SEEKS 2 /* A good number if you don't know better. */ /* Flags */ -#define SHRINKER_NUMA_AWARE (1 << 0) -#define SHRINKER_MEMCG_AWARE (1 << 1) +#define SHRINKER_REGISTERED (1 << 0) +#define SHRINKER_NUMA_AWARE (1 << 1) +#define SHRINKER_MEMCG_AWARE (1 << 2) /* * It just makes sense when the shrinker is also MEMCG_AWARE for now, * non-MEMCG_AWARE shrinker should not have this flag set. */ -#define SHRINKER_NONSLAB (1 << 2) +#define SHRINKER_NONSLAB (1 << 3) extern int prealloc_shrinker(struct shrinker *shrinker); extern void register_shrinker_prepared(struct shrinker *shrinker); diff --git a/mm/vmscan.c b/mm/vmscan.c index fe6e25f46b55..a1047ea60ecf 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -314,19 +314,6 @@ void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id) } } -/* - * We allow subsystems to populate their shrinker-related - * LRU lists before register_shrinker_prepared() is called - * for the shrinker, since we don't want to impose - * restrictions on their internal registration order. - * In this case shrink_slab_memcg() may find corresponding - * bit is set in the shrinkers map. - * - * This value is used by the function to detect registering - * shrinkers and to skip do_shrink_slab() calls for them. 
- */ -#define SHRINKER_REGISTERING ((struct shrinker *)~0UL) - static DEFINE_IDR(shrinker_idr); static int prealloc_memcg_shrinker(struct shrinker *shrinker) @@ -335,7 +322,7 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker) down_write(&shrinker_rwsem); /* This may call shrinker, so it must use down_read_trylock() */ - id = idr_alloc(&shrinker_idr, SHRINKER_REGISTERING, 0, 0, GFP_KERNEL); + id = idr_alloc(&shrinker_idr, shrinker, 0, 0, GFP_KERNEL); if (id < 0) goto unlock; @@ -358,9 +345,9 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker) BUG_ON(id < 0); - down_write(&shrinker_rwsem); + lockdep_assert_held(&shrinker_rwsem); + idr_remove(&shrinker_idr, id); - up_write(&shrinker_rwsem); } static bool cgroup_reclaim(struct scan_control *sc) @@ -487,8 +474,11 @@ void free_prealloced_shrinker(struct shrinker *shrinker) if (!shrinker->nr_deferred) return; - if (shrinker->flags & SHRINKER_MEMCG_AWARE) + if (shrinker->flags & SHRINKER_MEMCG_AWARE) { + down_write(&shrinker_rwsem); unregister_memcg_shrinker(shrinker); + up_write(&shrinker_rwsem); + } kfree(shrinker->nr_deferred); shrinker->nr_deferred = NULL; @@ -498,10 +488,7 @@ void register_shrinker_prepared(struct shrinker *shrinker) { down_write(&shrinker_rwsem); list_add_tail(&shrinker->list, &shrinker_list); -#ifdef CONFIG_MEMCG - if (shrinker->flags & SHRINKER_MEMCG_AWARE) - idr_replace(&shrinker_idr, shrinker, shrinker->id); -#endif + shrinker->flags |= SHRINKER_REGISTERED; up_write(&shrinker_rwsem); } @@ -521,13 +508,16 @@ EXPORT_SYMBOL(register_shrinker); */ void unregister_shrinker(struct shrinker *shrinker) { - if (!shrinker->nr_deferred) + if (!(shrinker->flags & SHRINKER_REGISTERED)) return; - if (shrinker->flags & SHRINKER_MEMCG_AWARE) - unregister_memcg_shrinker(shrinker); + down_write(&shrinker_rwsem); list_del(&shrinker->list); + shrinker->flags &= ~SHRINKER_REGISTERED; + if (shrinker->flags & SHRINKER_MEMCG_AWARE) + unregister_memcg_shrinker(shrinker); up_write(&shrinker_rwsem); + kfree(shrinker->nr_deferred); shrinker->nr_deferred = NULL; } @@ -692,7 +682,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, struct shrinker *shrinker; shrinker = idr_find(&shrinker_idr, i); - if (unlikely(!shrinker || shrinker == SHRINKER_REGISTERING)) { + if (unlikely(!shrinker || !(shrinker->flags & SHRINKER_REGISTERED))) { if (!shrinker) clear_bit(i, info->map); continue; From patchwork Wed Feb 17 00:13:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yang Shi X-Patchwork-Id: 12090785 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.5 required=3.0 tests=BAYES_00, DKIM_ADSP_CUSTOM_MED,DKIM_INVALID,DKIM_SIGNED,FREEMAIL_FORGED_FROMDOMAIN, FREEMAIL_FROM,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A9FD4C433E6 for ; Wed, 17 Feb 2021 00:13:51 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 51EF564DFF for ; Wed, 17 Feb 2021 00:13:51 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 51EF564DFF Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=gmail.com 
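The net effect of the patch above is that "registered" becomes an explicit SHRINKER_REGISTERED flag bit checked in shrink_slab_memcg(), instead of a magic SHRINKER_REGISTERING pointer parked in the IDR. A minimal userspace model of that check follows; the flag values mirror the new shrinker.h, but the struct layout and the can_run() helper are illustrative only.

#include <stdbool.h>
#include <stdio.h>

#define SHRINKER_REGISTERED     (1 << 0)
#define SHRINKER_NUMA_AWARE     (1 << 1)
#define SHRINKER_MEMCG_AWARE    (1 << 2)

struct shrinker { unsigned int flags; };

/* A concurrent lookup may see a preallocated but not yet registered shrinker. */
static bool can_run(const struct shrinker *s)
{
        return s && (s->flags & SHRINKER_REGISTERED);
}

int main(void)
{
        struct shrinker s = { .flags = SHRINKER_MEMCG_AWARE }; /* preallocated only */

        printf("before register: %s\n", can_run(&s) ? "run" : "skip");
        s.flags |= SHRINKER_REGISTERED;  /* what register_shrinker_prepared() now sets */
        printf("after register:  %s\n", can_run(&s) ? "run" : "skip");
        return 0;
}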
[73.93.239.127]) by smtp.gmail.com with ESMTPSA id y12sm99220pjc.56.2021.02.16.16.13.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 16 Feb 2021 16:13:48 -0800 (PST) From: Yang Shi To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [v8 PATCH 09/13] mm: vmscan: add per memcg shrinker nr_deferred Date: Tue, 16 Feb 2021 16:13:18 -0800 Message-Id: <20210217001322.2226796-10-shy828301@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com> References: <20210217001322.2226796-1-shy828301@gmail.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Currently the number of deferred objects are per shrinker, but some slabs, for example, vfs inode/dentry cache are per memcg, this would result in poor isolation among memcgs. The deferred objects typically are generated by __GFP_NOFS allocations, one memcg with excessive __GFP_NOFS allocations may blow up deferred objects, then other innocent memcgs may suffer from over shrink, excessive reclaim latency, etc. For example, two workloads run in memcgA and memcgB respectively, workload in B is vfs heavy workload. Workload in A generates excessive deferred objects, then B's vfs cache might be hit heavily (drop half of caches) by B's limit reclaim or global reclaim. We observed this hit in our production environment which was running vfs heavy workload shown as the below tracing log: <...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start: super_cache_scan+0x0/0x1a0 ffff9a83046f3458: nid: 1 objects to shrink 3641681686040 gfp_flags GFP_HIGHUSER_MOVABLE|__GFP_ZERO pgs_scanned 1 lru_pgs 15721 cache items 246404277 delta 31345 total_scan 123202138 <...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end: super_cache_scan+0x0/0x1a0 ffff9a83046f3458: nid: 1 unused scan count 3641681686040 new scan count 3641798379189 total_scan 602 last shrinker return val 123186855 The vfs cache and page cache ratio was 10:1 on this machine, and half of caches were dropped. This also resulted in significant amount of page caches were dropped due to inodes eviction. Make nr_deferred per memcg for memcg aware shrinkers would solve the unfairness and bring better isolation. When memcg is not enabled (!CONFIG_MEMCG or memcg disabled), the shrinker's nr_deferred would be used. And non memcg aware shrinkers use shrinker's nr_deferred all the time. Signed-off-by: Yang Shi Acked-by: Roman Gushchin Acked-by: Kirill Tkhai Reviewed-by: Shakeel Butt --- include/linux/memcontrol.h | 7 +++-- mm/vmscan.c | 60 ++++++++++++++++++++++++++------------ 2 files changed, 46 insertions(+), 21 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 4c9253896e25..c457fc7bc631 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -93,12 +93,13 @@ struct lruvec_stat { }; /* - * Bitmap of shrinker::id corresponding to memcg-aware shrinkers, - * which have elements charged to this memcg. + * Bitmap and deferred work of shrinker::id corresponding to memcg-aware + * shrinkers, which have elements charged to this memcg. 
*/ struct shrinker_info { struct rcu_head rcu; - unsigned long map[]; + atomic_long_t *nr_deferred; + unsigned long *map; }; /* diff --git a/mm/vmscan.c b/mm/vmscan.c index a1047ea60ecf..fcb399e18fc3 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -187,11 +187,17 @@ static DECLARE_RWSEM(shrinker_rwsem); #ifdef CONFIG_MEMCG static int shrinker_nr_max; +/* The shrinker_info is expanded in a batch of BITS_PER_LONG */ static inline int shrinker_map_size(int nr_items) { return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long)); } +static inline int shrinker_defer_size(int nr_items) +{ + return (round_up(nr_items, BITS_PER_LONG) * sizeof(atomic_long_t)); +} + static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg, int nid) { @@ -200,10 +206,12 @@ static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg, } static int expand_one_shrinker_info(struct mem_cgroup *memcg, - int size, int old_size) + int map_size, int defer_size, + int old_map_size, int old_defer_size) { struct shrinker_info *new, *old; int nid; + int size = map_size + defer_size; for_each_node(nid) { old = shrinker_info_protected(memcg, nid); @@ -215,9 +223,16 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg, if (!new) return -ENOMEM; - /* Set all old bits, clear all new bits */ - memset(new->map, (int)0xff, old_size); - memset((void *)new->map + old_size, 0, size - old_size); + new->nr_deferred = (atomic_long_t *)(new + 1); + new->map = (void *)new->nr_deferred + defer_size; + + /* map: set all old bits, clear all new bits */ + memset(new->map, (int)0xff, old_map_size); + memset((void *)new->map + old_map_size, 0, map_size - old_map_size); + /* nr_deferred: copy old values, clear all new values */ + memcpy(new->nr_deferred, old->nr_deferred, old_defer_size); + memset((void *)new->nr_deferred + old_defer_size, 0, + defer_size - old_defer_size); rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, new); kvfree_rcu(old); @@ -232,9 +247,6 @@ void free_shrinker_info(struct mem_cgroup *memcg) struct shrinker_info *info; int nid; - if (mem_cgroup_is_root(memcg)) - return; - for_each_node(nid) { pn = mem_cgroup_nodeinfo(memcg, nid); info = shrinker_info_protected(memcg, nid); @@ -247,12 +259,12 @@ int alloc_shrinker_info(struct mem_cgroup *memcg) { struct shrinker_info *info; int nid, size, ret = 0; - - if (mem_cgroup_is_root(memcg)) - return 0; + int map_size, defer_size = 0; down_write(&shrinker_rwsem); - size = shrinker_map_size(shrinker_nr_max); + map_size = shrinker_map_size(shrinker_nr_max); + defer_size = shrinker_defer_size(shrinker_nr_max); + size = map_size + defer_size; for_each_node(nid) { info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid); if (!info) { @@ -260,6 +272,8 @@ int alloc_shrinker_info(struct mem_cgroup *memcg) ret = -ENOMEM; break; } + info->nr_deferred = (atomic_long_t *)(info + 1); + info->map = (void *)info->nr_deferred + defer_size; rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info); } up_write(&shrinker_rwsem); @@ -267,15 +281,21 @@ int alloc_shrinker_info(struct mem_cgroup *memcg) return ret; } +static inline bool need_expand(int nr_max) +{ + return round_up(nr_max, BITS_PER_LONG) > + round_up(shrinker_nr_max, BITS_PER_LONG); +} + static int expand_shrinker_info(int new_id) { - int size, old_size, ret = 0; + int ret = 0; int new_nr_max = new_id + 1; + int map_size, defer_size = 0; + int old_map_size, old_defer_size = 0; struct mem_cgroup *memcg; - size = shrinker_map_size(new_nr_max); - old_size = 
shrinker_map_size(shrinker_nr_max); - if (size <= old_size) + if (!need_expand(new_nr_max)) goto out; if (!root_mem_cgroup) @@ -283,11 +303,15 @@ static int expand_shrinker_info(int new_id) lockdep_assert_held(&shrinker_rwsem); + map_size = shrinker_map_size(new_nr_max); + defer_size = shrinker_defer_size(new_nr_max); + old_map_size = shrinker_map_size(shrinker_nr_max); + old_defer_size = shrinker_defer_size(shrinker_nr_max); + memcg = mem_cgroup_iter(NULL, NULL, NULL); do { - if (mem_cgroup_is_root(memcg)) - continue; - ret = expand_one_shrinker_info(memcg, size, old_size); + ret = expand_one_shrinker_info(memcg, map_size, defer_size, + old_map_size, old_defer_size); if (ret) { mem_cgroup_iter_break(NULL, memcg); goto out; From patchwork Wed Feb 17 00:13:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yang Shi X-Patchwork-Id: 12090787 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.5 required=3.0 tests=BAYES_00, DKIM_ADSP_CUSTOM_MED,DKIM_INVALID,DKIM_SIGNED,FREEMAIL_FORGED_FROMDOMAIN, FREEMAIL_FROM,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B9F4EC433DB for ; Wed, 17 Feb 2021 00:13:53 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 52D4C64DFF for ; Wed, 17 Feb 2021 00:13:53 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 52D4C64DFF Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id A99178D0012; Tue, 16 Feb 2021 19:13:52 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id A23A18D000D; Tue, 16 Feb 2021 19:13:52 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 851A08D0012; Tue, 16 Feb 2021 19:13:52 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0240.hostedemail.com [216.40.44.240]) by kanga.kvack.org (Postfix) with ESMTP id 6DC288D000D for ; Tue, 16 Feb 2021 19:13:52 -0500 (EST) Received: from smtpin23.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 382DC18019577 for ; Wed, 17 Feb 2021 00:13:52 +0000 (UTC) X-FDA: 77825836704.23.DE1362F Received: from mail-pj1-f42.google.com (mail-pj1-f42.google.com [209.85.216.42]) by imf03.hostedemail.com (Postfix) with ESMTP id D0B5DC0001FA for ; Wed, 17 Feb 2021 00:13:49 +0000 (UTC) Received: by mail-pj1-f42.google.com with SMTP id lw17so586320pjb.0 for ; Tue, 16 Feb 2021 16:13:51 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=oR8nKROSIHuwr5r2cjQa0F+3vcVobf/RwbwA1G9gVJ8=; b=RNmWCc1rfbFjfx2dHI/SpqPm15sqeJbOH/dgKEWRRz/ac+32aZzyvvCEFVykQNkzdD /m7kWXif2dDZu3iIeoAvgQuZqeY1rroN8PT/YwHCWP8tGogLm8oMFXt6u55EIr0PnIwX jPL1SyXzaqMA7DZPQ3TFrDn3BNqshOsQpfhAYs6QTrQGFvd0IjxokLIj5xi/04HMsH6A VXShD/Sn/fIJGF9E7klrX2/ORnzVtmtAciePa4ma8fPkW7QTQhDDGwGX+kFDSsTpmdve 
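The interesting mechanics above are in alloc_shrinker_info() and expand_one_shrinker_info(): the bitmap and the nr_deferred array share a single allocation, and the two struct pointers are simply aimed at different offsets of the trailing payload. The sketch below models that pointer arithmetic in plain userspace C; plain long stands in for atomic_long_t, there are no per-node copies, and the sizes are simplified, so it illustrates the layout rather than the kernel structure.

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

#define BITS_PER_LONG   (sizeof(unsigned long) * CHAR_BIT)

struct info {
        long *nr_deferred;      /* per-shrinker-id deferred counts */
        unsigned long *map;     /* per-shrinker-id "has objects" bitmap */
};

static struct info *alloc_info(int nr_items)
{
        size_t defer_size = nr_items * sizeof(long);
        size_t map_size = ((nr_items + BITS_PER_LONG - 1) / BITS_PER_LONG) *
                          sizeof(unsigned long);
        struct info *info = calloc(1, sizeof(*info) + defer_size + map_size);

        if (!info)
                return NULL;
        /* Payload starts right after the struct; the map follows nr_deferred. */
        info->nr_deferred = (long *)(info + 1);
        info->map = (unsigned long *)((char *)info->nr_deferred + defer_size);
        return info;
}

int main(void)
{
        struct info *info = alloc_info(100);

        if (!info)
                return 1;
        info->nr_deferred[7] = 123;
        info->map[0] |= 1UL << 7;
        printf("deferred[7]=%ld bit7=%d\n", info->nr_deferred[7],
               !!(info->map[0] & (1UL << 7)));
        free(info);
        return 0;
}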
qf0uQQbQo0K/dgb6iyNKhyacqR1I3gQGPd97jImtzJ/nYdt7mNqJwl7Y1wPz9NOhq9CT hpOw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=oR8nKROSIHuwr5r2cjQa0F+3vcVobf/RwbwA1G9gVJ8=; b=MSas+bidoB/f4+6A2vFuGV/KlIRSSyhJlhbq020G8qElAtASlyrYh/fdo9t91zW7oQ S0I53J61f90+SoTTy1aIRKsPPmZSNri5VJmohAhyR6q884NULBlCJ/Z43OVKQp+GxyNr DGZEasfuywc/4DAqqZTtKldxiDJZnMu9JXlOEEmt0LMXyQlRcI06oTnlv7A23LBNpGUh 8Ni2ZA8RpuVivB55C5/lH71n9IGcOmfTV0OsPPEovss0T84ThByg4XuV6rX1k2wnDZo+ /nxGOUZC92Cw5plC7lSSWau1L0pv/VO73U+VF2ebxjApmZcDYX3ZEPI6RTYhQx75VuPr ek+A== X-Gm-Message-State: AOAM533fV05IjyOnbUMv1YeQ5KNXKwnZaGdvfaFg8hDJRybd8QxTeEk6 t7SZB6Yucv+tjBux5a5qytY= X-Google-Smtp-Source: ABdhPJynnWV/n3VpQ+hxbtK6YzKqt/IV+8FJESNbPl6F/fso06lkUpZU1U3mBCbgCKNtzSRQdDbSug== X-Received: by 2002:a17:90a:bf02:: with SMTP id c2mr6684811pjs.117.1613520830950; Tue, 16 Feb 2021 16:13:50 -0800 (PST) Received: from localhost.localdomain (c-73-93-239-127.hsd1.ca.comcast.net. [73.93.239.127]) by smtp.gmail.com with ESMTPSA id y12sm99220pjc.56.2021.02.16.16.13.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 16 Feb 2021 16:13:50 -0800 (PST) From: Yang Shi To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [v8 PATCH 10/13] mm: vmscan: use per memcg nr_deferred of shrinker Date: Tue, 16 Feb 2021 16:13:19 -0800 Message-Id: <20210217001322.2226796-11-shy828301@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com> References: <20210217001322.2226796-1-shy828301@gmail.com> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: D0B5DC0001FA X-Stat-Signature: jmtusu1g5oj6y6nrqshrtifimwpis514 Received-SPF: none (gmail.com>: No applicable sender policy available) receiver=imf03; identity=mailfrom; envelope-from=""; helo=mail-pj1-f42.google.com; client-ip=209.85.216.42 X-HE-DKIM-Result: pass/pass X-HE-Tag: 1613520829-285046 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Use per memcg's nr_deferred for memcg aware shrinkers. The shrinker's nr_deferred will be used in the following cases: 1. Non memcg aware shrinkers 2. !CONFIG_MEMCG 3. 
memcg is disabled by boot parameter Signed-off-by: Yang Shi Acked-by: Roman Gushchin Acked-by: Kirill Tkhai Reviewed-by: Shakeel Butt --- mm/vmscan.c | 78 ++++++++++++++++++++++++++++++++++++++++++++--------- 1 file changed, 66 insertions(+), 12 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index fcb399e18fc3..57cbc6bc8a49 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -374,6 +374,24 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker) idr_remove(&shrinker_idr, id); } +static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker, + struct mem_cgroup *memcg) +{ + struct shrinker_info *info; + + info = shrinker_info_protected(memcg, nid); + return atomic_long_xchg(&info->nr_deferred[shrinker->id], 0); +} + +static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker, + struct mem_cgroup *memcg) +{ + struct shrinker_info *info; + + info = shrinker_info_protected(memcg, nid); + return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]); +} + static bool cgroup_reclaim(struct scan_control *sc) { return sc->target_mem_cgroup; @@ -412,6 +430,18 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker) { } +static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker, + struct mem_cgroup *memcg) +{ + return 0; +} + +static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker, + struct mem_cgroup *memcg) +{ + return 0; +} + static bool cgroup_reclaim(struct scan_control *sc) { return false; @@ -423,6 +453,39 @@ static bool writeback_throttling_sane(struct scan_control *sc) } #endif +static long xchg_nr_deferred(struct shrinker *shrinker, + struct shrink_control *sc) +{ + int nid = sc->nid; + + if (!(shrinker->flags & SHRINKER_NUMA_AWARE)) + nid = 0; + + if (sc->memcg && + (shrinker->flags & SHRINKER_MEMCG_AWARE)) + return xchg_nr_deferred_memcg(nid, shrinker, + sc->memcg); + + return atomic_long_xchg(&shrinker->nr_deferred[nid], 0); +} + + +static long add_nr_deferred(long nr, struct shrinker *shrinker, + struct shrink_control *sc) +{ + int nid = sc->nid; + + if (!(shrinker->flags & SHRINKER_NUMA_AWARE)) + nid = 0; + + if (sc->memcg && + (shrinker->flags & SHRINKER_MEMCG_AWARE)) + return add_nr_deferred_memcg(nr, nid, shrinker, + sc->memcg); + + return atomic_long_add_return(nr, &shrinker->nr_deferred[nid]); +} + /* * This misses isolated pages which are not accounted for to save counters. * As the data only determines if reclaim or compaction continues, it is @@ -558,14 +621,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, long freeable; long nr; long new_nr; - int nid = shrinkctl->nid; long batch_size = shrinker->batch ? shrinker->batch : SHRINK_BATCH; long scanned = 0, next_deferred; - if (!(shrinker->flags & SHRINKER_NUMA_AWARE)) - nid = 0; - freeable = shrinker->count_objects(shrinker, shrinkctl); if (freeable == 0 || freeable == SHRINK_EMPTY) return freeable; @@ -575,7 +634,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, * and zero it so that other concurrent shrinker invocations * don't also do this scanning work. */ - nr = atomic_long_xchg(&shrinker->nr_deferred[nid], 0); + nr = xchg_nr_deferred(shrinker, shrinkctl); total_scan = nr; if (shrinker->seeks) { @@ -666,14 +725,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, next_deferred = 0; /* * move the unused scan count back into the shrinker in a - * manner that handles concurrent updates. If we exhausted the - * scan, there is no need to do an update. 
+ * manner that handles concurrent updates. */ - if (next_deferred > 0) - new_nr = atomic_long_add_return(next_deferred, - &shrinker->nr_deferred[nid]); - else - new_nr = atomic_long_read(&shrinker->nr_deferred[nid]); + new_nr = add_nr_deferred(next_deferred, shrinker, shrinkctl); trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan); return freed; From patchwork Wed Feb 17 00:13:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yang Shi X-Patchwork-Id: 12090789 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.5 required=3.0 tests=BAYES_00, DKIM_ADSP_CUSTOM_MED,DKIM_INVALID,DKIM_SIGNED,FREEMAIL_FORGED_FROMDOMAIN, FREEMAIL_FROM,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id ABC3EC433E6 for ; Wed, 17 Feb 2021 00:13:55 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 60F0464EB1 for ; Wed, 17 Feb 2021 00:13:55 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 60F0464EB1 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id BA9808D0013; Tue, 16 Feb 2021 19:13:54 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id B33508D000D; Tue, 16 Feb 2021 19:13:54 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 961CB8D0013; Tue, 16 Feb 2021 19:13:54 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0047.hostedemail.com [216.40.44.47]) by kanga.kvack.org (Postfix) with ESMTP id 7A8638D000D for ; Tue, 16 Feb 2021 19:13:54 -0500 (EST) Received: from smtpin19.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 4959945B3 for ; Wed, 17 Feb 2021 00:13:54 +0000 (UTC) X-FDA: 77825836788.19.ink69_410c4cb27648 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin19.hostedemail.com (Postfix) with ESMTP id 3026B1ACC27 for ; Wed, 17 Feb 2021 00:13:54 +0000 (UTC) X-HE-Tag: ink69_410c4cb27648 X-Filterd-Recvd-Size: 5786 Received: from mail-pj1-f41.google.com (mail-pj1-f41.google.com [209.85.216.41]) by imf22.hostedemail.com (Postfix) with ESMTP for ; Wed, 17 Feb 2021 00:13:53 +0000 (UTC) Received: by mail-pj1-f41.google.com with SMTP id cl8so366450pjb.0 for ; Tue, 16 Feb 2021 16:13:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=qF6ksg7l8pC6izRWMIXhdTqgtS4B/O7ETuSZILY8w1s=; b=efLeE7WoLcttQISB/mbXgF8wP3e76TpoE8gJQSpz8Doiv4ijcze5SDWuVLpPilWugW 4ase4ksfGx3DQ1vQ9YPS8cvJgTc6yrYkXYYL16cV5wUUsZMCRKE/0CVqrMEUDcpFG7cd 98ppxJELp7IKlxIyYmzLE5sHSXYOslrGVPOJE8ZnB87f5l1xOwj/PBABqjJKvDrEEw0W 9A6cMzrD8TEYAQG0O6lKGFyhv14a3xGy/upbPa7cmjFMZMoYe6VkfETTgNFzEsRK7Lh4 PHM2MHODmm3VT4wd+ECwKb9Is2ipviXZPb4J6RTTOURjIepkEuKENtWyJrrwEoxbi+GI q8og== X-Google-DKIM-Signature: v=1; 
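The two new helpers reduce to a dispatch: use the per-memcg counter when the shrinker is memcg aware and the reclaim is charged to a memcg, otherwise fall back to the per-shrinker counter, with the same xchg/add-and-return semantics either way. A compact C11 sketch of that dispatch is below; single counters replace the per-node, per-shrinker-id arrays, and the helper names are made up.

#include <stdatomic.h>
#include <stdio.h>

#define SHRINKER_MEMCG_AWARE    (1 << 2)

struct memcg    { atomic_long nr_deferred; };
struct shrinker { unsigned int flags; atomic_long nr_deferred; };

/* Like xchg_nr_deferred(): drain whichever counter applies to this reclaim. */
static long take_deferred(struct shrinker *s, struct memcg *memcg)
{
        if (memcg && (s->flags & SHRINKER_MEMCG_AWARE))
                return atomic_exchange(&memcg->nr_deferred, 0);
        return atomic_exchange(&s->nr_deferred, 0);
}

/* Like add_nr_deferred(): park leftover work on the same counter. */
static long put_deferred(long nr, struct shrinker *s, struct memcg *memcg)
{
        if (memcg && (s->flags & SHRINKER_MEMCG_AWARE))
                return atomic_fetch_add(&memcg->nr_deferred, nr) + nr;
        return atomic_fetch_add(&s->nr_deferred, nr) + nr;
}

int main(void)
{
        struct shrinker s = { .flags = SHRINKER_MEMCG_AWARE };
        struct memcg m = { 0 };

        put_deferred(50, &s, &m);       /* memcg-charged reclaim defers 50 */
        put_deferred(20, &s, NULL);     /* global path defers 20 on the shrinker */

        printf("taken from memcg: %ld\n", take_deferred(&s, &m));       /* 50 */
        printf("left on shrinker: %ld\n", atomic_load(&s.nr_deferred)); /* 20 */
        return 0;
}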
a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=qF6ksg7l8pC6izRWMIXhdTqgtS4B/O7ETuSZILY8w1s=; b=X4/A3DMlJkeoxH9/xbBWPRUzuZZLyGGcwfHdvRqld8F5Gqx++vg0LCTdYr1V+w9IEM 7BcFF8RaG/GUcUbhAtBb3NrG5Y7Hh6ds0beUVTW0BXpmNxb7yeDJNrTJYphRZkWu0GqR APrI5m904ANNhDjAmEhkIwJHMxN1U55e1z7edou67C9G1y/Z7A4huZaZUybfOLHRaCXM CPwksX1EKrYCaNoIQ6cZNyplJcvtidUninrTTFlYHZZL6HwvWK/iNpVtbwI8jzl7nIhb hs9xFy1xuH2TWCZbPHq1ASVk7JHP9sVwY3Vn63jBlCXcFtJKOe7x64Jx/XZRXZXsGmlr dKVw== X-Gm-Message-State: AOAM531p8ZV6KwKpHYr2hmEzO6A3WCeTvlIk31Xs6C3Aqk9Up4IDqTTN 71hCfkSKXRN9zPYafr1EKj8= X-Google-Smtp-Source: ABdhPJxLa2Ndx3ZiLcTf5jq9MtNEjZleRMgJ3G3T68XGHQVRcwZQievbuP+0T/xY4M5LCfHxhgP3gg== X-Received: by 2002:a17:90a:7108:: with SMTP id h8mr6288180pjk.98.1613520832960; Tue, 16 Feb 2021 16:13:52 -0800 (PST) Received: from localhost.localdomain (c-73-93-239-127.hsd1.ca.comcast.net. [73.93.239.127]) by smtp.gmail.com with ESMTPSA id y12sm99220pjc.56.2021.02.16.16.13.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 16 Feb 2021 16:13:52 -0800 (PST) From: Yang Shi To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [v8 PATCH 11/13] mm: vmscan: don't need allocate shrinker->nr_deferred for memcg aware shrinkers Date: Tue, 16 Feb 2021 16:13:20 -0800 Message-Id: <20210217001322.2226796-12-shy828301@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com> References: <20210217001322.2226796-1-shy828301@gmail.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Now nr_deferred is available on per memcg level for memcg aware shrinkers, so don't need allocate shrinker->nr_deferred for such shrinkers anymore. The prealloc_memcg_shrinker() would return -ENOSYS if !CONFIG_MEMCG or memcg is disabled by kernel command line, then shrinker's SHRINKER_MEMCG_AWARE flag would be cleared. This makes the implementation of this patch simpler. 
Acked-by: Vlastimil Babka Reviewed-by: Kirill Tkhai Acked-by: Roman Gushchin Signed-off-by: Yang Shi Reviewed-by: Shakeel Butt --- mm/vmscan.c | 31 ++++++++++++++++--------------- 1 file changed, 16 insertions(+), 15 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 57cbc6bc8a49..d8800e4da67d 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -344,6 +344,9 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker) { int id, ret = -ENOMEM; + if (mem_cgroup_disabled()) + return -ENOSYS; + down_write(&shrinker_rwsem); /* This may call shrinker, so it must use down_read_trylock() */ id = idr_alloc(&shrinker_idr, shrinker, 0, 0, GFP_KERNEL); @@ -423,7 +426,7 @@ static bool writeback_throttling_sane(struct scan_control *sc) #else static int prealloc_memcg_shrinker(struct shrinker *shrinker) { - return 0; + return -ENOSYS; } static void unregister_memcg_shrinker(struct shrinker *shrinker) @@ -534,8 +537,18 @@ unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone */ int prealloc_shrinker(struct shrinker *shrinker) { - unsigned int size = sizeof(*shrinker->nr_deferred); + unsigned int size; + int err; + + if (shrinker->flags & SHRINKER_MEMCG_AWARE) { + err = prealloc_memcg_shrinker(shrinker); + if (err != -ENOSYS) + return err; + shrinker->flags &= ~SHRINKER_MEMCG_AWARE; + } + + size = sizeof(*shrinker->nr_deferred); if (shrinker->flags & SHRINKER_NUMA_AWARE) size *= nr_node_ids; @@ -543,28 +556,16 @@ int prealloc_shrinker(struct shrinker *shrinker) if (!shrinker->nr_deferred) return -ENOMEM; - if (shrinker->flags & SHRINKER_MEMCG_AWARE) { - if (prealloc_memcg_shrinker(shrinker)) - goto free_deferred; - } - return 0; - -free_deferred: - kfree(shrinker->nr_deferred); - shrinker->nr_deferred = NULL; - return -ENOMEM; } void free_prealloced_shrinker(struct shrinker *shrinker) { - if (!shrinker->nr_deferred) - return; - if (shrinker->flags & SHRINKER_MEMCG_AWARE) { down_write(&shrinker_rwsem); unregister_memcg_shrinker(shrinker); up_write(&shrinker_rwsem); + return; } kfree(shrinker->nr_deferred); From patchwork Wed Feb 17 00:13:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yang Shi X-Patchwork-Id: 12090791 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.5 required=3.0 tests=BAYES_00, DKIM_ADSP_CUSTOM_MED,DKIM_INVALID,DKIM_SIGNED,FREEMAIL_FORGED_FROMDOMAIN, FREEMAIL_FROM,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A2609C433E0 for ; Wed, 17 Feb 2021 00:13:57 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 59ED164EB1 for ; Wed, 17 Feb 2021 00:13:57 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 59ED164EB1 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id F06378D0014; Tue, 16 Feb 2021 19:13:56 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id E93AC8D000D; Tue, 16 Feb 2021 19:13:56 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by 
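The reworked prealloc_shrinker() now reads as: try the memcg path first, treat -ENOSYS as "per-memcg deferral not available", clear SHRINKER_MEMCG_AWARE and fall back to the plain nr_deferred allocation. A userspace sketch of that fallback pattern follows; memcg_enabled, the single-element calloc and the omission of per-node sizing are simplifications for illustration, not the kernel behaviour.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define SHRINKER_MEMCG_AWARE    (1 << 2)

struct shrinker { unsigned int flags; long *nr_deferred; };

static int memcg_enabled;       /* stand-in for !mem_cgroup_disabled() */

/* -ENOSYS means "no memcg support here"; anything else is a real failure. */
static int prealloc_memcg_shrinker(struct shrinker *s)
{
        (void)s;                /* the kernel reserves an idr slot here */
        if (!memcg_enabled)
                return -ENOSYS;
        return 0;
}

static int prealloc_shrinker(struct shrinker *s)
{
        if (s->flags & SHRINKER_MEMCG_AWARE) {
                int err = prealloc_memcg_shrinker(s);

                if (err != -ENOSYS)
                        return err;     /* success or hard error: done either way */
                s->flags &= ~SHRINKER_MEMCG_AWARE;      /* degrade to global */
        }
        s->nr_deferred = calloc(1, sizeof(*s->nr_deferred));
        return s->nr_deferred ? 0 : -ENOMEM;
}

int main(void)
{
        struct shrinker s = { .flags = SHRINKER_MEMCG_AWARE };

        printf("prealloc=%d memcg_aware=%d\n", prealloc_shrinker(&s),
               !!(s.flags & SHRINKER_MEMCG_AWARE));
        free(s.nr_deferred);
        return 0;
}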
[73.93.239.127]) by smtp.gmail.com with ESMTPSA id y12sm99220pjc.56.2021.02.16.16.13.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 16 Feb 2021 16:13:54 -0800 (PST) From: Yang Shi To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [v8 PATCH 12/13] mm: memcontrol: reparent nr_deferred when memcg offline Date: Tue, 16 Feb 2021 16:13:21 -0800 Message-Id: <20210217001322.2226796-13-shy828301@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com> References: <20210217001322.2226796-1-shy828301@gmail.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Now shrinker's nr_deferred is per memcg for memcg aware shrinkers, add to parent's corresponding nr_deferred when memcg offline. Acked-by: Vlastimil Babka Acked-by: Kirill Tkhai Acked-by: Roman Gushchin Signed-off-by: Yang Shi Reviewed-by: Shakeel Butt --- include/linux/memcontrol.h | 1 + mm/memcontrol.c | 1 + mm/vmscan.c | 24 ++++++++++++++++++++++++ 3 files changed, 26 insertions(+) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index c457fc7bc631..e1c4b93889ad 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1585,6 +1585,7 @@ static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg) int alloc_shrinker_info(struct mem_cgroup *memcg); void free_shrinker_info(struct mem_cgroup *memcg); void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id); +void reparent_shrinker_deferred(struct mem_cgroup *memcg); #else #define mem_cgroup_sockets_enabled 0 static inline void mem_cgroup_sk_alloc(struct sock *sk) { }; diff --git a/mm/memcontrol.c b/mm/memcontrol.c index f64ad0d044d9..21f36b73f36a 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -5282,6 +5282,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css) page_counter_set_low(&memcg->memory, 0); memcg_offline_kmem(memcg); + reparent_shrinker_deferred(memcg); wb_memcg_offline(memcg); drain_all_stock(memcg); diff --git a/mm/vmscan.c b/mm/vmscan.c index d8800e4da67d..4247a3568585 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -395,6 +395,30 @@ static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker, return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]); } +void reparent_shrinker_deferred(struct mem_cgroup *memcg) +{ + int i, nid; + long nr; + struct mem_cgroup *parent; + struct shrinker_info *child_info, *parent_info; + + parent = parent_mem_cgroup(memcg); + if (!parent) + parent = root_mem_cgroup; + + /* Prevent from concurrent shrinker_info expand */ + down_read(&shrinker_rwsem); + for_each_node(nid) { + child_info = shrinker_info_protected(memcg, nid); + parent_info = shrinker_info_protected(parent, nid); + for (i = 0; i < shrinker_nr_max; i++) { + nr = atomic_long_read(&child_info->nr_deferred[i]); + atomic_long_add(nr, &parent_info->nr_deferred[i]); + } + } + up_read(&shrinker_rwsem); +} + static bool cgroup_reclaim(struct scan_control *sc) { return sc->target_mem_cgroup; From patchwork Wed Feb 17 00:13:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yang Shi 
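Reparenting here is just folding the child's outstanding per-shrinker deferred counts into the parent's before the child memcg goes away, so deferred work is carried forward rather than dropped at offline. The sketch below shows the idea for a single node and a fixed number of shrinker slots; the kernel version walks every node, holds shrinker_rwsem against concurrent expansion and uses shrinker_info_protected(), none of which is modelled.

#include <stdatomic.h>
#include <stdio.h>

#define NR_SHRINKERS    4

struct memcg { atomic_long nr_deferred[NR_SHRINKERS]; };

/* Fold the child's outstanding deferred work into the parent, slot by slot. */
static void reparent_deferred(struct memcg *child, struct memcg *parent)
{
        for (int i = 0; i < NR_SHRINKERS; i++) {
                long nr = atomic_load(&child->nr_deferred[i]);

                atomic_fetch_add(&parent->nr_deferred[i], nr);
        }
}

int main(void)
{
        struct memcg parent = { 0 }, child = { 0 };

        atomic_store(&child.nr_deferred[2], 300);
        atomic_store(&parent.nr_deferred[2], 50);
        reparent_deferred(&child, &parent);
        printf("parent[2]=%ld\n", atomic_load(&parent.nr_deferred[2])); /* 350 */
        return 0;
}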
X-Received: by 2002:aa7:888b:0:b029:1ec:df4a:4da2 with SMTP id z11-20020aa7888b0000b02901ecdf4a4da2mr14656pfe.66.1613520837746; Tue, 16 Feb 2021 16:13:57 -0800 (PST) Received: from localhost.localdomain (c-73-93-239-127.hsd1.ca.comcast.net. [73.93.239.127]) by smtp.gmail.com with ESMTPSA id y12sm99220pjc.56.2021.02.16.16.13.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 16 Feb 2021 16:13:56 -0800 (PST) From: Yang Shi To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [v8 PATCH 13/13] mm: vmscan: shrink deferred objects proportional to priority Date: Tue, 16 Feb 2021 16:13:22 -0800 Message-Id: <20210217001322.2226796-14-shy828301@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210217001322.2226796-1-shy828301@gmail.com> References: <20210217001322.2226796-1-shy828301@gmail.com> MIME-Version: 1.0 X-Stat-Signature: wrdi8qpxbfizkdemyzc1cahhn84hu9yt X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: 80387407F8ED Received-SPF: none (gmail.com>: No applicable sender policy available) receiver=imf02; identity=mailfrom; envelope-from=""; helo=mail-pf1-f178.google.com; client-ip=209.85.210.178 X-HE-DKIM-Result: pass/pass X-HE-Tag: 1613520831-369691 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The number of deferred objects might get windup to an absurd number, and it results in clamp of slab objects. It is undesirable for sustaining workingset. So shrink deferred objects proportional to priority and cap nr_deferred to twice of cache items. The idea is borrowed from Dave Chinner's patch: https://lore.kernel.org/linux-xfs/20191031234618.15403-13-david@fromorbit.com/ Tested with kernel build and vfs metadata heavy workload in our production environment, no regression is spotted so far. Signed-off-by: Yang Shi --- mm/vmscan.c | 46 +++++++++++----------------------------------- 1 file changed, 11 insertions(+), 35 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 4247a3568585..b3bdc3ba8edc 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -661,7 +661,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, */ nr = xchg_nr_deferred(shrinker, shrinkctl); - total_scan = nr; if (shrinker->seeks) { delta = freeable >> priority; delta *= 4; @@ -675,37 +674,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, delta = freeable / 2; } + total_scan = nr >> priority; total_scan += delta; - if (total_scan < 0) { - pr_err("shrink_slab: %pS negative objects to delete nr=%ld\n", - shrinker->scan_objects, total_scan); - total_scan = freeable; - next_deferred = nr; - } else - next_deferred = total_scan; - - /* - * We need to avoid excessive windup on filesystem shrinkers - * due to large numbers of GFP_NOFS allocations causing the - * shrinkers to return -1 all the time. This results in a large - * nr being built up so when a shrink that can do some work - * comes along it empties the entire cache due to nr >>> - * freeable. This is bad for sustaining a working set in - * memory. - * - * Hence only allow the shrinker to scan the entire cache when - * a large delta change is calculated directly. 
- */ - if (delta < freeable / 4) - total_scan = min(total_scan, freeable / 2); - - /* - * Avoid risking looping forever due to too large nr value: - * never try to free more than twice the estimate number of - * freeable entries. - */ - if (total_scan > freeable * 2) - total_scan = freeable * 2; + total_scan = min(total_scan, (2 * freeable)); trace_mm_shrink_slab_start(shrinker, shrinkctl, nr, freeable, delta, total_scan, priority); @@ -744,10 +715,15 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, cond_resched(); } - if (next_deferred >= scanned) - next_deferred -= scanned; - else - next_deferred = 0; + /* + * The deferred work is increased by any new work (delta) that wasn't + * done, decreased by old deferred work that was done now. + * + * And it is capped to two times of the freeable items. + */ + next_deferred = max_t(long, (nr + delta - scanned), 0); + next_deferred = min(next_deferred, (2 * freeable)); + /* * move the unused scan count back into the shrinker in a * manner that handles concurrent updates.
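With the windup heuristics gone, the arithmetic is small enough to check by hand: total_scan becomes (nr >> priority) + delta capped at twice freeable, and next_deferred becomes max(nr + delta - scanned, 0) with the same cap. The worked example below plugs in made-up numbers for freeable, nr and priority to show how a huge deferred backlog now drains gradually instead of clamping the cache in a single pass; the delta computation assumes seeks == DEFAULT_SEEKS.

#include <stdio.h>

static long min_l(long a, long b) { return a < b ? a : b; }
static long max_l(long a, long b) { return a > b ? a : b; }

int main(void)
{
        long freeable = 10000;  /* objects the shrinker reports as freeable */
        long nr = 80000;        /* previously deferred work, e.g. from GFP_NOFS reclaim */
        int priority = 4;       /* smaller value means more reclaim pressure */

        /* delta for seeks == DEFAULT_SEEKS (2): (freeable >> priority) * 4 / seeks */
        long delta = (freeable >> priority) * 4 / 2;

        /* Deferred work is scanned in proportion to priority, capped at 2 * freeable. */
        long total_scan = min_l((nr >> priority) + delta, 2 * freeable);

        long scanned = total_scan;      /* assume the scan loop met its target */
        long next_deferred = max_l(nr + delta - scanned, 0);

        next_deferred = min_l(next_deferred, 2 * freeable);
        printf("delta=%ld total_scan=%ld next_deferred=%ld\n",
               delta, total_scan, next_deferred);
        /* prints delta=1250 total_scan=6250 next_deferred=20000: the 80000-object
         * backlog no longer forces one huge scan, and the carry-over is capped at
         * twice the cache size. */
        return 0;
}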