| Message ID | 20210428094949.43579-2-songmuchun@bytedance.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Shrink the list lru size on memory cgroup removal |
diff --git a/mm/list_lru.c b/mm/list_lru.c
index cd58790d0fb3..4962d48d4410 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -176,13 +176,16 @@ unsigned long list_lru_count_one(struct list_lru *lru,
 {
 	struct list_lru_node *nlru = &lru->node[nid];
 	struct list_lru_one *l;
-	unsigned long count;
+	long count;
 
 	rcu_read_lock();
 	l = list_lru_from_memcg_idx(nlru, memcg_cache_id(memcg));
 	count = READ_ONCE(l->nr_items);
 	rcu_read_unlock();
 
+	if (unlikely(count < 0))
+		count = 0;
+
 	return count;
 }
 EXPORT_SYMBOL_GPL(list_lru_count_one);
Since commit 2788cf0c401c ("memcg: reparent list_lrus and free kmemcg_id on
css offline"), ->nr_items can be negative during memory cgroup reparenting.
In this case, list_lru_count_one() can return an unusually large value. In
order not to surprise the user, return zero when ->nr_items is negative.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/list_lru.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
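To see why the clamp matters, here is a minimal userspace sketch (not kernel code) of the failure mode: a transiently negative item count, when read into an unsigned long as the old code did, comes back as an enormous value, while the signed long plus clamp used by the patch reports zero. The variable name fake_nr_items is hypothetical, used only for illustration.

```c
#include <stdio.h>

int main(void)
{
	/* Pretend nr_items briefly went negative during memcg reparenting. */
	long fake_nr_items = -3;

	/* Old behaviour: the counter was held in an unsigned long. */
	unsigned long old_count = (unsigned long)fake_nr_items;

	/* Patched behaviour: keep it signed and clamp negatives to zero. */
	long new_count = fake_nr_items;
	if (new_count < 0)
		new_count = 0;

	printf("unsigned read: %lu\n", old_count);  /* 18446744073709551613 on 64-bit */
	printf("clamped read:  %ld\n", new_count);  /* 0 */
	return 0;
}
```

The point of the patch is the second branch: callers of list_lru_count_one() see zero rather than a wrapped-around count while reparenting is in flight.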