From patchwork Mon Nov 30 18:45:14 2020
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 11941143
From: Yang Shi <shy828301@gmail.com>
To: vdavydov.dev@gmail.com, ktkhai@virtuozzo.com, guro@fb.com, shakeelb@google.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH] mm: list_lru: hold nlru lock to avoid reading transient negative nr_items
Date: Mon, 30 Nov 2020 10:45:14 -0800
Message-Id: <20201130184514.551950-1-shy828301@gmail.com>
X-Mailer: git-send-email 2.26.2

When investigating a slab cache bloat problem, a significant amount of
negative dentry cache was seen, but confusingly it was neither shrunk
by the reclaimer (the host was under very tight memory pressure) nor by
dropping caches.  The vmcore shows over 14M negative dentry objects on
the lru, but tracing shows they were not scanned at all.  Further
investigation shows the memcg's vfs shrinker_map bit is not set, so
both the reclaimer and drop_caches simply skip calling the vfs
shrinker.  We had to reboot the hosts to get the memory back.

I didn't manage to come up with a reproducer in a test environment, and
the problem can't be reproduced after rebooting.  But code inspection
suggests there is a race between clearing the shrinker map bit and
reparenting.  The hypothesis is elaborated below.

The memcg hierarchy in our production environment looks like:

          root
         /    \
    system    user

The main workloads run under user slice's children, and memcgs there
are created and removed frequently.  So reparenting happens very often
under user slice, but no task runs under user slice directly.

With the frequent reparenting and tight memory pressure, the below
hypothetical race condition may happen:

       CPU A                  CPU B                  CPU C
reparent
    dst->nr_items == 0
    shrinker:
        total_objects == 0
                        add src->nr_items to dst
                        set_bit
                        return SHRINK_EMPTY
                        clear_bit
                                               list_lru_del()
reparent again
    dst->nr_items may go negative
    due to the concurrent list_lru_del() on CPU C

    The second run of the shrinker:
        read nr_items without any synchronization,
        so it may see the intermediate negative
        nr_items, then total_objects may
        coincidentally return 0 and keep the bit
        cleared

    dst->nr_items != 0
    skip set_bit
    add src->nr_items to dst

After this point dst->nr_items may never go back to zero, so
reparenting will never set the shrinker_map bit again.  And since no
task runs under user slice directly, no new object will be added to its
lru to set the shrinker map bit either.  The bit stays cleared forever.
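To make the window concrete, below is a minimal userspace sketch (plain
C11 with pthreads, not kernel code; every name in it is illustrative)
of the core problem: a writer transiently drives a counter negative
while holding a lock, the way an in-flight list_lru_del() can dip the
parent's nr_items below zero before reparenting adds the child's items
back, while an unsynchronized reader, like the unpatched
list_lru_count_one(), can observe the intermediate negative value:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t nlru_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_long nr_items;	/* stands in for list_lru_one::nr_items */
static atomic_int stop;

/*
 * Mimics an in-flight list_lru_del() hitting the parent's counter
 * followed by reparenting adding the child's items back: nr_items dips
 * to -1 in between, but only while nlru_lock is held.
 */
static void *writer(void *arg)
{
	(void)arg;
	while (!atomic_load(&stop)) {
		pthread_mutex_lock(&nlru_lock);
		atomic_fetch_sub_explicit(&nr_items, 1, memory_order_relaxed);
		atomic_fetch_add_explicit(&nr_items, 1, memory_order_relaxed);
		pthread_mutex_unlock(&nlru_lock);
	}
	return NULL;
}

/*
 * Mimics the unpatched list_lru_count_one(): reads without taking the
 * lock, so it can observe the transient negative value.
 */
static void *unlocked_reader(void *arg)
{
	(void)arg;
	for (long i = 0; i < 100000000L; i++) {
		long v = atomic_load_explicit(&nr_items, memory_order_relaxed);
		if (v < 0) {
			printf("unlocked read saw nr_items == %ld\n", v);
			break;
		}
	}
	atomic_store(&stop, 1);
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, unlocked_reader, NULL);
	pthread_join(r, NULL);
	pthread_join(w, NULL);

	/* A locked read can never land between the dec and the re-add. */
	pthread_mutex_lock(&nlru_lock);
	printf("locked read sees nr_items == %ld\n",
	       atomic_load_explicit(&nr_items, memory_order_relaxed));
	pthread_mutex_unlock(&nlru_lock);
	return 0;
}

A reader that takes the lock, as this patch makes list_lru_count_one()
do, cannot run between the decrement and the re-add, so it never
observes the transient negative value.  (The race window is narrow; the
demo may need a few runs to trigger.)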
How does list_lru_del() race with reparenting?  Reparenting replaces
the children's kmemcg_id with the parent's without taking nlru->lock,
so list_lru_del() may see the parent's kmemcg_id while actually
deleting an item from the child's lru, decrementing the parent's
nr_items.  The parent's nr_items may therefore go negative, as commit
2788cf0c401c268b4819c5407493a8769b7007aa ("memcg: reparent list_lrus
and free kmemcg_id on css offline") describes.

Can we move the kmemcg_id replacement to after reparenting?  No.  The
race with list_lru_del() may then leave src->nr_items negative, and it
would never be fixed up, so the shrinker would never return
SHRINK_EMPTY and the shrinker map bit would stay set forever.  The
shrinker would keep being called for nothing.

Can we synchronize list_lru_del() and reparenting?  Yes, it could be
done, but it seems we would need to introduce a new lock or use
nlru->lock, and moving the kmemcg_id replacement under nlru->lock
sounds complicated.  Besides, list_lru_del() is called quite often on
some hot paths, e.g. dentry kill, which this would exacerbate.

So it sounds acceptable to synchronize the read of nr_items instead, to
avoid seeing a transient negative value.  It is simple, and
list_lru_count_one() is typically only called by shrinkers when
counting the freeable objects.

The patch was tested with some shrinker-intensive workloads; no
noticeable regression was spotted.

Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/list_lru.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/mm/list_lru.c b/mm/list_lru.c
index 5aa6e44bc2ae..5c128a7710ff 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -178,10 +178,17 @@ unsigned long list_lru_count_one(struct list_lru *lru,
 	struct list_lru_one *l;
 	unsigned long count;
 
-	rcu_read_lock();
+	/*
+	 * Since list_lru_{add,del} may be called under an IRQ-safe lock,
+	 * we have to use IRQ-safe primitives here to avoid deadlock.
+	 *
+	 * Hold the lock to prevent from seeing transient negative
+	 * nr_items value.
+	 */
+	spin_lock_irq(&nlru->lock);
 	l = list_lru_from_memcg_idx(nlru, memcg_cache_id(memcg));
 	count = READ_ONCE(l->nr_items);
-	rcu_read_unlock();
+	spin_unlock_irq(&nlru->lock);
 
 	return count;
 }
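For reference, with this patch applied list_lru_count_one() would read
roughly as below.  The two body lines above the hunk's context (the
nid/memcg parameter line and the nlru initialization) are not shown in
the diff and are reconstructed here from the surrounding source, so
treat them as an approximation:

unsigned long list_lru_count_one(struct list_lru *lru,
				 int nid, struct mem_cgroup *memcg)
{
	struct list_lru_node *nlru = &lru->node[nid];
	struct list_lru_one *l;
	unsigned long count;

	/*
	 * Since list_lru_{add,del} may be called under an IRQ-safe lock,
	 * we have to use IRQ-safe primitives here to avoid deadlock.
	 *
	 * Hold the lock to prevent from seeing transient negative
	 * nr_items value.
	 */
	spin_lock_irq(&nlru->lock);
	l = list_lru_from_memcg_idx(nlru, memcg_cache_id(memcg));
	count = READ_ONCE(l->nr_items);
	spin_unlock_irq(&nlru->lock);

	return count;
}

Using the non-saving spin_lock_irq() variant assumes the counting path
always runs with interrupts enabled, which matches how reparenting
itself takes nlru->lock.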