From patchwork Mon Oct 25 12:43:53 2021
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 12581761
From: Muchun Song <songmuchun@bytedance.com>
To: akpm@linux-foundation.org, mhocko@kernel.org, shakeelb@google.com,
    willy@infradead.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH] mm: list_lru: only add memcg-aware lrus to the global lru list
Date: Mon, 25 Oct 2021 20:43:53 +0800
Message-Id: <20211025124353.55781-1-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)

The non-memcg-aware lru is always skipped when traversing the global
lru list, which is inefficient. Add only memcg-aware lrus to the global
lru list so that traversal no longer has to filter out the others.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/list_lru.c | 35 ++++++++++++++++-------------------
 1 file changed, 16 insertions(+), 19 deletions(-)

diff --git a/mm/list_lru.c b/mm/list_lru.c
index 7572f8e70b86..0cd5e89ca063 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -15,18 +15,29 @@
 #include "slab.h"
 
 #ifdef CONFIG_MEMCG_KMEM
-static LIST_HEAD(list_lrus);
+static LIST_HEAD(memcg_list_lrus);
 static DEFINE_MUTEX(list_lrus_mutex);
 
+static inline bool list_lru_memcg_aware(struct list_lru *lru)
+{
+	return lru->memcg_aware;
+}
+
 static void list_lru_register(struct list_lru *lru)
 {
+	if (!list_lru_memcg_aware(lru))
+		return;
+
 	mutex_lock(&list_lrus_mutex);
-	list_add(&lru->list, &list_lrus);
+	list_add(&lru->list, &memcg_list_lrus);
 	mutex_unlock(&list_lrus_mutex);
 }
 
 static void list_lru_unregister(struct list_lru *lru)
 {
+	if (!list_lru_memcg_aware(lru))
+		return;
+
 	mutex_lock(&list_lrus_mutex);
 	list_del(&lru->list);
 	mutex_unlock(&list_lrus_mutex);
@@ -37,11 +48,6 @@ static int lru_shrinker_id(struct list_lru *lru)
 	return lru->shrinker_id;
 }
 
-static inline bool list_lru_memcg_aware(struct list_lru *lru)
-{
-	return lru->memcg_aware;
-}
-
 static inline struct list_lru_one *
 list_lru_from_memcg_idx(struct list_lru_node *nlru, int idx)
 {
@@ -457,9 +463,6 @@ static int memcg_update_list_lru(struct list_lru *lru,
 {
 	int i;
 
-	if (!list_lru_memcg_aware(lru))
-		return 0;
-
 	for_each_node(i) {
 		if (memcg_update_list_lru_node(&lru->node[i],
 					       old_size, new_size))
@@ -482,9 +485,6 @@ static void memcg_cancel_update_list_lru(struct list_lru *lru,
 {
 	int i;
 
-	if (!list_lru_memcg_aware(lru))
-		return;
-
 	for_each_node(i)
 		memcg_cancel_update_list_lru_node(&lru->node[i],
 						  old_size, new_size);
@@ -497,7 +497,7 @@ int memcg_update_all_list_lrus(int new_size)
 	int old_size = memcg_nr_cache_ids;
 
 	mutex_lock(&list_lrus_mutex);
-	list_for_each_entry(lru, &list_lrus, list) {
+	list_for_each_entry(lru, &memcg_list_lrus, list) {
 		ret = memcg_update_list_lru(lru, old_size, new_size);
 		if (ret)
 			goto fail;
@@ -506,7 +506,7 @@ int memcg_update_all_list_lrus(int new_size)
 	mutex_unlock(&list_lrus_mutex);
 	return ret;
 fail:
-	list_for_each_entry_continue_reverse(lru, &list_lrus, list)
+	list_for_each_entry_continue_reverse(lru, &memcg_list_lrus, list)
 		memcg_cancel_update_list_lru(lru, old_size, new_size);
 	goto out;
 }
@@ -543,9 +543,6 @@ static void memcg_drain_list_lru(struct list_lru *lru,
 {
 	int i;
 
-	if (!list_lru_memcg_aware(lru))
-		return;
-
 	for_each_node(i)
 		memcg_drain_list_lru_node(lru, i, src_idx, dst_memcg);
 }
@@ -555,7 +552,7 @@ void memcg_drain_all_list_lrus(int src_idx, struct mem_cgroup *dst_memcg)
 	struct list_lru *lru;
 
 	mutex_lock(&list_lrus_mutex);
-	list_for_each_entry(lru, &list_lrus, list)
+	list_for_each_entry(lru, &memcg_list_lrus, list)
 		memcg_drain_list_lru(lru, src_idx, dst_memcg);
 	mutex_unlock(&list_lrus_mutex);
 }
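
Not part of the patch: below is a minimal userspace C sketch of the registration
pattern the patch moves to. It deliberately does not use the kernel's list_lru
API; the names (toy_lru, toy_lru_register, toy_walk_memcg_lrus) are invented for
illustration. The point is only to show that filtering at registration time lets
every walker (the analogues of memcg_update_all_list_lrus() and
memcg_drain_all_list_lrus()) drop its per-entry memcg-aware check.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Toy stand-ins for the kernel's struct list_head / struct list_lru. */
struct toy_node {
	struct toy_node *next;
};

struct toy_lru {
	const char *name;
	bool memcg_aware;
	struct toy_node node;	/* linkage on the global memcg list */
};

/* Global list that, after the patch, holds only memcg-aware lrus. */
static struct toy_node memcg_list = { NULL };

/* Register only memcg-aware lrus, mirroring the new list_lru_register(). */
static void toy_lru_register(struct toy_lru *lru)
{
	if (!lru->memcg_aware)
		return;

	lru->node.next = memcg_list.next;
	memcg_list.next = &lru->node;
}

/*
 * A walker in the spirit of memcg_update_all_list_lrus(): it no longer
 * needs an "is this lru memcg aware?" check per entry, because
 * non-memcg-aware lrus were never added to the list in the first place.
 */
static void toy_walk_memcg_lrus(void)
{
	struct toy_node *pos;

	for (pos = memcg_list.next; pos; pos = pos->next) {
		struct toy_lru *lru = (struct toy_lru *)
			((char *)pos - offsetof(struct toy_lru, node));
		printf("visiting memcg-aware lru: %s\n", lru->name);
	}
}

int main(void)
{
	struct toy_lru dentry_lru = { .name = "dentry", .memcg_aware = true };
	struct toy_lru plain_lru  = { .name = "plain",  .memcg_aware = false };

	toy_lru_register(&dentry_lru);
	toy_lru_register(&plain_lru);	/* silently skipped */

	toy_walk_memcg_lrus();		/* prints only "dentry" */
	return 0;
}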