From patchwork Thu Sep 16 13:47:35 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12499117
From: Muchun Song <songmuchun@bytedance.com>
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, akpm@linux-foundation.org, shakeelb@google.com, vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, fam.zheng@bytedance.com, bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org, smuchun@gmail.com, zhengqi.arch@bytedance.com, Muchun Song
Subject: [PATCH v2 00/13] Use obj_cgroup APIs to charge the LRU pages
Date: Thu, 16 Sep 2021 21:47:35 +0800
Message-Id: <20210916134748.67712-1-songmuchun@bytedance.com>

This version is rebased onto Linux 5.15-rc1, as Shakeel asked me whether I could do that. I have also reworked some code in this version, as suggested by Roman. I have kept the Acked-by tags from Roman, because this version is not based on the folio-related changes; if Roman wants me to drop them, please let me know. Thanks.

Since the following patchsets were applied, all kernel memory is charged with the new obj_cgroup APIs:
  [v17,00/19] The new cgroup slab memory controller[1]
  [v5,0/7] Use obj_cgroup APIs to charge kmem pages[2]

But user memory allocations (LRU pages) can pin a memcg for a long time. This happens at a larger scale and causes recurring problems in the real world: page cache does not get reclaimed for a long time, or is used by the second, third, fourth, ... instance of the same job, which is restarted into a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory, and make page reclaim very inefficient.

We can fix this by converting LRU pages and most other raw memcg pins to the objcg direction, so that LRU pages no longer pin the memcgs. This patchset makes the LRU pages drop their reference to the memory cgroup by using the obj_cgroup APIs. With it applied, the number of dying cgroups does not increase when running the following test script:

```bash
#!/bin/bash

cat /proc/cgroups | grep memory

cd /sys/fs/cgroup/memory

for i in {1..500}
do
	mkdir test
	echo $$ > test/cgroup.procs
	sleep 60 &
	echo $$ > cgroup.procs
	echo `cat test/cgroup.procs` > cgroup.procs
	rmdir test
done

cat /proc/cgroups | grep memory
```

Thanks.

[1] https://lore.kernel.org/linux-mm/20200623015846.1141975-1-guro@fb.com/
[2] https://lore.kernel.org/linux-mm/20210319163821.20704-1-songmuchun@bytedance.com/

Changelogs in v2:
1. Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and drop the dependencies on CONFIG_MEMCG_KMEM (suggested by Roman, thanks).
2. Rebase onto Linux 5.15-rc1.
3. Add a new patch to clean up mem_cgroup_kmem_disabled().

Changelogs in v1:
1. Drop RFC tag.
2. Rebase onto linux-next 20210811.

Changelogs in RFC v4:
1. Collect Acked-by tags from Roman.
2. Rebase onto linux-next 20210525.
3. Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
4. Change the patch 1 title to "prepare objcg API for non-kmem usage".
5. Convert reparent_ops_head to an array in patch 8.
Thanks for Roman's review and suggestions.

Changelogs in RFC v3:
1. Drop the code cleanup and simplification patches. Gather those patches into a separate series[1].
2. Rework patch #1 as suggested by Johannes.

Changelogs in RFC v2:
1. Collect Acked-by tags from Johannes. Thanks.
2. Rework lruvec_holds_page_lru_lock() as suggested by Johannes. Thanks.
3. Fix move_pages_to_lru().

Muchun Song (13):
  mm: move mem_cgroup_kmem_disabled() to memcontrol.h
  mm: memcontrol: prepare objcg API for non-kmem usage
  mm: memcontrol: introduce compact_lock_page_irqsave
  mm: memcontrol: make lruvec lock safe when the LRU pages reparented
  mm: vmscan: rework move_pages_to_lru()
  mm: thp: introduce split_queue_lock/unlock{_irqsave}()
  mm: thp: make split queue lock safe when LRU pages reparented
  mm: memcontrol: make all the callers of page_memcg() safe
  mm: memcontrol: introduce memcg_reparent_ops
  mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
  mm: memcontrol: rename {un}lock_page_memcg() to {un}lock_page_objcg()
  mm: lru: add VM_BUG_ON_PAGE to lru maintenance function
  mm: lru: use lruvec lock to serialize memcg changes

 Documentation/admin-guide/cgroup-v1/memory.rst |   2 +-
 fs/buffer.c                                    |  11 +-
 fs/fs-writeback.c                              |  23 +-
 include/linux/memcontrol.h                     | 184 ++++----
 include/linux/mm_inline.h                      |   6 +
 mm/compaction.c                                |  36 +-
 mm/filemap.c                                   |   2 +-
 mm/huge_memory.c                               | 159 +++++--
 mm/internal.h                                  |   5 -
 mm/memcontrol.c                                | 563 ++++++++++++++++++-------
 mm/migrate.c                                   |   4 +
 mm/page-writeback.c                            |  26 +-
 mm/page_io.c                                   |   5 +-
 mm/rmap.c                                      |  14 +-
 mm/slab_common.c                               |   2 +-
 mm/swap.c                                      |  46 +-
 mm/vmscan.c                                    |  56 ++-
 17 files changed, 775 insertions(+), 369 deletions(-)