From patchwork Mon Jan 27 17:34:25 2020
From: Roman Gushchin <guro@fb.com>
To: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@kernel.org>, Johannes Weiner <hannes@cmpxchg.org>,
    Shakeel Butt <shakeelb@google.com>, Vladimir Davydov <vdavydov.dev@gmail.com>,
    linux-kernel@vger.kernel.org, kernel-team@fb.com,
    Bharata B Rao <bharata@linux.ibm.com>, Yafang Shao <laoar.shao@gmail.com>,
    Roman Gushchin <guro@fb.com>
Subject: [PATCH v2 00/28] The new cgroup slab memory controller
Date: Mon, 27 Jan 2020 09:34:25 -0800
Message-ID: <20200127173453.2089565-1-guro@fb.com>

The existing cgroup slab memory controller is based on the idea of
replicating slab allocator internals for each memory cgroup. This
approach promises a low memory overhead (one pointer per page) and
doesn't add too much code to the hot allocation and release paths.
But it has a very serious flaw: it leads to low slab utilization.
Using a drgn* script I've estimated slab utilization on a number of
machines running different production workloads. In most cases it was
between 45% and 65%, and the best number I've seen was around 85%.
Turning kmem accounting off brings it into the high 90s and also brings
back 30-50% of the slab memory. It means that the real price of the
existing slab memory controller is significantly bigger than a pointer
per page.

The real reason why the existing design leads to low slab utilization
is simple: slab pages are used exclusively by one memory cgroup. If
there are only a few allocations of a certain size made by a cgroup,
or if some active objects (e.g. dentries) are left after the cgroup is
deleted, or if the cgroup contains a single-threaded application which
barely allocates any kernel objects, but does so every time on a new
CPU: in all these cases the resulting slab utilization is very low.
If kmem accounting is off, the kernel is able to use the free space
on slab pages for other allocations.

Arguably this wasn't an issue back in the days when the kmem controller
was introduced as an opt-in feature, which had to be turned on
individually for each memory cgroup. But now it's turned on by default
on both cgroup v1 and v2, and modern systemd-based systems tend to
create a large number of cgroups.

This patchset provides a new implementation of the slab memory
controller, which aims to reach a much better slab utilization by
sharing slab pages between multiple memory cgroups. Below is a short
description of the new design (more details can be found in the commit
messages).

Accounting is performed per-object instead of per-page. Slab-related
vmstat counters are converted to bytes. Charging is still performed on
a per-page basis, with rounding up and remembering of leftovers.
Memcg ownership data is stored in a per-slab-page vector: for each
slab page a vector of the corresponding size is allocated.
To keep slab memory reparenting working, an intermediate object is used
instead of saving a pointer to the memory cgroup directly. It's simply
a pointer to a memcg (which can be easily switched to the parent) with
a built-in reference counter. This scheme allows all allocated objects
to be reparented without walking over them and changing the memcg
pointer of each one.

Instead of creating an individual set of kmem_caches for each memory
cgroup, two global sets are used: the root set for non-accounted and
root-cgroup allocations, and a second set for all other allocations.
This simplifies the lifetime management of individual kmem_caches:
they are destroyed together with their root counterparts. It allows
a good amount of code to be removed and makes things generally simpler.

The patchset* has been tested on a number of different workloads in
our production. In all cases it saved a significant amount of memory,
measured from high hundreds of MBs to single GBs per host. On average,
the size of slab memory has been reduced by 35-45%.

(* These numbers were obtained using a backport of this patchset to the
kernel version used in the fb production. But similar numbers can be
obtained on a vanilla kernel. On my personal desktop with an 8-core CPU
and 16 GB of RAM running Fedora 31, the new slab controller saves
~45-50% of slab memory, measured just after loading of the system.)

Additionally, it should lead to lower memory fragmentation, simply
because of the smaller number of non-movable pages, and also because
there is no longer a need to move all slab objects to a new set of
pages when a workload is restarted in a new memory cgroup.
The patchset consists of several blocks:
  patches (1)-(6) clean up the existing kmem accounting API,
  patches (7)-(13) prepare vmstat to count individual slab objects,
  patches (14)-(21) implement the main idea of the patchset,
  patches (22)-(25) are follow-up clean-ups of the memcg/slab code,
  patches (26)-(27) implement a drgn-based replacement for the
    per-memcg slabinfo,
  patch (28) adds kselftests covering kernel memory accounting
    functionality.

* https://github.com/osandov/drgn

v2:
  1) implemented the re-layering and renaming suggested by Johannes,
     added his patch to the set. Thanks!
  2) fixed the issue discovered by Bharata B Rao. Thanks!
  3) added the kmem API clean-up part
  4) added the slab/memcg follow-up clean-up part
  5) fixed a couple of issues discovered by internal testing on the
     FB fleet
  6) added kselftests
  7) included metadata into the charge calculation
  8) refreshed commit logs, regrouped patches, rebased onto the mm
     tree, etc.

v1:
  1) fixed a bug in zoneinfo_show_print()
  2) added some comments to the subpage charging API, a minor fix
  3) separated memory.kmem.slabinfo deprecation into a separate patch,
     provided a drgn-based replacement
  4) rebased on top of the current mm tree

RFC: https://lwn.net/Articles/798605/

Johannes Weiner (1):
  mm: memcontrol: decouple reference counting from page accounting

Roman Gushchin (27):
  mm: kmem: cleanup (__)memcg_kmem_charge_memcg() arguments
  mm: kmem: cleanup memcg_kmem_uncharge_memcg() arguments
  mm: kmem: rename memcg_kmem_(un)charge() into
    memcg_kmem_(un)charge_page()
  mm: kmem: switch to nr_pages in (__)memcg_kmem_charge_memcg()
  mm: memcg/slab: cache page number in memcg_(un)charge_slab()
  mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to
    __memcg_kmem_(un)charge()
  mm: memcg/slab: introduce mem_cgroup_from_obj()
  mm: fork: fix kernel_stack memcg stats for various stack
    implementations
  mm: memcg/slab: rename __mod_lruvec_slab_state() into
    __mod_lruvec_obj_state()
  mm: memcg: introduce mod_lruvec_memcg_state()
  mm: slub: implement SLUB version of obj_to_index()
  mm: vmstat: use s32 for vm_node_stat_diff in struct per_cpu_nodestat
  mm: vmstat: convert slab vmstat counter to bytes
  mm: memcg/slab: obj_cgroup API
  mm: memcg/slab: allocate obj_cgroups for non-root slab pages
  mm: memcg/slab: save obj_cgroup for non-root slab objects
  mm: memcg/slab: charge individual slab objects instead of pages
  mm: memcg/slab: deprecate memory.kmem.slabinfo
  mm: memcg/slab: move memcg_kmem_bypass() to memcontrol.h
  mm: memcg/slab: use a single set of kmem_caches for all memory
    cgroups
  mm: memcg/slab: simplify memcg cache creation
  mm: memcg/slab: deprecate memcg_kmem_get_cache()
  mm: memcg/slab: deprecate slab_root_caches
  mm: memcg/slab: remove redundant check in
    memcg_accumulate_slabinfo()
  tools/cgroup: add slabinfo.py tool
  tools/cgroup: make slabinfo.py compatible with new slab controller
  kselftests: cgroup: add kernel memory accounting tests

 drivers/base/node.c                        |  14 +-
 fs/pipe.c                                  |   2 +-
 fs/proc/meminfo.c                          |   4 +-
 include/linux/memcontrol.h                 | 147 ++++-
 include/linux/mm.h                         |  25 +-
 include/linux/mm_types.h                   |   5 +-
 include/linux/mmzone.h                     |  12 +-
 include/linux/slab.h                       |   5 +-
 include/linux/slub_def.h                   |   9 +
 include/linux/vmstat.h                     |   8 +
 kernel/fork.c                              |  13 +-
 kernel/power/snapshot.c                    |   2 +-
 mm/list_lru.c                              |  12 +-
 mm/memcontrol.c                            | 638 +++++++++++++--------
 mm/oom_kill.c                              |   2 +-
 mm/page_alloc.c                            |  12 +-
 mm/slab.c                                  |  36 +-
 mm/slab.h                                  | 346 +++++------
 mm/slab_common.c                           | 513 ++---
 mm/slob.c                                  |  12 +-
 mm/slub.c                                  |  62 +-
 mm/vmscan.c                                |   3 +-
 mm/vmstat.c                                |  37 +-
 mm/workingset.c                            |   6 +-
 tools/cgroup/slabinfo.py                   | 220 +++++++
 tools/testing/selftests/cgroup/.gitignore  |   1 +
 tools/testing/selftests/cgroup/Makefile    |   2 +
 tools/testing/selftests/cgroup/test_kmem.c | 380 ++++++++++++
 28 files changed, 1505 insertions(+), 1023 deletions(-)
 create mode 100755 tools/cgroup/slabinfo.py
 create mode 100644 tools/testing/selftests/cgroup/test_kmem.c