From patchwork Mon Mar 13 11:28:11 2023
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13172280
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, tkhai@ya.ru, vbabka@suse.cz,
	christian.koenig@amd.com, hannes@cmpxchg.org, shakeelb@google.com,
	mhocko@kernel.org, roman.gushchin@linux.dev, muchun.song@linux.dev,
	david@redhat.com, shy828301@gmail.com
Cc: sultan@kerneltoast.com, dave@stgolabs.net,
	penguin-kernel@I-love.SAKURA.ne.jp, paulmck@kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH v5 0/8] make slab shrink lockless
Date: Mon, 13 Mar 2023 19:28:11 +0800
Message-Id: <20230313112819.38938-1-zhengqi.arch@bytedance.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
MIME-Version: 1.0
Hi all,

This patch series aims to make slab shrink lockless.

1. Background
=============

On our servers, we often find the following system cpu hotspots:

  52.22% [kernel]        [k] down_read_trylock
  19.60% [kernel]        [k] up_read
   8.86% [kernel]        [k] shrink_slab
   2.44% [kernel]        [k] idr_find
   1.25% [kernel]        [k] count_shadow_nodes
   1.18% [kernel]        [k] shrink_lruvec
   0.71% [kernel]        [k] mem_cgroup_iter
   0.71% [kernel]        [k] shrink_node
   0.55% [kernel]        [k] find_next_bit

And we used bpftrace to capture its calltrace as follows:

@[
    down_read_trylock+1
    shrink_slab+128
    shrink_node+371
    do_try_to_free_pages+232
    try_to_free_pages+243
    __alloc_pages_slowpath+771
    __alloc_pages_nodemask+702
    pagecache_get_page+255
    filemap_fault+1361
    ext4_filemap_fault+44
    __do_fault+76
    handle_mm_fault+3543
    do_user_addr_fault+442
    do_page_fault+48
    page_fault+62
]: 1161690
@[
    down_read_trylock+1
    shrink_slab+128
    shrink_node+371
    balance_pgdat+690
    kswapd+389
    kthread+246
    ret_from_fork+31
]: 8424884
@[
    down_read_trylock+1
    shrink_slab+128
    shrink_node+371
    do_try_to_free_pages+232
    try_to_free_pages+243
    __alloc_pages_slowpath+771
    __alloc_pages_nodemask+702
    __do_page_cache_readahead+244
    filemap_fault+1674
    ext4_filemap_fault+44
    __do_fault+76
    handle_mm_fault+3543
    do_user_addr_fault+442
    do_page_fault+48
    page_fault+62
]: 20917631

We can see that down_read_trylock() of shrinker_rwsem is being called with
high frequency at that time. Because of the poor multicore scalability of
atomic operations, this can lead to a significant drop in IPC
(instructions per cycle).

Moreover, shrinker_rwsem is a global read-write lock in the shrinker
subsystem, which protects most operations such as slab shrink and the
registration and unregistration of shrinkers. This can easily cause
problems in the following cases.

1) When memory pressure is high and many filesystems are mounted or
   unmounted at the same time, slab shrink will be affected
   (down_read_trylock() fails). Such as the real workload mentioned by
   Kirill Tkhai:

```
One of the real workloads from my experience is start of an overcommitted
node containing many starting containers after node crash (or many
resuming containers after reboot for kernel update). In these cases memory
pressure is huge, and the node goes round in long reclaim.
```

2) If a shrinker is blocked (such as the case mentioned in [1]) and a
   writer comes in (such as mounting a fs), then this writer will be
   blocked and cause all subsequent shrinker-related operations to be
   blocked.

[1]. https://lore.kernel.org/lkml/20191129214541.3110-1-ptikhomirov@virtuozzo.com/

All the above cases can be solved by replacing the shrinker_rwsem
trylocks with SRCU.
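To make that direction concrete, here is a minimal sketch of how the read
side of the global shrink_slab() path looks once the trylock is replaced
with an SRCU read-side critical section. This is illustrative only, not
the literal diff in this series: the srcu_struct name shrinker_srcu is
assumed, and the memcg path, nr_deferred handling and other corner cases
are omitted (shrinker_list and do_shrink_slab() are the existing ones in
mm/vmscan.c):

```
/*
 * Minimal sketch, not the literal patch: global shrink_slab() read side
 * with the shrinker_rwsem trylock replaced by SRCU.
 */
#include <linux/shrinker.h>
#include <linux/srcu.h>
#include <linux/rculist.h>
#include <linux/memcontrol.h>

DEFINE_SRCU(shrinker_srcu);		/* assumed name for the srcu_struct */

static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
				 struct mem_cgroup *memcg, int priority)
{
	unsigned long freed = 0;
	struct shrinker *shrinker;
	int srcu_idx;

	/*
	 * Before: if (!down_read_trylock(&shrinker_rwsem)) goto out;
	 * After:  an SRCU read-side critical section, which never fails
	 *         and avoids bouncing the rwsem cache line across CPUs.
	 */
	srcu_idx = srcu_read_lock(&shrinker_srcu);

	list_for_each_entry_srcu(shrinker, &shrinker_list, list,
				 srcu_read_lock_held(&shrinker_srcu)) {
		struct shrink_control sc = {
			.gfp_mask = gfp_mask,
			.nid = nid,
			.memcg = memcg,
		};

		freed += do_shrink_slab(&sc, shrinker, priority);
	}

	srcu_read_unlock(&shrinker_srcu, srcu_idx);
	return freed;
}
```

On the write side, registration and unregistration would then serialize on
a mutex and call synchronize_srcu(&shrinker_srcu) before freeing anything,
so readers never see a half-removed shrinker.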
2. Survey
=========

Before doing the code implementation, I found that there had been many
similar submissions in the community:

a. Davidlohr Bueso submitted a patch in 2015.
   Subject: [PATCH -next v2] mm: srcu-ify shrinkers
   Link: https://lore.kernel.org/all/1437080113.3596.2.camel@stgolabs.net/
   Result: It was eventually merged into the linux-next branch, but failed
           on arm allnoconfig (without CONFIG_SRCU).

b. Tetsuo Handa submitted a patchset in 2017.
   Subject: [PATCH 1/2] mm,vmscan: Kill global shrinker lock.
   Link: https://lore.kernel.org/lkml/1510609063-3327-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp/
   Result: The current simple approach (break when rwsem_is_contended())
           was chosen in the end. Christoph Hellwig suggested using SRCU,
           but SRCU was not unconditionally enabled at the time.

c. Kirill Tkhai submitted a patchset in 2018.
   Subject: [PATCH RFC 00/10] Introduce lockless shrink_slab()
   Link: https://lore.kernel.org/lkml/153365347929.19074.12509495712735843805.stgit@localhost.localdomain/
   Result: At that time, SRCU was not unconditionally enabled, and there
           were some objections to enabling it. Later, Kirill's focus
           moved to other things and the patchset was no longer updated.

d. Sultan Alsawaf submitted a patch in 2021.
   Subject: [PATCH] mm: vmscan: Replace shrinker_rwsem trylocks with SRCU protection
   Link: https://lore.kernel.org/lkml/20210927074823.5825-1-sultan@kerneltoast.com/
   Result: Rejected because SRCU was not unconditionally enabled.

Almost all of these historical submissions were abandoned because SRCU was
not unconditionally enabled. But SRCU was made unconditionally enabled by
Paul E. McKenney in 2023 [2], so it is now time to replace the
shrinker_rwsem trylocks with SRCU.

[2] https://lore.kernel.org/lkml/20230105003759.GA1769545@paulmck-ThinkPad-P17-Gen-1/

3. Reproduction and testing
===========================

We can reproduce the down_read_trylock() hotspot with the following
script:

```
#!/bin/bash

DIR="/root/shrinker/memcg/mnt"

do_create()
{
	mkdir -p /sys/fs/cgroup/memory/test
	mkdir -p /sys/fs/cgroup/perf_event/test
	echo 4G > /sys/fs/cgroup/memory/test/memory.limit_in_bytes
	for i in `seq 0 $1`;
	do
		mkdir -p /sys/fs/cgroup/memory/test/$i;
		echo $$ > /sys/fs/cgroup/memory/test/$i/cgroup.procs;
		echo $$ > /sys/fs/cgroup/perf_event/test/cgroup.procs;
		mkdir -p $DIR/$i;
	done
}

do_mount()
{
	for i in `seq $1 $2`;
	do
		mount -t tmpfs $i $DIR/$i;
	done
}

do_touch()
{
	for i in `seq $1 $2`;
	do
		echo $$ > /sys/fs/cgroup/memory/test/$i/cgroup.procs;
		echo $$ > /sys/fs/cgroup/perf_event/test/cgroup.procs;
		dd if=/dev/zero of=$DIR/$i/file$i bs=1M count=1 &
	done
}

case "$1" in
  touch)
	do_touch $2 $3
	;;
  test)
	do_create 4000
	do_mount 0 4000
	do_touch 0 3000
	;;
  *)
	exit 1
	;;
esac
```

Save the above script, then run the test and touch commands.
Then we can use the following perf command to view hotspots:

perf top -U -F 999

1) Before applying this patchset:

  32.31%  [kernel]          [k] down_read_trylock
  19.40%  [kernel]          [k] pv_native_safe_halt
  16.24%  [kernel]          [k] up_read
  15.70%  [kernel]          [k] shrink_slab
   4.69%  [kernel]          [k] _find_next_bit
   2.62%  [kernel]          [k] shrink_node
   1.78%  [kernel]          [k] shrink_lruvec
   0.76%  [kernel]          [k] do_shrink_slab

2) After applying this patchset:

  27.83%  [kernel]          [k] _find_next_bit
  16.97%  [kernel]          [k] shrink_slab
  15.82%  [kernel]          [k] pv_native_safe_halt
   9.58%  [kernel]          [k] shrink_node
   8.31%  [kernel]          [k] shrink_lruvec
   5.64%  [kernel]          [k] do_shrink_slab
   3.88%  [kernel]          [k] mem_cgroup_iter

At the same time, we use the following perf command to capture IPC
information:

perf stat -e cycles,instructions -G test -a --repeat 5 -- sleep 10

1) Before applying this patchset:

 Performance counter stats for 'system wide' (5 runs):

      454187219766      cycles                 test                  ( +-  1.84% )
       78896433101      instructions           test #    0.17  insn per cycle  ( +-  0.44% )

        10.0020430 +- 0.0000366 seconds time elapsed  ( +-  0.00% )

2) After applying this patchset:

 Performance counter stats for 'system wide' (5 runs):

      841954709443      cycles                 test                  ( +- 15.80% )  (98.69%)
      527258677936      instructions           test #    0.63  insn per cycle  ( +- 15.11% )  (98.68%)

          10.01064 +- 0.00831 seconds time elapsed  ( +-  0.08% )

We can see that IPC drops severely when down_read_trylock() is called at
high frequency. After switching to SRCU, IPC returns to a normal level.

This series is based on next-20230306.

Comments and suggestions are welcome.

Thanks,
Qi.

Changelog in v4 -> v5:
 - clean up [PATCH v4 1/8] (per Kirill)
 - include linux/srcu.h in [PATCH v4 2/8] and [PATCH v4 5/8] (per Vlastimil)
 - fix typo in the commit message of [PATCH v4 4/8] (per Vlastimil)
 - add more explanation to the commit message of [PATCH v4 7/8]
 - collect Acked-bys

Changelog in v3 -> v4:
 - fix bug in [PATCH v3 1/7]
 - revise commit messages
 - rebase onto next-20230306

Changelog in v2 -> v3:
 - fix bug in [PATCH v2 1/7] (per Kirill)
 - add Kirill's patch, which restores a check similar to the
   rwsem_is_contended() check by adding shrinker_srcu_generation

Changelog in v1 -> v2:
 - add a map_nr_max field to shrinker_info (suggested by Kirill)
 - use shrinker_mutex in reparent_shrinker_deferred() (pointed out by Kirill)

Kirill Tkhai (1):
  mm: vmscan: add shrinker_srcu_generation

Qi Zheng (7):
  mm: vmscan: add a map_nr_max field to shrinker_info
  mm: vmscan: make global slab shrink lockless
  mm: vmscan: make memcg slab shrink lockless
  mm: shrinkers: make count and scan in shrinker debugfs lockless
  mm: vmscan: hold write lock to reparent shrinker nr_deferred
  mm: vmscan: remove shrinker_rwsem from synchronize_shrinkers()
  mm: shrinkers: convert shrinker_rwsem to mutex

 drivers/md/dm-cache-metadata.c |   2 +-
 drivers/md/dm-thin-metadata.c  |   2 +-
 fs/super.c                     |   2 +-
 include/linux/memcontrol.h     |   1 +
 mm/shrinker_debug.c            |  39 ++++----
 mm/vmscan.c                    | 160 ++++++++++++++++++---------------
 6 files changed, 107 insertions(+), 99 deletions(-)
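For reviewers who did not follow v3: the shrinker_srcu_generation patch
mentioned above restores an early exit similar in spirit to the old
rwsem_is_contended() check. A rough sketch of the idea follows; it is
illustrative only, not the exact code in Kirill's patch (in particular,
shrinker_info_srcu() is a placeholder helper name here, and the real patch
may restart the walk rather than simply break out of it):

```
/* Illustrative sketch of the shrinker_srcu_generation idea. */

static atomic_t shrinker_srcu_generation = ATOMIC_INIT(0);

void unregister_shrinker(struct shrinker *shrinker)
{
	/* ... unlink the shrinker under shrinker_mutex ... */

	/* Let in-flight SRCU readers notice the shrinker set changed. */
	atomic_inc(&shrinker_srcu_generation);
	synchronize_srcu(&shrinker_srcu);

	/* ... free per-node nr_deferred, etc. ... */
}

static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
				       struct mem_cgroup *memcg, int priority)
{
	struct shrinker_info *info;
	unsigned long freed = 0;
	int srcu_idx, generation, i;

	srcu_idx = srcu_read_lock(&shrinker_srcu);
	generation = atomic_read(&shrinker_srcu_generation);

	info = shrinker_info_srcu(memcg, nid);	/* placeholder helper name */
	if (unlikely(!info))
		goto unlock;

	for_each_set_bit(i, info->map, info->map_nr_max) {
		struct shrink_control sc = {
			.gfp_mask = gfp_mask,
			.nid = nid,
			.memcg = memcg,
		};
		struct shrinker *shrinker = idr_find(&shrinker_idr, i);

		if (shrinker)
			freed += do_shrink_slab(&sc, shrinker, priority);

		/*
		 * A shrinker was registered or unregistered while we were
		 * walking the map: stop early instead of touching
		 * potentially stale entries.
		 */
		if (atomic_read(&shrinker_srcu_generation) != generation)
			break;
	}

unlock:
	srcu_read_unlock(&shrinker_srcu, srcu_idx);
	return freed;
}
```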