From patchwork Fri Aug 30 10:03:34 2024
X-Patchwork-Submitter: Usama Arif <usamaarif642@gmail.com>
X-Patchwork-Id: 13784866
From: Usama Arif <usamaarif642@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: hannes@cmpxchg.org, riel@surriel.com, shakeel.butt@linux.dev,
    roman.gushchin@linux.dev, yuzhao@google.com, david@redhat.com,
    npache@redhat.com, baohua@kernel.org, ryan.roberts@arm.com,
    rppt@kernel.org, willy@infradead.org, cerasuolodomenico@gmail.com,
    ryncsn@gmail.com, corbet@lwn.net, linux-kernel@vger.kernel.org,
    linux-doc@vger.kernel.org, kernel-team@meta.com,
    Usama Arif <usamaarif642@gmail.com>
Subject: [PATCH v5 0/6] mm: split underused THPs
Date: Fri, 30 Aug 2024 11:03:34 +0100
Message-ID: <20240830100438.3623486-1-usamaarif642@gmail.com>
X-Mailer: git-send-email 2.43.5
The current upstream default policy for THP is "always". However, Meta
uses "madvise" in production, as the THP=always policy vastly
overprovisions THPs in sparsely accessed memory areas, resulting in
excessive memory pressure and premature OOM killing.

Using madvise + relying on khugepaged has certain drawbacks over
THP=always. Using madvise hints means THPs aren't "transparent" and
require userspace changes. Waiting for khugepaged to scan memory and
collapse pages into THPs can be slow and unpredictable in terms of
performance (i.e. you don't know when the collapse will happen), while
production environments require predictable performance. If there is
enough memory available, it is better for both performance and
predictability to get a THP at fault time, i.e. THP=always, rather than
wait for khugepaged to collapse it, and to deal with sparsely populated
THPs when the system is running out of memory.

This patch series is an attempt to mitigate the issue of running out of
memory when THP is always enabled. At runtime, whenever a THP is
faulted in or collapsed by khugepaged, it is added to a list. Whenever
memory reclaim happens, the kernel runs the deferred_split shrinker,
which goes through the list and checks whether each THP is underused,
i.e. how many of the base 4K pages of the entire THP are zero-filled.
If this number is above a certain threshold, the shrinker will attempt
to split that THP. At remap time, the pages that were zero-filled are
then mapped to the shared zeropage, saving memory. This approach avoids
the downside of wasting memory in areas where THPs are sparsely filled
when THP is always enabled, while still providing the upsides of THPs,
like reduced TLB misses, without having to use madvise.

Meta production workloads that were CPU bound (>99% CPU utilization)
were tested with the THP shrinker. The results after 2 hours are as
follows:

                        | THP=madvise | THP=always        | THP=always
                        |             | + shrinker series | + shrinker series
                        |             |                   | + max_ptes_none=409
-----------------------------------------------------------------------------
Performance improvement |      -      |       +1.8%       |       +1.7%
(over THP=madvise)      |             |                   |
-----------------------------------------------------------------------------
Memory usage            |    54.6G    |   58.8G (+7.7%)   |   55.9G (+2.4%)
-----------------------------------------------------------------------------

max_ptes_none=409 means that any THP that has more than 409 out of 512
(80%) zero-filled pages will be split.
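As a rough illustration of the underused check described above, the
sketch below models a 2M THP as 512 4K subpages, counts how many are
entirely zero-filled, and flags the THP as a split candidate once the
count exceeds max_ptes_none. This is only a userspace analogy under
assumed names (thp_is_underused, subpage_is_zero), not the kernel code
added by this series:

/*
 * Hedged illustration only: a userspace model of the "underused THP"
 * check, not the code added by this series. The names and the
 * 512 x 4K layout are assumptions made for the example.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define SUBPAGE_SIZE     4096
#define SUBPAGES_PER_THP 512		/* 2M THP = 512 * 4K */

/* Return true if every byte of the 4K subpage is zero. */
static bool subpage_is_zero(const unsigned char *subpage)
{
	static const unsigned char zeros[SUBPAGE_SIZE];

	return memcmp(subpage, zeros, SUBPAGE_SIZE) == 0;
}

/*
 * Treat a THP as underused when more than max_ptes_none of its 512
 * subpages are zero-filled; such a THP is a candidate for splitting,
 * after which the zero subpages can be backed by the shared zeropage.
 */
static bool thp_is_underused(const unsigned char *thp,
			     unsigned int max_ptes_none)
{
	unsigned int nr_zero = 0;

	for (size_t i = 0; i < SUBPAGES_PER_THP; i++) {
		if (subpage_is_zero(thp + i * SUBPAGE_SIZE))
			nr_zero++;
	}
	return nr_zero > max_ptes_none;
}

int main(void)
{
	static unsigned char thp[SUBPAGES_PER_THP * SUBPAGE_SIZE];

	/* Touch only 40 of the 512 subpages, like the stress test below. */
	for (size_t i = 0; i < 40; i++)
		thp[i * SUBPAGE_SIZE] = 1;

	printf("underused at max_ptes_none=409: %d\n",
	       thp_is_underused(thp, 409));
	printf("underused at max_ptes_none=511: %d\n",
	       thp_is_underused(thp, 511));
	return 0;
}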
To test out the patches, the below commands, without the shrinker, will
invoke the OOM killer immediately and kill stress, but will not fail
with the shrinker:

echo 450 > /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none
mkdir /sys/fs/cgroup/test
echo $$ > /sys/fs/cgroup/test/cgroup.procs
echo 20M > /sys/fs/cgroup/test/memory.max
echo 0 > /sys/fs/cgroup/test/memory.swap.max
# allocate twice memory.max for each stress worker and touch 40/512 of
# each THP, i.e. vm-stride 50K.
# With the shrinker, max_ptes_none of 470 and below won't invoke the
# OOM killer.
# Without the shrinker, the OOM killer is invoked immediately,
# irrespective of the max_ptes_none value, and kills stress.
stress --vm 1 --vm-bytes 40M --vm-stride 50K

---
v4 -> v5:
- rebase on top of latest mm-unstable. This includes Barry's anon mTHP
  accounting series. All merge conflicts should be resolved with the
  patches on the mailing list. This also means that everywhere the
  partially_mapped flag is set, it is first tested (Barry).
- uint64_t to unsigned long for the rss_anon function in the selftest.
- convert PageMlocked to folio_test_mlocked in
  try_to_map_unused_to_zeropage.

v3 -> v4:
- do not clear the partially_mapped flag on hugeTLB folios (Yu Zhao).
- fix the condition for calling deferred_folio_split in the partially
  mapped case and count partially mapped vm events (Barry Song).
- use non-atomic versions of the set/clear partially_mapped flags
  (David Hildenbrand).
- use PG_partially_mapped = PG_reclaim (Matthew Wilcox).
- delete the folio from the lru list and folio_batch_add "new_folio"
  instead of folio in __split_huge_page (Kairui Song).
- fix a deadlock in deferred_split_scan by not doing folio_put while
  holding split_queue_lock (Hugh Dickins).
- rename underutilized to underused and thp_low_util_shrinker to
  shrink_underused (Hugh Dickins).

v2 -> v3:
- Use my_zero_pfn instead of page_to_pfn(ZERO_PAGE(..)) (Johannes).
- Use a flags argument instead of bools in remove_migration_ptes
  (Johannes).
- Use a new flag in folio->_flags_1 instead of folio->_partially_mapped
  (David Hildenbrand).
- Split out the last patch of v2 into 3: one for introducing the flag,
  one for splitting underutilized THPs on _deferred_list, and one for
  adding a sysfs entry to disable splitting (David Hildenbrand).

v1 -> v2:
- Turn page checks and operations into folio versions in
  __split_huge_page. This means patches 1 and 2 from v1 are no longer
  needed (David Hildenbrand).
- Map to the shared zeropage in all cases if the base page is
  zero-filled. The uffd selftest was removed (David Hildenbrand).
- rename 'dirty' to 'contains_data' in try_to_map_unused_to_zeropage
  (Rik van Riel).
- Use unsigned long instead of uint64_t (kernel test robot).

Alexander Zhu (1):
  mm: selftest to verify zero-filled pages are mapped to zeropage

Usama Arif (3):
  mm: Introduce a pageflag for partially mapped folios
  mm: split underused THPs
  mm: add sysfs entry to disable splitting underused THPs

Yu Zhao (2):
  mm: free zapped tail pages when splitting isolated thp
  mm: remap unused subpages to shared zeropage when splitting isolated thp

 Documentation/admin-guide/mm/transhuge.rst |  16 ++
 include/linux/huge_mm.h                    |   4 +-
 include/linux/khugepaged.h                 |   1 +
 include/linux/page-flags.h                 |  13 +-
 include/linux/rmap.h                       |   7 +-
 include/linux/vm_event_item.h              |   1 +
 mm/huge_memory.c                           | 163 ++++++++++++++++--
 mm/khugepaged.c                            |   3 +-
 mm/memcontrol.c                            |   3 +-
 mm/migrate.c                               |  75 ++++++--
 mm/migrate_device.c                        |   4 +-
 mm/page_alloc.c                            |   5 +-
 mm/rmap.c                                  |   5 +-
 mm/vmscan.c                                |   3 +-
 mm/vmstat.c                                |   1 +
 .../selftests/mm/split_huge_page_test.c    |  71 ++++++++
 tools/testing/selftests/mm/vm_util.c       |  22 +++
 tools/testing/selftests/mm/vm_util.h       |   1 +
 18 files changed, 358 insertions(+), 40 deletions(-)
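For context, the sketch below is a rough userspace analogy (not the
kernel code in this series and not the selftest listed above) of the
memory saving that backing zero-filled subpages with the shared
zeropage provides: it populates a PMD-sized region, leaves most 4K
subpages zero, and returns those subpages to the kernel with
madvise(MADV_DONTNEED) so later reads are served from the shared
zeropage. The 40-subpage touch pattern mirrors the stress test above;
everything else is assumed for illustration.

/*
 * Hedged userspace analogy of the memory saving this series aims for.
 * This is NOT the mechanism added by the patches, only a demonstration
 * of the effect of giving zero-filled 4K subpages back to the kernel.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE (2UL * 1024 * 1024)	/* one PMD-sized THP */
#define SUBPAGE     4096UL

/* Read resident set size (in kB) from /proc/self/statm. */
static long rss_kb(void)
{
	long size = 0, resident = 0;
	FILE *f = fopen("/proc/self/statm", "r");

	if (!f)
		return -1;
	if (fscanf(f, "%ld %ld", &size, &resident) != 2)
		resident = -1;
	fclose(f);
	return resident < 0 ? -1 : resident * (sysconf(_SC_PAGESIZE) / 1024);
}

int main(void)
{
	static const unsigned char zeros[SUBPAGE];
	unsigned char *region;
	size_t i;

	region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
		      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (region == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* Ask for THP backing; harmless if THP is unavailable. */
	madvise(region, REGION_SIZE, MADV_HUGEPAGE);

	/* Populate the whole region, then leave only ~40/512 subpages nonzero. */
	memset(region, 0, REGION_SIZE);
	for (i = 0; i < 40; i++)
		region[i * (REGION_SIZE / 40)] = 1;

	printf("RSS after populating:              %ld kB\n", rss_kb());

	/* Give back every subpage that is still entirely zero. */
	for (i = 0; i < REGION_SIZE / SUBPAGE; i++) {
		unsigned char *sub = region + i * SUBPAGE;

		if (memcmp(sub, zeros, SUBPAGE) == 0)
			madvise(sub, SUBPAGE, MADV_DONTNEED);
	}
	printf("RSS after dropping zero subpages:  %ld kB\n", rss_kb());

	munmap(region, REGION_SIZE);
	return 0;
}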