From patchwork Mon Aug 19 02:30:58 2024
From: Usama Arif <usamaarif642@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: hannes@cmpxchg.org, riel@surriel.com, shakeel.butt@linux.dev,
    roman.gushchin@linux.dev, yuzhao@google.com, david@redhat.com,
    baohua@kernel.org, ryan.roberts@arm.com, rppt@kernel.org,
    willy@infradead.org, cerasuolodomenico@gmail.com, ryncsn@gmail.com,
    corbet@lwn.net, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    kernel-team@meta.com, Usama Arif <usamaarif642@gmail.com>
Subject: [PATCH v4 5/6] mm: split underused THPs
Date: Mon, 19 Aug 2024 03:30:58 +0100
Message-ID: <20240819023145.2415299-6-usamaarif642@gmail.com>
X-Mailer: git-send-email 2.43.5
In-Reply-To: <20240819023145.2415299-1-usamaarif642@gmail.com>
References: <20240819023145.2415299-1-usamaarif642@gmail.com>

This is an attempt to mitigate the issue of running out of memory when
THP is always enabled. At runtime, whenever a THP is faulted in
(__do_huge_pmd_anonymous_page) or collapsed by khugepaged
(collapse_huge_page), the THP is added to _deferred_list. Whenever
memory reclaim happens in Linux, the kernel runs the deferred_split
shrinker, which goes through the _deferred_list.

If the folio was partially mapped, the shrinker attempts to split it.
If the folio is not partially mapped, the shrinker checks whether the
THP is underused, i.e. how many of the base 4K pages of the entire THP
are zero-filled. If this number goes above a certain threshold (decided
by /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none), the
shrinker will attempt to split that THP. Then at remap time, the pages
that were zero-filled are mapped to the shared zeropage, hence saving
memory.

Suggested-by: Rik van Riel <riel@surriel.com>
Co-authored-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
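(Illustrative note, not part of the patch: the split decision described
above can be sketched as standalone C. HPAGE_PMD_NR = 512 assumes
x86-64 with 4K base pages, and should_split() is a hypothetical helper
name mirroring the thp_underused() check added below.)

  #include <stdbool.h>

  #define HPAGE_PMD_NR 512	/* base 4K pages per 2M THP (x86-64) */

  /*
   * max_ptes_none mirrors the tunable at
   * /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none.
   */
  static bool should_split(int num_zero_pages, int max_ptes_none)
  {
  	/*
  	 * At the default of HPAGE_PMD_NR - 1 (511), the shrinker never
  	 * splits on underuse, matching the early return in thp_underused().
  	 */
  	if (max_ptes_none == HPAGE_PMD_NR - 1)
  		return false;
  	return num_zero_pages > max_ptes_none;
  }

For example, with max_ptes_none = 409 (roughly 80% of the 512 base
pages), a THP with 410 or more zero-filled base pages is considered
underused and queued for a split attempt.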
 Documentation/admin-guide/mm/transhuge.rst |  6 +++
 include/linux/khugepaged.h                 |  1 +
 include/linux/vm_event_item.h              |  1 +
 mm/huge_memory.c                           | 60 +++++++++++++++++++++-
 mm/khugepaged.c                            |  3 +-
 mm/vmstat.c                                |  1 +
 6 files changed, 69 insertions(+), 3 deletions(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 058485daf186..40741b892aff 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -447,6 +447,12 @@ thp_deferred_split_page
 	splitting it would free up some memory. Pages on split queue are
 	going to be split under memory pressure.
 
+thp_underused_split_page
+	is incremented when a huge page on the split queue was split
+	because it was underused. A THP is underused if the number of
+	zero pages in the THP is above a certain threshold
+	(/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none).
+
 thp_split_pmd
 	is incremented every time a PMD split into table of PTEs.
 	This can happen, for instance, when application calls mprotect() or
diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
index f68865e19b0b..30baae91b225 100644
--- a/include/linux/khugepaged.h
+++ b/include/linux/khugepaged.h
@@ -4,6 +4,7 @@
 
 #include <linux/sched/coredump.h> /* MMF_VM_HUGEPAGE */
 
+extern unsigned int khugepaged_max_ptes_none __read_mostly;
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 extern struct attribute_group khugepaged_attr_group;
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index aae5c7c5cfb4..aed952d04132 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -105,6 +105,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		THP_SPLIT_PAGE,
 		THP_SPLIT_PAGE_FAILED,
 		THP_DEFERRED_SPLIT_PAGE,
+		THP_UNDERUSED_SPLIT_PAGE,
 		THP_SPLIT_PMD,
 		THP_SCAN_EXCEED_NONE_PTE,
 		THP_SCAN_EXCEED_SWAP_PTE,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 70ee49dfeaad..f5363cf900f9 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1087,6 +1087,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 		update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
 		add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
 		mm_inc_nr_ptes(vma->vm_mm);
+		deferred_split_folio(folio, false);
 		spin_unlock(vmf->ptl);
 		count_vm_event(THP_FAULT_ALLOC);
 		count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
@@ -3526,6 +3527,39 @@ static unsigned long deferred_split_count(struct shrinker *shrink,
 	return READ_ONCE(ds_queue->split_queue_len);
 }
 
+static bool thp_underused(struct folio *folio)
+{
+	int num_zero_pages = 0, num_filled_pages = 0;
+	void *kaddr;
+	int i;
+
+	if (khugepaged_max_ptes_none == HPAGE_PMD_NR - 1)
+		return false;
+
+	for (i = 0; i < folio_nr_pages(folio); i++) {
+		kaddr = kmap_local_folio(folio, i * PAGE_SIZE);
+		if (!memchr_inv(kaddr, 0, PAGE_SIZE)) {
+			num_zero_pages++;
+			if (num_zero_pages > khugepaged_max_ptes_none) {
+				kunmap_local(kaddr);
+				return true;
+			}
+		} else {
+			/*
+			 * Another path for early exit once the number
+			 * of non-zero filled pages exceeds threshold.
+			 */
+			num_filled_pages++;
+			if (num_filled_pages >= HPAGE_PMD_NR - khugepaged_max_ptes_none) {
+				kunmap_local(kaddr);
+				return false;
+			}
+		}
+		kunmap_local(kaddr);
+	}
+	return false;
+}
+
 static unsigned long deferred_split_scan(struct shrinker *shrink,
 					 struct shrink_control *sc)
 {
@@ -3559,13 +3593,35 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
 	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
 
 	list_for_each_entry_safe(folio, next, &list, _deferred_list) {
+		bool did_split = false;
+		bool underused = false;
+
+		if (!folio_test_partially_mapped(folio)) {
+			underused = thp_underused(folio);
+			if (!underused)
+				goto next;
+		}
 		if (!folio_trylock(folio))
 			goto next;
-		/* split_huge_page() removes page from list on success */
-		if (!split_folio(folio))
+		if (!split_folio(folio)) {
+			did_split = true;
+			if (underused)
+				count_vm_event(THP_UNDERUSED_SPLIT_PAGE);
 			split++;
+		}
 		folio_unlock(folio);
 next:
+		/*
+		 * split_folio() removes folio from list on success.
+		 * Only add back to the queue if folio is partially mapped.
+		 * If thp_underused returns false, or if split_folio fails
+		 * in the case it was underused, then consider it used and
+		 * don't add it back to split_queue.
+		 */
+		if (!did_split && !folio_test_partially_mapped(folio)) {
+			list_del_init(&folio->_deferred_list);
+			ds_queue->split_queue_len--;
+		}
 		folio_put(folio);
 	}
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6c42062478c1..2e138b22d939 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -85,7 +85,7 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
  *
  * Note that these are only respected if collapse was initiated by khugepaged.
  */
-static unsigned int khugepaged_max_ptes_none __read_mostly;
+unsigned int khugepaged_max_ptes_none __read_mostly;
 static unsigned int khugepaged_max_ptes_swap __read_mostly;
 static unsigned int khugepaged_max_ptes_shared __read_mostly;
 
@@ -1235,6 +1235,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	pgtable_trans_huge_deposit(mm, pmd, pgtable);
 	set_pmd_at(mm, address, pmd, _pmd);
 	update_mmu_cache_pmd(vma, address, pmd);
+	deferred_split_folio(folio, false);
 	spin_unlock(pmd_ptl);
 
 	folio = NULL;
diff --git a/mm/vmstat.c b/mm/vmstat.c
index c3a402ea91f0..6060bb7bbb44 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1384,6 +1384,7 @@ const char * const vmstat_text[] = {
 	"thp_split_page",
 	"thp_split_page_failed",
 	"thp_deferred_split_page",
+	"thp_underused_split_page",
 	"thp_split_pmd",
 	"thp_scan_exceed_none_pte",
 	"thp_scan_exceed_swap_pte",
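
(Illustrative note, not part of the patch: once this series is applied,
the new event can be observed from userspace via /proc/vmstat. A
minimal sketch, assuming only the counter name added in the mm/vmstat.c
hunk above:)

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
  	char line[256];
  	FILE *f = fopen("/proc/vmstat", "r");

  	if (!f) {
  		perror("/proc/vmstat");
  		return 1;
  	}
  	/* vmstat entries are printed one per line as "name value" */
  	while (fgets(line, sizeof(line), f)) {
  		if (!strncmp(line, "thp_underused_split_page", 24))
  			fputs(line, stdout);
  	}
  	fclose(f);
  	return 0;
  }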