From patchwork Wed Nov 15 13:27:25 2023
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand,
    Yu Zhao, Catalin Marinas, Anshuman Khandual, Yang Shi, "Huang, Ying",
    Zi Yan, Luis Chamberlain, Itaru Kitayama, "Kirill A. Shutemov",
    John Hubbard, David Rientjes, Vlastimil Babka, Hugh Dickins,
    Kefeng Wang
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v7 01/10] mm: Allow deferred splitting of arbitrary anon large folios
Date: Wed, 15 Nov 2023 13:27:25 +0000
Message-Id: <20231115132734.931023-2-ryan.roberts@arm.com>
In-Reply-To: <20231115132734.931023-1-ryan.roberts@arm.com>
References: <20231115132734.931023-1-ryan.roberts@arm.com>

In preparation for the introduction of anonymous small-sized THP, we
would like to be able to split them when they have unmapped subpages, in
order to free those unused pages under memory pressure. So remove the
artificial requirement that the large folio needed to be at least
PMD-sized.

Reviewed-by: Yu Zhao
Reviewed-by: Yin Fengwei
Reviewed-by: Matthew Wilcox (Oracle)
Reviewed-by: David Hildenbrand
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 mm/rmap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 7a27a2b41802..49e4d86a4f70 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1488,11 +1488,11 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
        __lruvec_stat_mod_folio(folio, idx, -nr);

        /*
-        * Queue anon THP for deferred split if at least one
+        * Queue anon large folio for deferred split if at least one
         * page of the folio is unmapped and at least one page
         * is still mapped.
         */
-       if (folio_test_pmd_mappable(folio) && folio_test_anon(folio))
+       if (folio_test_large(folio) && folio_test_anon(folio))
                if (!compound || nr < nr_pmdmapped)
                        deferred_split_folio(folio);
 }
From patchwork Wed Nov 15 13:27:26 2023
From: Ryan Roberts <ryan.roberts@arm.com>
Subject: [PATCH v7 02/10] mm: Non-pmd-mappable, large folios for folio_add_new_anon_rmap()
Date: Wed, 15 Nov 2023 13:27:26 +0000
Message-Id: <20231115132734.931023-3-ryan.roberts@arm.com>
In-Reply-To: <20231115132734.931023-1-ryan.roberts@arm.com>

In preparation for supporting anonymous small-sized THP, improve
folio_add_new_anon_rmap() to allow a non-pmd-mappable, large folio to be
passed to it. In this case, all contained pages are accounted using the
order-0 folio (or base page) scheme.

Reviewed-by: Yu Zhao
Reviewed-by: Yin Fengwei
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 mm/rmap.c | 28 ++++++++++++++++++++--------
 1 file changed, 20 insertions(+), 8 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 49e4d86a4f70..b086dc957b0c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1305,32 +1305,44 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
  * This means the inc-and-test can be bypassed.
  * The folio does not have to be locked.
  *
- * If the folio is large, it is accounted as a THP. As the folio
+ * If the folio is pmd-mappable, it is accounted as a THP. As the folio
  * is new, it's assumed to be mapped exclusively by a single process.
  */
 void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
                unsigned long address)
 {
-       int nr;
+       int nr = folio_nr_pages(folio);

-       VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
+       VM_BUG_ON_VMA(address < vma->vm_start ||
+                       address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
        __folio_set_swapbacked(folio);
+       __folio_set_anon(folio, vma, address, true);

-       if (likely(!folio_test_pmd_mappable(folio))) {
+       if (likely(!folio_test_large(folio))) {
                /* increment count (starts at -1) */
                atomic_set(&folio->_mapcount, 0);
-               nr = 1;
+               SetPageAnonExclusive(&folio->page);
+       } else if (!folio_test_pmd_mappable(folio)) {
+               int i;
+
+               for (i = 0; i < nr; i++) {
+                       struct page *page = folio_page(folio, i);
+
+                       /* increment count (starts at -1) */
+                       atomic_set(&page->_mapcount, 0);
+                       SetPageAnonExclusive(page);
+               }
+
+               atomic_set(&folio->_nr_pages_mapped, nr);
        } else {
                /* increment count (starts at -1) */
                atomic_set(&folio->_entire_mapcount, 0);
                atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
-               nr = folio_nr_pages(folio);
+               SetPageAnonExclusive(&folio->page);
                __lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
        }

        __lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
-       __folio_set_anon(folio, vma, address, true);
-       SetPageAnonExclusive(&folio->page);
 }

 /**
From patchwork Wed Nov 15 13:27:27 2023
From: Ryan Roberts <ryan.roberts@arm.com>
Subject: [PATCH v7 03/10] mm: thp: Introduce per-size thp sysfs interface
Date: Wed, 15 Nov 2023 13:27:27 +0000
Message-Id: <20231115132734.931023-4-ryan.roberts@arm.com>
In-Reply-To: <20231115132734.931023-1-ryan.roberts@arm.com>

In preparation for adding support for anonymous small-sized THP,
introduce new sysfs structure that will be used to control the new
behaviours. A new directory is added under transparent_hugepage for each
supported THP size, and contains an `enabled` file, which can be set to
"global" (to inherit the global setting), "always", "madvise" or
"never". For now, the kernel still only supports PMD-sized anonymous
THP, so only 1 directory is populated.

The first half of the change converts transhuge_vma_suitable() and
hugepage_vma_check() so that they take a bitfield of orders for which
the user wants to determine support, and the functions filter out all
the orders that can't be supported, given the current sysfs
configuration and the VMA dimensions. If there is only 1 order set in
the input then the output can continue to be treated like a boolean;
this is the case for most call sites.

The second half of the change implements the new sysfs interface. It has
been done so that each supported THP size has a `struct thpsize`, which
describes the relevant metadata and is itself a kobject. This is pretty
minimal for now, but should make it easy to add new per-thpsize files to
the interface if needed in future (e.g. per-size defrag). Rather than
keep the `enabled` state directly in the struct thpsize, I've elected to
directly encode it into huge_anon_orders_[always|madvise|global]
bitfields since this reduces the amount of work required in
transhuge_vma_suitable() which is called for every page fault.

The remainder is copied from Documentation/admin-guide/mm/transhuge.rst,
as modified by this commit. See that file for further details.

  Transparent Hugepage Support for anonymous memory can be entirely
  disabled (mostly for debugging purposes) or only enabled inside
  MADV_HUGEPAGE regions (to avoid the risk of consuming more memory
  resources) or enabled system wide. This can be achieved
  per-supported-THP-size with one of::

	echo always >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
	echo madvise >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
	echo never >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled

  where <size> is the hugepage size being addressed, the available sizes
  for which vary by system. Alternatively it is possible to specify that
  a given hugepage size will inherit the global enabled setting::

	echo global >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled

  The global (legacy) enabled setting can be set as follows::

	echo always >/sys/kernel/mm/transparent_hugepage/enabled
	echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
	echo never >/sys/kernel/mm/transparent_hugepage/enabled

  By default, PMD-sized hugepages have enabled="global" and all other
  hugepage sizes have enabled="never". If enabling multiple hugepage
  sizes, the kernel will select the most appropriate enabled size for a
  given allocation.
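To make the semantics concrete, a hypothetical session (assuming a 4K
base-page system where 64K is a supported size, so the directory is
named hugepages-64kB; the bracketed current-selection output mirrors the
existing global `enabled` file)::

  $ cat /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled
  always global madvise [never]
  $ echo always > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled
  $ cat /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled
  [always] global madvise never
  $ cat /sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
  always [global] madvise never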
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 Documentation/admin-guide/mm/transhuge.rst |  74 ++++--
 Documentation/filesystems/proc.rst         |   6 +-
 fs/proc/task_mmu.c                         |   3 +-
 include/linux/huge_mm.h                    | 100 +++++---
 mm/huge_memory.c                           | 263 +++++++++++++++++++--
 mm/khugepaged.c                            |  16 +-
 mm/memory.c                                |   6 +-
 mm/page_vma_mapped.c                       |   3 +-
 8 files changed, 387 insertions(+), 84 deletions(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index b0cc8243e093..52565e0bd074 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -45,10 +45,23 @@ components:
   the two is using hugepages just because of the fact the TLB miss is
   going to run faster.

+As well as PMD-sized THP described above, it is also possible to
+configure the system to allocate "small-sized THP" to back anonymous
+memory (for example 16K, 32K, 64K, etc). These THPs continue to be
+PTE-mapped, but in many cases can still provide similar benefits to
+those outlined above: Page faults are significantly reduced (by a
+factor of e.g. 4, 8, 16, etc), but latency spikes are much less
+prominent because the size of each page isn't as huge as the PMD-sized
+variant and there is less memory to clear in each page fault. Some
+architectures also employ TLB compression mechanisms to squeeze more
+entries in when a set of PTEs are virtually and physically contiguous
+and appropriately aligned. In this case, TLB misses will occur less
+often.
+
 THP can be enabled system wide or restricted to certain tasks or even
 memory ranges inside task's address space. Unless THP is completely
 disabled, there is ``khugepaged`` daemon that scans memory and
-collapses sequences of basic pages into huge pages.
+collapses sequences of basic pages into PMD-sized huge pages.

 The THP behaviour is controlled via :ref:`sysfs <thp_sysfs>`
 interface and using madvise(2) and prctl(2) system calls.
@@ -95,12 +108,29 @@ Global THP controls
 Transparent Hugepage Support for anonymous memory can be entirely
 disabled (mostly for debugging purposes) or only enabled inside
 MADV_HUGEPAGE regions (to avoid the risk of consuming more memory
 resources) or enabled
-system wide. This can be achieved with one of::
+system wide. This can be achieved per-supported-THP-size with one of::
+
+	echo always >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
+	echo madvise >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
+	echo never >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
+
+where <size> is the hugepage size being addressed, the available sizes
+for which vary by system. Alternatively it is possible to specify that
+a given hugepage size will inherit the global enabled setting::
+
+	echo global >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
+
+The global (legacy) enabled setting can be set as follows::

 	echo always >/sys/kernel/mm/transparent_hugepage/enabled
 	echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
 	echo never >/sys/kernel/mm/transparent_hugepage/enabled

+By default, PMD-sized hugepages have enabled="global" and all other
+hugepage sizes have enabled="never". If enabling multiple hugepage
+sizes, the kernel will select the most appropriate enabled size for a
+given allocation.
+
 It's also possible to limit defrag efforts in the VM to generate
 anonymous hugepages in case they're not immediately free to madvise
 regions or to never try to defrag memory and simply fallback to regular
@@ -146,25 +176,34 @@ madvise
 never
 	should be self-explanatory.

-By default kernel tries to use huge zero page on read page fault to
-anonymous mapping. It's possible to disable huge zero page by writing 0
-or enable it back by writing 1::
+By default kernel tries to use huge, PMD-mappable zero page on read
+page fault to anonymous mapping. It's possible to disable huge zero
+page by writing 0 or enable it back by writing 1::

 	echo 0 >/sys/kernel/mm/transparent_hugepage/use_zero_page
 	echo 1 >/sys/kernel/mm/transparent_hugepage/use_zero_page

-Some userspace (such as a test program, or an optimized memory allocation
-library) may want to know the size (in bytes) of a transparent hugepage::
+Some userspace (such as a test program, or an optimized memory
+allocation library) may want to know the size (in bytes) of a
+PMD-mappable transparent hugepage::

 	cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size

-khugepaged will be automatically started when
-transparent_hugepage/enabled is set to "always" or "madvise, and it'll
-be automatically shutdown if it's set to "never".
+khugepaged will be automatically started when one or more hugepage
+sizes are enabled (either by directly setting "always" or "madvise",
+or by setting "global" while the global enabled is set to "always" or
+"madvise"), and it'll be automatically shutdown when the last hugepage
+size is disabled (either by directly setting "never", or by setting
+"global" while the global enabled is set to "never").

 Khugepaged controls
 -------------------

+.. note::
+   khugepaged currently only searches for opportunities to collapse to
+   PMD-sized THP and no attempt is made to collapse to small-sized
+   THP.
+
 khugepaged runs usually at low frequency so while one may not want to
 invoke defrag algorithms synchronously during the page faults, it
 should be worth invoking defrag at least in khugepaged. However it's
@@ -282,10 +321,11 @@ force
 Need of application restart
 ===========================

-The transparent_hugepage/enabled values and tmpfs mount option only affect
-future behavior. So to make them effective you need to restart any
-application that could have been using hugepages. This also applies to the
-regions registered in khugepaged.
+The transparent_hugepage/enabled and
+transparent_hugepage/hugepages-<size>kB/enabled values and tmpfs mount
+option only affect future behavior. So to make them effective you need
+to restart any application that could have been using hugepages. This
+also applies to the regions registered in khugepaged.

 Monitoring usage
 ================
@@ -308,6 +348,10 @@ frequently will incur overhead.
 There are a number of counters in ``/proc/vmstat`` that may be used to
 monitor how successfully the system is providing huge pages for use.

+.. note::
+   Currently the below counters only record events relating to
+   PMD-sized THP. Events relating to small-sized THP are not included.
+
 thp_fault_alloc
 	is incremented every time a huge page is successfully
 	allocated to handle a page fault.
@@ -413,7 +457,7 @@ for huge pages.
 Optimizing the applications
 ===========================

-To be guaranteed that the kernel will map a 2M page immediately in any
+To be guaranteed that the kernel will map a THP immediately in any
 memory region, the mmap region has to be hugepage naturally aligned.
 posix_memalign() can provide that guarantee.

diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index 49ef12df631b..f8e8dd1fd148 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -528,9 +528,9 @@ replaced by copy-on-write) part of the underlying shmem object out on swap.
 does not take into account swapped out page of underlying shmem objects.
 "Locked" indicates whether the mapping is locked in memory or not.
-"THPeligible" indicates whether the mapping is eligible for allocating THP -pages as well as the THP is PMD mappable or not - 1 if true, 0 otherwise. -It just shows the current status. +"THPeligible" indicates whether the mapping is eligible for allocating +naturally aligned THP pages of any currently enabled size. 1 if true, 0 +otherwise. It just shows the current status. "VmFlags" field deserves a separate description. This member represents the kernel flags associated with the particular virtual memory area in two letter diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index 51e0ec658457..2e25362ca9fa 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -865,7 +865,8 @@ static int show_smap(struct seq_file *m, void *v) __show_smap(m, &mss, false); seq_printf(m, "THPeligible: %8u\n", - hugepage_vma_check(vma, vma->vm_flags, true, false, true)); + !!hugepage_vma_check(vma, vma->vm_flags, true, false, true, + THP_ORDERS_ALL)); if (arch_pkeys_enabled()) seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma)); diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index fa0350b0812a..7d6f7d96b039 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -67,6 +67,21 @@ extern struct kobj_attribute shmem_enabled_attr; #define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT) #define HPAGE_PMD_NR (1<vm_start >> PAGE_SHIFT) - vma->vm_pgoff, - HPAGE_PMD_NR)) - return false; +static inline unsigned long transhuge_vma_suitable(struct vm_area_struct *vma, + unsigned long addr, unsigned long orders) +{ + int order; + + /* + * Iterate over orders, highest to lowest, removing orders that don't + * meet alignment requirements from the set. Exit loop at first order + * that meets requirements, since all lower orders must also meet + * requirements. 
+        */
+
+       order = first_order(orders);
+
+       while (orders) {
+               unsigned long hpage_size = PAGE_SIZE << order;
+               unsigned long haddr = ALIGN_DOWN(addr, hpage_size);
+
+               if (haddr >= vma->vm_start &&
+                   haddr + hpage_size <= vma->vm_end) {
+                       if (!vma_is_anonymous(vma)) {
+                               if (IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) -
+                                               vma->vm_pgoff,
+                                               hpage_size >> PAGE_SHIFT))
+                                       break;
+                       } else
+                               break;
+               }
+
+               order = next_order(&orders, order);
        }

-       haddr = addr & HPAGE_PMD_MASK;
-
-       if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
-               return false;
-       return true;
+       return orders;
 }

 static inline bool file_thp_enabled(struct vm_area_struct *vma)
@@ -130,8 +166,9 @@ static inline bool file_thp_enabled(struct vm_area_struct *vma)
                    !inode_is_open_for_write(inode) && S_ISREG(inode->i_mode);
 }

-bool hugepage_vma_check(struct vm_area_struct *vma, unsigned long vm_flags,
-                       bool smaps, bool in_pf, bool enforce_sysfs);
+unsigned long hugepage_vma_check(struct vm_area_struct *vma,
+                                unsigned long vm_flags, bool smaps, bool in_pf,
+                                bool enforce_sysfs, unsigned long orders);

 #define transparent_hugepage_use_zero_page()                           \
        (transparent_hugepage_flags &                                   \
         (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG))
@@ -267,17 +304,18 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
        return false;
 }

-static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
-               unsigned long addr)
+static inline unsigned long transhuge_vma_suitable(struct vm_area_struct *vma,
+               unsigned long addr, unsigned long orders)
 {
-       return false;
+       return 0;
 }

-static inline bool hugepage_vma_check(struct vm_area_struct *vma,
-                                     unsigned long vm_flags, bool smaps,
-                                     bool in_pf, bool enforce_sysfs)
+static inline unsigned long hugepage_vma_check(struct vm_area_struct *vma,
+                                              unsigned long vm_flags, bool smaps,
+                                              bool in_pf, bool enforce_sysfs,
+                                              unsigned long orders)
 {
-       return false;
+       return 0;
 }

 static inline void folio_prep_large_rmappable(struct folio *folio) {}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6eb55f97a3d2..0b545d56420c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -74,12 +74,60 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
 static atomic_t huge_zero_refcount;
 struct page *huge_zero_page __read_mostly;
 unsigned long huge_zero_pfn __read_mostly = ~0UL;
+static unsigned long huge_anon_orders_always __read_mostly;
+static unsigned long huge_anon_orders_madvise __read_mostly;
+static unsigned long huge_anon_orders_global __read_mostly;

-bool hugepage_vma_check(struct vm_area_struct *vma, unsigned long vm_flags,
-                       bool smaps, bool in_pf, bool enforce_sysfs)
+#define hugepage_global_enabled()                      \
+       (transparent_hugepage_flags &                   \
+        ((1<<TRANSPARENT_HUGEPAGE_FLAG) |              \
+         (1<<TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG)))
+
+#define hugepage_global_always()                       \
+       (transparent_hugepage_flags &                   \
+        (1<<TRANSPARENT_HUGEPAGE_FLAG))
+
+/**
+ * hugepage_vma_check - determine which hugepage orders can be applied to vma
+ * @vma:  the vm area to check
+ * @vm_flags: use these vm_flags instead of vma->vm_flags
+ * @smaps: whether answer will be used for smaps file
+ * @in_pf: whether answer will be used by page fault handler
+ * @enforce_sysfs: whether sysfs config should be taken into account
+ * @orders: bitfield of all orders to consider
+ *
+ * Calculates the intersection of the requested hugepage orders and the allowed
+ * hugepage orders for the provided vma. Permitted orders are encoded as a set
+ * bit at the corresponding bit position (bit-2 corresponds to order-2, bit-3
+ * corresponds to order-3, etc). Order-0 is never considered a hugepage order.
+ *
+ * Return: bitfield of orders allowed for hugepage in the vma. 0 if no hugepage
+ * orders are allowed.
+ */
+unsigned long hugepage_vma_check(struct vm_area_struct *vma,
+                                unsigned long vm_flags, bool smaps, bool in_pf,
+                                bool enforce_sysfs, unsigned long orders)
+{
+       /* Check the intersection of requested and supported orders. */
+       orders &= vma_is_anonymous(vma) ?
+                       THP_ORDERS_ALL_ANON : THP_ORDERS_ALL_FILE;
+       if (!orders)
+               return 0;
+
        if (!vma->vm_mm)                /* vdso */
-               return false;
+               return 0;

        /*
         * Explicitly disabled through madvise or prctl, or some
@@ -88,16 +136,16 @@ bool hugepage_vma_check(struct vm_area_struct *vma, unsigned long vm_flags,
         */
        if ((vm_flags & VM_NOHUGEPAGE) ||
            test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
-               return false;
+               return 0;
        /*
         * If the hardware/firmware marked hugepage support disabled.
         */
        if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED))
-               return false;
+               return 0;

        /* khugepaged doesn't collapse DAX vma, but page fault is fine. */
        if (vma_is_dax(vma))
-               return in_pf;
+               return in_pf ? orders : 0;

        /*
         * khugepaged special VMA and hugetlb VMA.
@@ -105,17 +153,29 @@ bool hugepage_vma_check(struct vm_area_struct *vma, unsigned long vm_flags,
         * VM_MIXEDMAP set.
         */
        if (!in_pf && !smaps && (vm_flags & VM_NO_KHUGEPAGED))
-               return false;
+               return 0;

        /*
-        * Check alignment for file vma and size for both file and anon vma.
+        * Check alignment for file vma and size for both file and anon vma by
+        * filtering out the unsuitable orders.
         *
         * Skip the check for page fault. Huge fault does the check in fault
-        * handlers. And this check is not suitable for huge PUD fault.
+        * handlers.
         */
-       if (!in_pf &&
-           !transhuge_vma_suitable(vma, (vma->vm_end - HPAGE_PMD_SIZE)))
-               return false;
+       if (!in_pf) {
+               int order = first_order(orders);
+               unsigned long addr;
+
+               while (orders) {
+                       addr = vma->vm_end - (PAGE_SIZE << order);
+                       if (transhuge_vma_suitable(vma, addr, BIT(order)))
+                               break;
+                       order = next_order(&orders, order);
+               }
+
+               if (!orders)
+                       return 0;
+       }

        /*
         * Enabled via shmem mount options or sysfs settings.
@@ -124,13 +184,27 @@ bool hugepage_vma_check(struct vm_area_struct *vma, unsigned long vm_flags,
         */
        if (!in_pf && shmem_file(vma->vm_file))
                return shmem_is_huge(file_inode(vma->vm_file), vma->vm_pgoff,
-                                    !enforce_sysfs, vma->vm_mm, vm_flags);
+                                    !enforce_sysfs, vma->vm_mm, vm_flags)
+                       ? orders : 0;

        /* Enforce sysfs THP requirements as necessary */
-       if (enforce_sysfs &&
-           (!hugepage_flags_enabled() || (!(vm_flags & VM_HUGEPAGE) &&
-                                          !hugepage_flags_always())))
-               return false;
+       if (enforce_sysfs) {
+               if (vma_is_anonymous(vma)) {
+                       unsigned long mask = READ_ONCE(huge_anon_orders_always);
+
+                       if (vm_flags & VM_HUGEPAGE)
+                               mask |= READ_ONCE(huge_anon_orders_madvise);
+                       if (hugepage_global_always() ||
+                           ((vm_flags & VM_HUGEPAGE) && hugepage_global_enabled()))
+                               mask |= READ_ONCE(huge_anon_orders_global);
+
+                       orders &= mask;
+                       if (!orders)
+                               return 0;
+               } else if (!hugepage_global_enabled() ||
+                          (!(vm_flags & VM_HUGEPAGE) && !hugepage_global_always()))
+                       return 0;
+       }

        if (!vma_is_anonymous(vma)) {
                /*
                 * Trust that ->huge_fault() handlers know what they are doing
                 * in fault path.
                 */
                if (((in_pf || smaps)) && vma->vm_ops->huge_fault)
-                       return true;
+                       return orders;
                /* Only regular file is valid in collapse path */
                if (((!in_pf || smaps)) && file_thp_enabled(vma))
-                       return true;
-               return false;
+                       return orders;
+               return 0;
        }

        if (vma_is_temporary_stack(vma))
-               return false;
+               return 0;

        /*
         * THPeligible bit of smaps should show 1 for proper VMAs even
@@ -156,9 +230,9 @@ bool hugepage_vma_check(struct vm_area_struct *vma, unsigned long vm_flags,
         * the first page fault.
         */
        if (!vma->anon_vma)
-               return (smaps || in_pf);
+               return (smaps || in_pf) ? orders : 0;

-       return true;
+       return orders;
 }

 static bool get_huge_zero_page(void)
@@ -412,9 +486,127 @@ static const struct attribute_group hugepage_attr_group = {
        .attrs = hugepage_attr,
 };

+static void hugepage_exit_sysfs(struct kobject *hugepage_kobj);
+static void thpsize_release(struct kobject *kobj);
+static LIST_HEAD(thpsize_list);
+
+struct thpsize {
+       struct kobject kobj;
+       struct list_head node;
+       int order;
+};
+
+#define to_thpsize(kobj) container_of(kobj, struct thpsize, kobj)
+
+static ssize_t thpsize_enabled_show(struct kobject *kobj,
+                                   struct kobj_attribute *attr, char *buf)
+{
+       int order = to_thpsize(kobj)->order;
+       const char *output;
+
+       if (test_bit(order, &huge_anon_orders_always))
+               output = "[always] global madvise never";
+       else if (test_bit(order, &huge_anon_orders_global))
+               output = "always [global] madvise never";
+       else if (test_bit(order, &huge_anon_orders_madvise))
+               output = "always global [madvise] never";
+       else
+               output = "always global madvise [never]";
+
+       return sysfs_emit(buf, "%s\n", output);
+}
+
+static ssize_t thpsize_enabled_store(struct kobject *kobj,
+                                    struct kobj_attribute *attr,
+                                    const char *buf, size_t count)
+{
+       int order = to_thpsize(kobj)->order;
+       ssize_t ret = count;
+
+       if (sysfs_streq(buf, "always")) {
+               set_bit(order, &huge_anon_orders_always);
+               clear_bit(order, &huge_anon_orders_global);
+               clear_bit(order, &huge_anon_orders_madvise);
+       } else if (sysfs_streq(buf, "global")) {
+               set_bit(order, &huge_anon_orders_global);
+               clear_bit(order, &huge_anon_orders_always);
+               clear_bit(order, &huge_anon_orders_madvise);
+       } else if (sysfs_streq(buf, "madvise")) {
+               set_bit(order, &huge_anon_orders_madvise);
+               clear_bit(order, &huge_anon_orders_always);
+               clear_bit(order, &huge_anon_orders_global);
+       } else if (sysfs_streq(buf, "never")) {
+               clear_bit(order, &huge_anon_orders_always);
+               clear_bit(order, &huge_anon_orders_global);
+               clear_bit(order, &huge_anon_orders_madvise);
+       } else
+               ret = -EINVAL;
+
+       return ret;
+}
+
+static struct kobj_attribute thpsize_enabled_attr =
+       __ATTR(enabled, 0644, thpsize_enabled_show, thpsize_enabled_store);
+
+static struct attribute *thpsize_attrs[] = {
+       &thpsize_enabled_attr.attr,
+       NULL,
+};
+
+static const struct attribute_group thpsize_attr_group = {
+       .attrs = thpsize_attrs,
+};
+
+static const struct kobj_type thpsize_ktype = {
+       .release = &thpsize_release,
+       .sysfs_ops = &kobj_sysfs_ops,
+};
+
+static struct thpsize *thpsize_create(int order, struct kobject *parent)
+{
+       unsigned long size = (PAGE_SIZE << order) / SZ_1K;
+       struct thpsize *thpsize;
+       int ret;
+
+       thpsize = kzalloc(sizeof(*thpsize), GFP_KERNEL);
+       if (!thpsize)
+               return ERR_PTR(-ENOMEM);
+
+       ret = kobject_init_and_add(&thpsize->kobj, &thpsize_ktype, parent,
+                                  "hugepages-%lukB", size);
+       if (ret) {
+               kfree(thpsize);
+               return ERR_PTR(ret);
+       }
+
+       ret = sysfs_create_group(&thpsize->kobj, &thpsize_attr_group);
+       if (ret) {
+               kobject_put(&thpsize->kobj);
+               return ERR_PTR(ret);
+       }
+
+       thpsize->order = order;
+       return thpsize;
+}
+
+static void thpsize_release(struct kobject *kobj)
+{
+       kfree(to_thpsize(kobj));
+}
+
 static int __init hugepage_init_sysfs(struct kobject **hugepage_kobj)
 {
        int err;
+       struct thpsize *thpsize;
+       unsigned long orders;
+       int order;
+
+       /*
+        * Default to setting PMD-sized THP to follow the global setting and
+        * disable all other sizes. powerpc's PMD_ORDER isn't a compile-time
+        * constant so we have to do this here.
+        */
+       huge_anon_orders_global = BIT(PMD_ORDER);

        *hugepage_kobj = kobject_create_and_add("transparent_hugepage", mm_kobj);
        if (unlikely(!*hugepage_kobj)) {
@@ -434,8 +626,24 @@ static int __init hugepage_init_sysfs(struct kobject **hugepage_kobj)
                goto remove_hp_group;
        }

+       orders = THP_ORDERS_ALL_ANON;
+       order = first_order(orders);
+       while (orders) {
+               thpsize = thpsize_create(order, *hugepage_kobj);
+               if (IS_ERR(thpsize)) {
+                       pr_err("failed to create thpsize for order %d\n", order);
+                       err = PTR_ERR(thpsize);
+                       goto remove_all;
+               }
+               list_add(&thpsize->node, &thpsize_list);
+               order = next_order(&orders, order);
+       }
+
        return 0;

+remove_all:
+       hugepage_exit_sysfs(*hugepage_kobj);
+       return err;
 remove_hp_group:
        sysfs_remove_group(*hugepage_kobj, &hugepage_attr_group);
 delete_obj:
@@ -445,6 +653,13 @@ static int __init hugepage_init_sysfs(struct kobject **hugepage_kobj)

 static void __init hugepage_exit_sysfs(struct kobject *hugepage_kobj)
 {
+       struct thpsize *thpsize, *tmp;
+
+       list_for_each_entry_safe(thpsize, tmp, &thpsize_list, node) {
+               list_del(&thpsize->node);
+               kobject_put(&thpsize->kobj);
+       }
+
        sysfs_remove_group(hugepage_kobj, &khugepaged_attr_group);
        sysfs_remove_group(hugepage_kobj, &hugepage_attr_group);
        kobject_put(hugepage_kobj);
@@ -811,7 +1026,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
        struct folio *folio;
        unsigned long haddr = vmf->address & HPAGE_PMD_MASK;

-       if (!transhuge_vma_suitable(vma, haddr))
+       if (!transhuge_vma_suitable(vma, haddr, BIT(PMD_ORDER)))
                return VM_FAULT_FALLBACK;
        if (unlikely(anon_vma_prepare(vma)))
                return VM_FAULT_OOM;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 064654717843..c3a874c0d601 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -446,7 +446,8 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
 {
        if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
            hugepage_flags_enabled()) {
-               if (hugepage_vma_check(vma, vm_flags, false, false, true))
+               if (hugepage_vma_check(vma, vm_flags, false, false, true,
+                                      BIT(PMD_ORDER)))
                        __khugepaged_enter(vma->vm_mm);
        }
 }
@@ -922,10 +923,10 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
        if (!vma)
                return SCAN_VMA_NULL;

-       if (!transhuge_vma_suitable(vma, address))
+       if (!transhuge_vma_suitable(vma, address, BIT(PMD_ORDER)))
                return SCAN_ADDRESS_RANGE;
        if (!hugepage_vma_check(vma, vma->vm_flags, false, false,
-                               cc->is_khugepaged))
+                               cc->is_khugepaged, BIT(PMD_ORDER)))
                return SCAN_VMA_CHECK;
        /*
         * Anon VMA expected, the address may be unmapped then
@@ -1503,7 +1504,8 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
         * and map it by a PMD, regardless of sysfs THP settings. As such, let's
         * analogously elide sysfs THP settings here.
*/ - if (!hugepage_vma_check(vma, vma->vm_flags, false, false, false)) + if (!hugepage_vma_check(vma, vma->vm_flags, false, false, false, + BIT(PMD_ORDER))) return SCAN_VMA_CHECK; /* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */ @@ -2368,7 +2370,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result, progress++; break; } - if (!hugepage_vma_check(vma, vma->vm_flags, false, false, true)) { + if (!hugepage_vma_check(vma, vma->vm_flags, false, false, true, + BIT(PMD_ORDER))) { skip: progress++; continue; @@ -2705,7 +2708,8 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev, *prev = vma; - if (!hugepage_vma_check(vma, vma->vm_flags, false, false, false)) + if (!hugepage_vma_check(vma, vma->vm_flags, false, false, false, + BIT(PMD_ORDER))) return -EINVAL; cc = kmalloc(sizeof(*cc), GFP_KERNEL); diff --git a/mm/memory.c b/mm/memory.c index c32954e16b28..9d5e61d6d859 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4319,7 +4319,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page) pmd_t entry; vm_fault_t ret = VM_FAULT_FALLBACK; - if (!transhuge_vma_suitable(vma, haddr)) + if (!transhuge_vma_suitable(vma, haddr, BIT(PMD_ORDER))) return ret; page = compound_head(page); @@ -5117,7 +5117,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma, return VM_FAULT_OOM; retry_pud: if (pud_none(*vmf.pud) && - hugepage_vma_check(vma, vm_flags, false, true, true)) { + hugepage_vma_check(vma, vm_flags, false, true, true, BIT(PUD_ORDER))) { ret = create_huge_pud(&vmf); if (!(ret & VM_FAULT_FALLBACK)) return ret; @@ -5151,7 +5151,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma, goto retry_pud; if (pmd_none(*vmf.pmd) && - hugepage_vma_check(vma, vm_flags, false, true, true)) { + hugepage_vma_check(vma, vm_flags, false, true, true, BIT(PMD_ORDER))) { ret = create_huge_pmd(&vmf); if (!(ret & VM_FAULT_FALLBACK)) return ret; diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c index e0b368e545ed..5f7e89c5b595 100644 --- a/mm/page_vma_mapped.c +++ b/mm/page_vma_mapped.c @@ -268,7 +268,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw) * cleared *pmd but not decremented compound_mapcount(). 
*/ if ((pvmw->flags & PVMW_SYNC) && - transhuge_vma_suitable(vma, pvmw->address) && + transhuge_vma_suitable(vma, pvmw->address, + BIT(PMD_ORDER)) && (pvmw->nr_pages >= HPAGE_PMD_NR)) { spinlock_t *ptl = pmd_lock(mm, pvmw->pmd); From patchwork Wed Nov 15 13:27:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ryan Roberts X-Patchwork-Id: 13456678 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 50187C48BFA for ; Wed, 15 Nov 2023 13:28:13 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E279F6B0341; Wed, 15 Nov 2023 08:28:12 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id E03036B0342; Wed, 15 Nov 2023 08:28:12 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id CA0656B0343; Wed, 15 Nov 2023 08:28:12 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id B8A026B0341 for ; Wed, 15 Nov 2023 08:28:12 -0500 (EST) Received: from smtpin04.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id 8F54140B2E for ; Wed, 15 Nov 2023 13:28:12 +0000 (UTC) X-FDA: 81460267224.04.B757492 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf17.hostedemail.com (Postfix) with ESMTP id D1D0640023 for ; Wed, 15 Nov 2023 13:28:10 +0000 (UTC) Authentication-Results: imf17.hostedemail.com; dkim=none; spf=pass (imf17.hostedemail.com: domain of ryan.roberts@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=ryan.roberts@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1700054891; a=rsa-sha256; cv=none; b=ldRfKuNZKRDfj9rXNShuJNkNbHpCoszDC8FTPyQhVdVzCcoH5MdzNC5VqmwCx3sctYzlWV AeMN5+/NS28qpKKzF4FDoV7fGsNHlJBvFhxYXHJ/phCGtWSKwArfRN/jaVIC2FxxF3RomH y35KVr0fpenyIKDpnvpTLmEGO61WT+w= ARC-Authentication-Results: i=1; imf17.hostedemail.com; dkim=none; spf=pass (imf17.hostedemail.com: domain of ryan.roberts@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=ryan.roberts@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1700054891; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=GaRguLCBK9uUawAeJ/HZyd0iBmJtPBRZwhu2sCovmzI=; b=ZWmnILT7qEQcouf9IPaW3yAh5AZWv6ohN2RFZNQazVKYf0hL24ujb/xznPK0irFRhP/WMp a8xTT7FBJRWmqBfdE+PHyLOvU7S8Dymhlrj07RzYdcu0u03xe9gJAwIi8PVAliV+Xx/A75 JwBhYVacZOdG8SRhV2RkG41BN4VkEDM= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 775EA1595; Wed, 15 Nov 2023 05:28:55 -0800 (PST) Received: from e125769.cambridge.arm.com (e125769.cambridge.arm.com [10.1.196.26]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 0BCE93F7B4; Wed, 15 Nov 2023 05:28:06 -0800 (PST) From: Ryan Roberts To: Andrew Morton , Matthew Wilcox , Yin Fengwei , David Hildenbrand , Yu Zhao , Catalin Marinas , Anshuman Khandual , Yang Shi , "Huang, Ying" , Zi Yan , 
Introduce the logic to allow THP to be configured (through the new sysfs
interface we just added) to allocate large folios to back anonymous
memory, which are smaller than PMD-size. We call this new THP type
"small-sized THP".

These small-sized THPs continue to be PTE-mapped, but in many cases can
still provide similar benefits to traditional PMD-sized THP: Page faults
are significantly reduced (by a factor of e.g. 4, 8, 16, etc. depending
on the configured order), but latency spikes are much less prominent
because the size of each page isn't as huge as the PMD-sized variant and
there is less memory to clear in each page fault. The number of per-page
operations (e.g. ref counting, rmap management, lru list management) are
also significantly reduced since those ops now become per-folio.

Some architectures also employ TLB compression mechanisms to squeeze
more entries in when a set of PTEs are virtually and physically
contiguous and appropriately aligned. In this case, TLB misses will
occur less often.

The new behaviour is disabled by default, but can be enabled at runtime
by writing to
/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled (see
documentation in previous commit). The long term aim is to change the
default to include suitable lower orders, but there are some risks
around internal fragmentation that need to be better understood first.
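A minimal userspace experiment for observing the new behaviour (not
part of the patch; assumes a 4K base-page kernel with this series
applied and 64K THP enabled via
"echo always > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled").
Faulting one byte of a naturally aligned, previously untouched 64K
range should now populate all 16 PTEs from a single large folio:

#include <stdint.h>
#include <sys/mman.h>

int main(void)
{
	size_t sz = 2 * 1024 * 1024;
	char *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	/* align to 64K so the fault can be backed by an order-4 folio */
	char *q = (char *)(((uintptr_t)p + 0xffff) & ~0xffffUL);

	q[0] = 1;	/* one write; do_anonymous_page() may map 16 pages */
	return 0;
}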
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 include/linux/huge_mm.h |   6 ++-
 mm/memory.c             | 106 ++++++++++++++++++++++++++++++++++++----
 2 files changed, 101 insertions(+), 11 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 7d6f7d96b039..edc302351971 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -68,9 +68,11 @@ extern struct kobj_attribute shmem_enabled_attr;
 #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)

diff --git a/mm/memory.c b/mm/memory.c
--- a/mm/memory.c
+++ b/mm/memory.c
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static struct folio *alloc_anon_folio(struct vm_fault *vmf)
+{
+       gfp_t gfp;
+       pte_t *pte;
+       unsigned long addr;
+       struct folio *folio;
+       struct vm_area_struct *vma = vmf->vma;
+       unsigned long orders;
+       int order;
+
+       /*
+        * If uffd is active for the vma we need per-page fault fidelity to
+        * maintain the uffd semantics.
+        */
+       if (userfaultfd_armed(vma))
+               goto fallback;
+
+       /*
+        * Get a list of all the (large) orders below PMD_ORDER that are enabled
+        * for this vma. Then filter out the orders that can't be allocated over
+        * the faulting address and still be fully contained in the vma.
+        */
+       orders = hugepage_vma_check(vma, vma->vm_flags, false, true, true,
+                                   BIT(PMD_ORDER) - 1);
+       orders = transhuge_vma_suitable(vma, vmf->address, orders);
+
+       if (!orders)
+               goto fallback;
+
+       pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
+       if (!pte)
+               return ERR_PTR(-EAGAIN);
+
+       order = first_order(orders);
+       while (orders) {
+               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
+               vmf->pte = pte + pte_index(addr);
+               if (pte_range_none(vmf->pte, 1 << order))
+                       break;
+               order = next_order(&orders, order);
+       }
+
+       vmf->pte = NULL;
+       pte_unmap(pte);
+
+       gfp = vma_thp_gfp_mask(vma);
+
+       while (orders) {
+               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
+               folio = vma_alloc_folio(gfp, order, vma, addr, true);
+               if (folio) {
+                       clear_huge_page(&folio->page, addr, 1 << order);
+                       return folio;
+               }
+               order = next_order(&orders, order);
+       }
+
+fallback:
+       return vma_alloc_zeroed_movable_folio(vma, vmf->address);
+}
+#else
+#define alloc_anon_folio(vmf) \
+               vma_alloc_zeroed_movable_folio((vmf)->vma, (vmf)->address)
+#endif
+
 /*
  * We enter with non-exclusive mmap_lock (to exclude vma changes,
  * but allow concurrent faults), and pte mapped but not yet locked.
@@ -4129,6 +4207,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
  */
 static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 {
+       int i;
+       int nr_pages = 1;
+       unsigned long addr = vmf->address;
        bool uffd_wp = vmf_orig_pte_uffd_wp(vmf);
        struct vm_area_struct *vma = vmf->vma;
        struct folio *folio;
@@ -4173,10 +4254,15 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
        /* Allocate our own private page. */
*/ if (unlikely(anon_vma_prepare(vma))) goto oom; - folio = vma_alloc_zeroed_movable_folio(vma, vmf->address); + folio = alloc_anon_folio(vmf); + if (IS_ERR(folio)) + return 0; if (!folio) goto oom; + nr_pages = folio_nr_pages(folio); + addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE); + if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL)) goto oom_free_page; folio_throttle_swaprate(folio, GFP_KERNEL); @@ -4193,12 +4279,13 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) if (vma->vm_flags & VM_WRITE) entry = pte_mkwrite(pte_mkdirty(entry), vma); - vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, - &vmf->ptl); + vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl); if (!vmf->pte) goto release; - if (vmf_pte_changed(vmf)) { - update_mmu_tlb(vma, vmf->address, vmf->pte); + if ((nr_pages == 1 && vmf_pte_changed(vmf)) || + (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages))) { + for (i = 0; i < nr_pages; i++) + update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i); goto release; } @@ -4213,16 +4300,17 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) return handle_userfault(vmf, VM_UFFD_MISSING); } - inc_mm_counter(vma->vm_mm, MM_ANONPAGES); - folio_add_new_anon_rmap(folio, vma, vmf->address); + folio_ref_add(folio, nr_pages - 1); + add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages); + folio_add_new_anon_rmap(folio, vma, addr); folio_add_lru_vma(folio, vma); setpte: if (uffd_wp) entry = pte_mkuffd_wp(entry); - set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry); + set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr_pages); /* No need to invalidate - it was non-present before */ - update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1); + update_mmu_cache_range(vmf, vma, addr, vmf->pte, nr_pages); unlock: if (vmf->pte) pte_unmap_unlock(vmf->pte, vmf->ptl);
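To see the allocation policy above in isolation: the sketch below mimics the walk alloc_anon_folio() performs over the enabled-orders bitmask, highest order first, together with the ALIGN_DOWN() arithmetic do_anonymous_page() uses to find the base address the folio will span. first_order()/next_order() are reimplemented here from their apparent semantics in the diff (take the highest set bit, then clear it and retry), which is an assumption rather than a quote of the patch.

/* Standalone sketch of the order walk and address alignment above.
 * first_order()/next_order() are reconstructed from their apparent
 * semantics in the diff (highest set bit first); an assumption. */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define BIT(n)		(1UL << (n))
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

static int first_order(unsigned long orders)
{
	return 8 * sizeof(orders) - 1 - __builtin_clzl(orders);
}

static int next_order(unsigned long *orders, int prev)
{
	*orders &= ~BIT(prev);
	return *orders ? first_order(*orders) : 0;
}

int main(void)
{
	unsigned long orders = BIT(4) | BIT(2);	/* say 64K and 16K enabled */
	unsigned long fault_addr = 0x40003650UL;
	int order;

	for (order = first_order(orders); orders;
	     order = next_order(&orders, order)) {
		/* Base address the folio would span, as in do_anonymous_page() */
		unsigned long addr = ALIGN_DOWN(fault_addr, PAGE_SIZE << order);

		printf("try order %d: folio at 0x%lx..0x%lx\n",
		       order, addr, addr + (PAGE_SIZE << order) - 1);
	}
	return 0;
}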
From patchwork Wed Nov 15 13:27:29 2023
From: Ryan Roberts
To: Andrew Morton , Matthew Wilcox , Yin Fengwei , David Hildenbrand , Yu Zhao , Catalin Marinas , Anshuman Khandual , Yang Shi , "Huang, Ying" , Zi Yan , Luis Chamberlain , Itaru Kitayama , "Kirill A. Shutemov" , John Hubbard , David Rientjes , Vlastimil Babka , Hugh Dickins , Kefeng Wang
Cc: Ryan Roberts , linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 05/10] selftests/mm/khugepaged: Restore thp settings at exit
Date: Wed, 15 Nov 2023 13:27:29 +0000
Message-Id: <20231115132734.931023-6-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231115132734.931023-1-ryan.roberts@arm.com>
References: <20231115132734.931023-1-ryan.roberts@arm.com>
MIME-Version: 1.0
Previously, the saved thp settings would be restored upon a signal or at the natural end of the test suite. But some tests directly call exit() upon failure. In this case, the thp settings were not being restored, which could then influence other tests. Fix this by installing an atexit() handler to do the actual restore. The signal handler can now simply call exit(), which invokes the atexit handler.

Signed-off-by: Ryan Roberts --- tools/testing/selftests/mm/khugepaged.c | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-) diff --git a/tools/testing/selftests/mm/khugepaged.c b/tools/testing/selftests/mm/khugepaged.c index 030667cb5533..fc47a1c4944c 100644 --- a/tools/testing/selftests/mm/khugepaged.c +++ b/tools/testing/selftests/mm/khugepaged.c @@ -374,18 +374,22 @@ static void pop_settings(void) write_settings(current_settings()); } -static void restore_settings(int sig) +static void restore_settings_atexit(void) { if (skip_settings_restore) - goto out; + return; printf("Restore THP and khugepaged settings..."); write_settings(&saved_settings); success("OK"); - if (sig) - exit(EXIT_FAILURE); -out: - exit(exit_status); + + skip_settings_restore = true; +} + +static void restore_settings(int sig) +{ + /* exit() will invoke the restore_settings_atexit handler. */ + exit(sig ? EXIT_FAILURE : exit_status); } static void save_settings(void) @@ -415,6 +419,7 @@ static void save_settings(void) success("OK"); + atexit(restore_settings_atexit); signal(SIGTERM, restore_settings); signal(SIGINT, restore_settings); signal(SIGHUP, restore_settings);
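The fix relies on standard C semantics: exit() runs handlers registered with atexit(), whereas terminating on an unhandled signal does not, so the signal handler converts the signal into an exit() call. A minimal standalone illustration (not the selftest code itself; note that calling exit() from a signal handler is not async-signal-safe in general, a shortcut the selftest also takes):

/* Minimal illustration of the atexit() + signal pattern used above. */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void cleanup_atexit(void)
{
	/* Runs on any exit() path: normal return, failure exit(), or a
	 * caught signal that was converted into exit() below. */
	printf("restoring settings\n");
}

static void on_signal(int sig)
{
	/* exit() invokes the atexit handler; _exit() would not. */
	exit(sig ? EXIT_FAILURE : EXIT_SUCCESS);
}

int main(void)
{
	atexit(cleanup_atexit);
	signal(SIGTERM, on_signal);
	signal(SIGINT, on_signal);

	pause();	/* wait for a signal, e.g. Ctrl-C */
	return 0;
}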
From patchwork Wed Nov 15 13:27:30 2023
From: Ryan Roberts
To: Andrew Morton , Matthew Wilcox , Yin Fengwei , David Hildenbrand , Yu Zhao , Catalin Marinas , Anshuman Khandual , Yang Shi , "Huang, Ying" , Zi Yan , Luis Chamberlain , Itaru Kitayama , "Kirill A. Shutemov" , John Hubbard , David Rientjes , Vlastimil Babka , Hugh Dickins , Kefeng Wang
Shutemov" , John Hubbard , David Rientjes , Vlastimil Babka , Hugh Dickins , Kefeng Wang Cc: Ryan Roberts , linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH v7 06/10] selftests/mm: Factor out thp settings management Date: Wed, 15 Nov 2023 13:27:30 +0000 Message-Id: <20231115132734.931023-7-ryan.roberts@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20231115132734.931023-1-ryan.roberts@arm.com> References: <20231115132734.931023-1-ryan.roberts@arm.com> MIME-Version: 1.0 X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: E17BF4000C X-Stat-Signature: tim7of6a5wferbc5fp3ufdm55xdbuykc X-Rspam-User: X-HE-Tag: 1700054896-21769 X-HE-Meta: U2FsdGVkX1/ivZ0q+o0N0d9Io941KH5K9yTFIpZovI+J54bAxn67hIfoGHAruyFMrEOwJ2pmcbAkMyH7gqelU31i1CVOJz+JLE5MJvOfF82TYEGLuQhse60R+D3RA0u13olYHzWJKXJ1SGon4mIRNToqw0TPk2sAK4nxE0h2//3VSJUbMqodDSUKRhhLJOvHSCbqBSgYyVW71Krqv1/NfUHfb6jlV1/CwCPNWWDJgHiclxXsN/w26PjYz9C9KhRo+IbuIOC1ojnufjlW3iHL1W/OyvceoJDrVg8AOcqp+G1wQuCi24YydnzCMusSVb/JRhpK4GdixmqBFXyXWhzZliMSj0j4+njoj2TPLiPd18gqwQT0YyzrVwG5k14exi5tnXTdiLKksLr5vm8EXhcQtAQm6LCI2e4bW5PW/jFjBmaYhX1buJ5iICUQHmhwaKNModKsqFhIfAJBUOh75/ccMTxs2X8EmgGshSY0Z4feh6xmo6nuA6Nsd03hpUvlZ9UZtZ0zVcLHSaxun6pqM/GMc5JlualBd+Q5BSRQiN5l+NAGlavcldjN0SHbw98b7gHL/rX2Id7TY+frPapSn1Y4GopgW+t/ZSuvc0SWEdRwjI42IGBsrCVB6m0hge7cVuyZiU+jMoLYjdQ4wBJeCN2d0PZ8qsMZbMycjQntO8B3KOTCLsFiFvOIiDRyJgAnj0oU9yIB7cVJERhAF8Ma/jrpSmMJQWJg2ReOYBHw6kYTfsjJzWiW9Kxk5CikQqNr0VXFNxbdsunsNpaQhixkban0Ovhdd66yy+a4VRapjMgzJIG3tMvYXOWROybFd/EBaEg1/C61+sdPw6c6fGGTVkN4dNRy+CtIgSGTglU7NeJEdTpFGPHq2mYrcD12p1tKCIuYy/cR9ZsCtE5S3w4s8t7UW03GPMXsmH1iVJwEussserp6tWm5sD0tWXYemYukErB1H74vTEVi5Jg+//O9xh4 prtbBKsh UOk4sN0vtV5niGWRr0CPjBAjIs8vPrt7LxEadm4cau8/Waj++oIGFTTMBjmP1vHv7nhSe2JUCipwk54MSy3AzkUdHayFMTmt9CIoYM1Cvc7YlH+uJHk3BDvMHWQMJbwOeTSUzka4pF6LvesGDy2a9CilnA34j+MnWfrPPsIzHIUL6xq44B1cuweI0xfRm+rS0GSe1dMau9ibYtkFe0OrjxcHb+n2Cap7nOPqa3uFDgq9ul5E= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: The khugepaged test has a useful framework for save/restore/pop/push of all thp settings via the sysfs interface. This will be useful to explicitly control small-sized THP settings in other tests, so let's move it out of khugepaged and into its own thp_settings.[c|h] utility. 
Signed-off-by: Ryan Roberts --- tools/testing/selftests/mm/Makefile | 4 +- tools/testing/selftests/mm/khugepaged.c | 346 ++-------------------- tools/testing/selftests/mm/thp_settings.c | 296 ++++++++++++++++++ tools/testing/selftests/mm/thp_settings.h | 71 +++++ 4 files changed, 391 insertions(+), 326 deletions(-) create mode 100644 tools/testing/selftests/mm/thp_settings.c create mode 100644 tools/testing/selftests/mm/thp_settings.h diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile index 78dfec8bc676..a3eb20c186e7 100644 --- a/tools/testing/selftests/mm/Makefile +++ b/tools/testing/selftests/mm/Makefile @@ -117,8 +117,8 @@ TEST_FILES += va_high_addr_switch.sh include ../lib.mk -$(TEST_GEN_PROGS): vm_util.c -$(TEST_GEN_FILES): vm_util.c +$(TEST_GEN_PROGS): vm_util.c thp_settings.c +$(TEST_GEN_FILES): vm_util.c thp_settings.c $(OUTPUT)/uffd-stress: uffd-common.c $(OUTPUT)/uffd-unit-tests: uffd-common.c diff --git a/tools/testing/selftests/mm/khugepaged.c b/tools/testing/selftests/mm/khugepaged.c index fc47a1c4944c..b15e7fd70176 100644 --- a/tools/testing/selftests/mm/khugepaged.c +++ b/tools/testing/selftests/mm/khugepaged.c @@ -22,13 +22,13 @@ #include "linux/magic.h" #include "vm_util.h" +#include "thp_settings.h" #define BASE_ADDR ((void *)(1UL << 30)) static unsigned long hpage_pmd_size; static unsigned long page_size; static int hpage_pmd_nr; -#define THP_SYSFS "/sys/kernel/mm/transparent_hugepage/" #define PID_SMAPS "/proc/self/smaps" #define TEST_FILE "collapse_test_file" @@ -71,78 +71,7 @@ struct file_info { }; static struct file_info finfo; - -enum thp_enabled { - THP_ALWAYS, - THP_MADVISE, - THP_NEVER, -}; - -static const char *thp_enabled_strings[] = { - "always", - "madvise", - "never", - NULL -}; - -enum thp_defrag { - THP_DEFRAG_ALWAYS, - THP_DEFRAG_DEFER, - THP_DEFRAG_DEFER_MADVISE, - THP_DEFRAG_MADVISE, - THP_DEFRAG_NEVER, -}; - -static const char *thp_defrag_strings[] = { - "always", - "defer", - "defer+madvise", - "madvise", - "never", - NULL -}; - -enum shmem_enabled { - SHMEM_ALWAYS, - SHMEM_WITHIN_SIZE, - SHMEM_ADVISE, - SHMEM_NEVER, - SHMEM_DENY, - SHMEM_FORCE, -}; - -static const char *shmem_enabled_strings[] = { - "always", - "within_size", - "advise", - "never", - "deny", - "force", - NULL -}; - -struct khugepaged_settings { - bool defrag; - unsigned int alloc_sleep_millisecs; - unsigned int scan_sleep_millisecs; - unsigned int max_ptes_none; - unsigned int max_ptes_swap; - unsigned int max_ptes_shared; - unsigned long pages_to_scan; -}; - -struct settings { - enum thp_enabled thp_enabled; - enum thp_defrag thp_defrag; - enum shmem_enabled shmem_enabled; - bool use_zero_page; - struct khugepaged_settings khugepaged; - unsigned long read_ahead_kb; -}; - -static struct settings saved_settings; static bool skip_settings_restore; - static int exit_status; static void success(const char *msg) @@ -161,226 +90,13 @@ static void skip(const char *msg) printf(" \e[33m%s\e[0m\n", msg); } -static int read_file(const char *path, char *buf, size_t buflen) -{ - int fd; - ssize_t numread; - - fd = open(path, O_RDONLY); - if (fd == -1) - return 0; - - numread = read(fd, buf, buflen - 1); - if (numread < 1) { - close(fd); - return 0; - } - - buf[numread] = '\0'; - close(fd); - - return (unsigned int) numread; -} - -static int write_file(const char *path, const char *buf, size_t buflen) -{ - int fd; - ssize_t numwritten; - - fd = open(path, O_WRONLY); - if (fd == -1) { - printf("open(%s)\n", path); - exit(EXIT_FAILURE); - return 0; - } - - 
numwritten = write(fd, buf, buflen - 1); - close(fd); - if (numwritten < 1) { - printf("write(%s)\n", buf); - exit(EXIT_FAILURE); - return 0; - } - - return (unsigned int) numwritten; -} - -static int read_string(const char *name, const char *strings[]) -{ - char path[PATH_MAX]; - char buf[256]; - char *c; - int ret; - - ret = snprintf(path, PATH_MAX, THP_SYSFS "%s", name); - if (ret >= PATH_MAX) { - printf("%s: Pathname is too long\n", __func__); - exit(EXIT_FAILURE); - } - - if (!read_file(path, buf, sizeof(buf))) { - perror(path); - exit(EXIT_FAILURE); - } - - c = strchr(buf, '['); - if (!c) { - printf("%s: Parse failure\n", __func__); - exit(EXIT_FAILURE); - } - - c++; - memmove(buf, c, sizeof(buf) - (c - buf)); - - c = strchr(buf, ']'); - if (!c) { - printf("%s: Parse failure\n", __func__); - exit(EXIT_FAILURE); - } - *c = '\0'; - - ret = 0; - while (strings[ret]) { - if (!strcmp(strings[ret], buf)) - return ret; - ret++; - } - - printf("Failed to parse %s\n", name); - exit(EXIT_FAILURE); -} - -static void write_string(const char *name, const char *val) -{ - char path[PATH_MAX]; - int ret; - - ret = snprintf(path, PATH_MAX, THP_SYSFS "%s", name); - if (ret >= PATH_MAX) { - printf("%s: Pathname is too long\n", __func__); - exit(EXIT_FAILURE); - } - - if (!write_file(path, val, strlen(val) + 1)) { - perror(path); - exit(EXIT_FAILURE); - } -} - -static const unsigned long _read_num(const char *path) -{ - char buf[21]; - - if (read_file(path, buf, sizeof(buf)) < 0) { - perror("read_file(read_num)"); - exit(EXIT_FAILURE); - } - - return strtoul(buf, NULL, 10); -} - -static const unsigned long read_num(const char *name) -{ - char path[PATH_MAX]; - int ret; - - ret = snprintf(path, PATH_MAX, THP_SYSFS "%s", name); - if (ret >= PATH_MAX) { - printf("%s: Pathname is too long\n", __func__); - exit(EXIT_FAILURE); - } - return _read_num(path); -} - -static void _write_num(const char *path, unsigned long num) -{ - char buf[21]; - - sprintf(buf, "%ld", num); - if (!write_file(path, buf, strlen(buf) + 1)) { - perror(path); - exit(EXIT_FAILURE); - } -} - -static void write_num(const char *name, unsigned long num) -{ - char path[PATH_MAX]; - int ret; - - ret = snprintf(path, PATH_MAX, THP_SYSFS "%s", name); - if (ret >= PATH_MAX) { - printf("%s: Pathname is too long\n", __func__); - exit(EXIT_FAILURE); - } - _write_num(path, num); -} - -static void write_settings(struct settings *settings) -{ - struct khugepaged_settings *khugepaged = &settings->khugepaged; - - write_string("enabled", thp_enabled_strings[settings->thp_enabled]); - write_string("defrag", thp_defrag_strings[settings->thp_defrag]); - write_string("shmem_enabled", - shmem_enabled_strings[settings->shmem_enabled]); - write_num("use_zero_page", settings->use_zero_page); - - write_num("khugepaged/defrag", khugepaged->defrag); - write_num("khugepaged/alloc_sleep_millisecs", - khugepaged->alloc_sleep_millisecs); - write_num("khugepaged/scan_sleep_millisecs", - khugepaged->scan_sleep_millisecs); - write_num("khugepaged/max_ptes_none", khugepaged->max_ptes_none); - write_num("khugepaged/max_ptes_swap", khugepaged->max_ptes_swap); - write_num("khugepaged/max_ptes_shared", khugepaged->max_ptes_shared); - write_num("khugepaged/pages_to_scan", khugepaged->pages_to_scan); - - if (file_ops && finfo.type == VMA_FILE) - _write_num(finfo.dev_queue_read_ahead_path, - settings->read_ahead_kb); -} - -#define MAX_SETTINGS_DEPTH 4 -static struct settings settings_stack[MAX_SETTINGS_DEPTH]; -static int settings_index; - -static struct settings 
*current_settings(void) -{ - if (!settings_index) { - printf("Fail: No settings set"); - exit(EXIT_FAILURE); - } - return settings_stack + settings_index - 1; -} - -static void push_settings(struct settings *settings) -{ - if (settings_index >= MAX_SETTINGS_DEPTH) { - printf("Fail: Settings stack exceeded"); - exit(EXIT_FAILURE); - } - settings_stack[settings_index++] = *settings; - write_settings(current_settings()); -} - -static void pop_settings(void) -{ - if (settings_index <= 0) { - printf("Fail: Settings stack empty"); - exit(EXIT_FAILURE); - } - --settings_index; - write_settings(current_settings()); -} - static void restore_settings_atexit(void) { if (skip_settings_restore) return; printf("Restore THP and khugepaged settings..."); - write_settings(&saved_settings); + thp_restore_settings(); success("OK"); skip_settings_restore = true; @@ -395,27 +111,9 @@ static void restore_settings(int sig) static void save_settings(void) { printf("Save THP and khugepaged settings..."); - saved_settings = (struct settings) { - .thp_enabled = read_string("enabled", thp_enabled_strings), - .thp_defrag = read_string("defrag", thp_defrag_strings), - .shmem_enabled = - read_string("shmem_enabled", shmem_enabled_strings), - .use_zero_page = read_num("use_zero_page"), - }; - saved_settings.khugepaged = (struct khugepaged_settings) { - .defrag = read_num("khugepaged/defrag"), - .alloc_sleep_millisecs = - read_num("khugepaged/alloc_sleep_millisecs"), - .scan_sleep_millisecs = - read_num("khugepaged/scan_sleep_millisecs"), - .max_ptes_none = read_num("khugepaged/max_ptes_none"), - .max_ptes_swap = read_num("khugepaged/max_ptes_swap"), - .max_ptes_shared = read_num("khugepaged/max_ptes_shared"), - .pages_to_scan = read_num("khugepaged/pages_to_scan"), - }; if (file_ops && finfo.type == VMA_FILE) - saved_settings.read_ahead_kb = - _read_num(finfo.dev_queue_read_ahead_path); + thp_set_read_ahead_path(finfo.dev_queue_read_ahead_path); + thp_save_settings(); success("OK"); @@ -798,7 +496,7 @@ static void __madvise_collapse(const char *msg, char *p, int nr_hpages, struct mem_ops *ops, bool expect) { int ret; - struct settings settings = *current_settings(); + struct thp_settings settings = *thp_current_settings(); printf("%s...", msg); @@ -808,7 +506,7 @@ static void __madvise_collapse(const char *msg, char *p, int nr_hpages, */ settings.thp_enabled = THP_NEVER; settings.shmem_enabled = SHMEM_NEVER; - push_settings(&settings); + thp_push_settings(&settings); /* Clear VM_NOHUGEPAGE */ madvise(p, nr_hpages * hpage_pmd_size, MADV_HUGEPAGE); @@ -820,7 +518,7 @@ static void __madvise_collapse(const char *msg, char *p, int nr_hpages, else success("OK"); - pop_settings(); + thp_pop_settings(); } static void madvise_collapse(const char *msg, char *p, int nr_hpages, @@ -850,13 +548,13 @@ static bool wait_for_scan(const char *msg, char *p, int nr_hpages, madvise(p, nr_hpages * hpage_pmd_size, MADV_HUGEPAGE); /* Wait until the second full_scan completed */ - full_scans = read_num("khugepaged/full_scans") + 2; + full_scans = thp_read_num("khugepaged/full_scans") + 2; printf("%s...", msg); while (timeout--) { if (ops->check_huge(p, nr_hpages)) break; - if (read_num("khugepaged/full_scans") >= full_scans) + if (thp_read_num("khugepaged/full_scans") >= full_scans) break; printf("."); usleep(TICK); @@ -911,11 +609,11 @@ static bool is_tmpfs(struct mem_ops *ops) static void alloc_at_fault(void) { - struct settings settings = *current_settings(); + struct thp_settings settings = *thp_current_settings(); char *p; 
settings.thp_enabled = THP_ALWAYS; - push_settings(&settings); + thp_push_settings(&settings); p = alloc_mapping(1); *p = 1; @@ -925,7 +623,7 @@ static void alloc_at_fault(void) else fail("Fail"); - pop_settings(); + thp_pop_settings(); madvise(p, page_size, MADV_DONTNEED); printf("Split huge PMD on MADV_DONTNEED..."); @@ -973,11 +671,11 @@ static void collapse_single_pte_entry(struct collapse_context *c, struct mem_ops static void collapse_max_ptes_none(struct collapse_context *c, struct mem_ops *ops) { int max_ptes_none = hpage_pmd_nr / 2; - struct settings settings = *current_settings(); + struct thp_settings settings = *thp_current_settings(); void *p; settings.khugepaged.max_ptes_none = max_ptes_none; - push_settings(&settings); + thp_push_settings(&settings); p = ops->setup_area(1); @@ -1002,7 +700,7 @@ static void collapse_max_ptes_none(struct collapse_context *c, struct mem_ops *o } skip: ops->cleanup_area(p, hpage_pmd_size); - pop_settings(); + thp_pop_settings(); } static void collapse_swapin_single_pte(struct collapse_context *c, struct mem_ops *ops) @@ -1033,7 +731,7 @@ static void collapse_swapin_single_pte(struct collapse_context *c, struct mem_op static void collapse_max_ptes_swap(struct collapse_context *c, struct mem_ops *ops) { - int max_ptes_swap = read_num("khugepaged/max_ptes_swap"); + int max_ptes_swap = thp_read_num("khugepaged/max_ptes_swap"); void *p; p = ops->setup_area(1); @@ -1250,11 +948,11 @@ static void collapse_fork_compound(struct collapse_context *c, struct mem_ops *o fail("Fail"); ops->fault(p, 0, page_size); - write_num("khugepaged/max_ptes_shared", hpage_pmd_nr - 1); + thp_write_num("khugepaged/max_ptes_shared", hpage_pmd_nr - 1); c->collapse("Collapse PTE table full of compound pages in child", p, 1, ops, true); - write_num("khugepaged/max_ptes_shared", - current_settings()->khugepaged.max_ptes_shared); + thp_write_num("khugepaged/max_ptes_shared", + thp_current_settings()->khugepaged.max_ptes_shared); validate_memory(p, 0, hpage_pmd_size); ops->cleanup_area(p, hpage_pmd_size); @@ -1275,7 +973,7 @@ static void collapse_fork_compound(struct collapse_context *c, struct mem_ops *o static void collapse_max_ptes_shared(struct collapse_context *c, struct mem_ops *ops) { - int max_ptes_shared = read_num("khugepaged/max_ptes_shared"); + int max_ptes_shared = thp_read_num("khugepaged/max_ptes_shared"); int wstatus; void *p; @@ -1443,7 +1141,7 @@ static void parse_test_type(int argc, const char **argv) int main(int argc, const char **argv) { - struct settings default_settings = { + struct thp_settings default_settings = { .thp_enabled = THP_MADVISE, .thp_defrag = THP_DEFRAG_ALWAYS, .shmem_enabled = SHMEM_ADVISE, @@ -1484,7 +1182,7 @@ int main(int argc, const char **argv) default_settings.khugepaged.pages_to_scan = hpage_pmd_nr * 8; save_settings(); - push_settings(&default_settings); + thp_push_settings(&default_settings); alloc_at_fault(); diff --git a/tools/testing/selftests/mm/thp_settings.c b/tools/testing/selftests/mm/thp_settings.c new file mode 100644 index 000000000000..5e8ec792cac7 --- /dev/null +++ b/tools/testing/selftests/mm/thp_settings.c @@ -0,0 +1,296 @@ +// SPDX-License-Identifier: GPL-2.0 +#include <fcntl.h> +#include <limits.h> +#include <stdio.h> +#include <stdlib.h> +#include <string.h> +#include <unistd.h> + +#include "thp_settings.h" + +#define THP_SYSFS "/sys/kernel/mm/transparent_hugepage/" +#define MAX_SETTINGS_DEPTH 4 +static struct thp_settings settings_stack[MAX_SETTINGS_DEPTH]; +static int settings_index; +static struct thp_settings saved_settings; +static char
dev_queue_read_ahead_path[PATH_MAX]; + +static const char * const thp_enabled_strings[] = { + "always", + "madvise", + "never", + NULL +}; + +static const char * const thp_defrag_strings[] = { + "always", + "defer", + "defer+madvise", + "madvise", + "never", + NULL +}; + +static const char * const shmem_enabled_strings[] = { + "always", + "within_size", + "advise", + "never", + "deny", + "force", + NULL +}; + +int read_file(const char *path, char *buf, size_t buflen) +{ + int fd; + ssize_t numread; + + fd = open(path, O_RDONLY); + if (fd == -1) + return 0; + + numread = read(fd, buf, buflen - 1); + if (numread < 1) { + close(fd); + return 0; + } + + buf[numread] = '\0'; + close(fd); + + return (unsigned int) numread; +} + +int write_file(const char *path, const char *buf, size_t buflen) +{ + int fd; + ssize_t numwritten; + + fd = open(path, O_WRONLY); + if (fd == -1) { + printf("open(%s)\n", path); + exit(EXIT_FAILURE); + return 0; + } + + numwritten = write(fd, buf, buflen - 1); + close(fd); + if (numwritten < 1) { + printf("write(%s)\n", buf); + exit(EXIT_FAILURE); + return 0; + } + + return (unsigned int) numwritten; +} + +const unsigned long read_num(const char *path) +{ + char buf[21]; + + if (read_file(path, buf, sizeof(buf)) < 0) { + perror("read_file()"); + exit(EXIT_FAILURE); + } + + return strtoul(buf, NULL, 10); +} + +void write_num(const char *path, unsigned long num) +{ + char buf[21]; + + sprintf(buf, "%ld", num); + if (!write_file(path, buf, strlen(buf) + 1)) { + perror(path); + exit(EXIT_FAILURE); + } +} + +int thp_read_string(const char *name, const char * const strings[]) +{ + char path[PATH_MAX]; + char buf[256]; + char *c; + int ret; + + ret = snprintf(path, PATH_MAX, THP_SYSFS "%s", name); + if (ret >= PATH_MAX) { + printf("%s: Pathname is too long\n", __func__); + exit(EXIT_FAILURE); + } + + if (!read_file(path, buf, sizeof(buf))) { + perror(path); + exit(EXIT_FAILURE); + } + + c = strchr(buf, '['); + if (!c) { + printf("%s: Parse failure\n", __func__); + exit(EXIT_FAILURE); + } + + c++; + memmove(buf, c, sizeof(buf) - (c - buf)); + + c = strchr(buf, ']'); + if (!c) { + printf("%s: Parse failure\n", __func__); + exit(EXIT_FAILURE); + } + *c = '\0'; + + ret = 0; + while (strings[ret]) { + if (!strcmp(strings[ret], buf)) + return ret; + ret++; + } + + printf("Failed to parse %s\n", name); + exit(EXIT_FAILURE); +} + +void thp_write_string(const char *name, const char *val) +{ + char path[PATH_MAX]; + int ret; + + ret = snprintf(path, PATH_MAX, THP_SYSFS "%s", name); + if (ret >= PATH_MAX) { + printf("%s: Pathname is too long\n", __func__); + exit(EXIT_FAILURE); + } + + if (!write_file(path, val, strlen(val) + 1)) { + perror(path); + exit(EXIT_FAILURE); + } +} + +const unsigned long thp_read_num(const char *name) +{ + char path[PATH_MAX]; + int ret; + + ret = snprintf(path, PATH_MAX, THP_SYSFS "%s", name); + if (ret >= PATH_MAX) { + printf("%s: Pathname is too long\n", __func__); + exit(EXIT_FAILURE); + } + return read_num(path); +} + +void thp_write_num(const char *name, unsigned long num) +{ + char path[PATH_MAX]; + int ret; + + ret = snprintf(path, PATH_MAX, THP_SYSFS "%s", name); + if (ret >= PATH_MAX) { + printf("%s: Pathname is too long\n", __func__); + exit(EXIT_FAILURE); + } + write_num(path, num); +} + +void thp_read_settings(struct thp_settings *settings) +{ + *settings = (struct thp_settings) { + .thp_enabled = thp_read_string("enabled", thp_enabled_strings), + .thp_defrag = thp_read_string("defrag", thp_defrag_strings), + .shmem_enabled = + 
thp_read_string("shmem_enabled", shmem_enabled_strings), + .use_zero_page = thp_read_num("use_zero_page"), + }; + settings->khugepaged = (struct khugepaged_settings) { + .defrag = thp_read_num("khugepaged/defrag"), + .alloc_sleep_millisecs = + thp_read_num("khugepaged/alloc_sleep_millisecs"), + .scan_sleep_millisecs = + thp_read_num("khugepaged/scan_sleep_millisecs"), + .max_ptes_none = thp_read_num("khugepaged/max_ptes_none"), + .max_ptes_swap = thp_read_num("khugepaged/max_ptes_swap"), + .max_ptes_shared = thp_read_num("khugepaged/max_ptes_shared"), + .pages_to_scan = thp_read_num("khugepaged/pages_to_scan"), + }; + if (dev_queue_read_ahead_path[0]) + settings->read_ahead_kb = read_num(dev_queue_read_ahead_path); +} + +void thp_write_settings(struct thp_settings *settings) +{ + struct khugepaged_settings *khugepaged = &settings->khugepaged; + + thp_write_string("enabled", thp_enabled_strings[settings->thp_enabled]); + thp_write_string("defrag", thp_defrag_strings[settings->thp_defrag]); + thp_write_string("shmem_enabled", + shmem_enabled_strings[settings->shmem_enabled]); + thp_write_num("use_zero_page", settings->use_zero_page); + + thp_write_num("khugepaged/defrag", khugepaged->defrag); + thp_write_num("khugepaged/alloc_sleep_millisecs", + khugepaged->alloc_sleep_millisecs); + thp_write_num("khugepaged/scan_sleep_millisecs", + khugepaged->scan_sleep_millisecs); + thp_write_num("khugepaged/max_ptes_none", khugepaged->max_ptes_none); + thp_write_num("khugepaged/max_ptes_swap", khugepaged->max_ptes_swap); + thp_write_num("khugepaged/max_ptes_shared", khugepaged->max_ptes_shared); + thp_write_num("khugepaged/pages_to_scan", khugepaged->pages_to_scan); + + if (dev_queue_read_ahead_path[0]) + write_num(dev_queue_read_ahead_path, settings->read_ahead_kb); +} + +struct thp_settings *thp_current_settings(void) +{ + if (!settings_index) { + printf("Fail: No settings set"); + exit(EXIT_FAILURE); + } + return settings_stack + settings_index - 1; +} + +void thp_push_settings(struct thp_settings *settings) +{ + if (settings_index >= MAX_SETTINGS_DEPTH) { + printf("Fail: Settings stack exceeded"); + exit(EXIT_FAILURE); + } + settings_stack[settings_index++] = *settings; + thp_write_settings(thp_current_settings()); +} + +void thp_pop_settings(void) +{ + if (settings_index <= 0) { + printf("Fail: Settings stack empty"); + exit(EXIT_FAILURE); + } + --settings_index; + thp_write_settings(thp_current_settings()); +} + +void thp_restore_settings(void) +{ + thp_write_settings(&saved_settings); +} + +void thp_save_settings(void) +{ + thp_read_settings(&saved_settings); +} + +void thp_set_read_ahead_path(char *path) +{ + if (!path) { + dev_queue_read_ahead_path[0] = '\0'; + return; + } + + strncpy(dev_queue_read_ahead_path, path, + sizeof(dev_queue_read_ahead_path)); + dev_queue_read_ahead_path[sizeof(dev_queue_read_ahead_path) - 1] = '\0'; +} diff --git a/tools/testing/selftests/mm/thp_settings.h b/tools/testing/selftests/mm/thp_settings.h new file mode 100644 index 000000000000..ff3d98c30617 --- /dev/null +++ b/tools/testing/selftests/mm/thp_settings.h @@ -0,0 +1,71 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __THP_SETTINGS_H__ +#define __THP_SETTINGS_H__ + +#include +#include +#include + +enum thp_enabled { + THP_ALWAYS, + THP_MADVISE, + THP_NEVER, +}; + +enum thp_defrag { + THP_DEFRAG_ALWAYS, + THP_DEFRAG_DEFER, + THP_DEFRAG_DEFER_MADVISE, + THP_DEFRAG_MADVISE, + THP_DEFRAG_NEVER, +}; + +enum shmem_enabled { + SHMEM_ALWAYS, + SHMEM_WITHIN_SIZE, + SHMEM_ADVISE, + SHMEM_NEVER, + SHMEM_DENY, + 
SHMEM_FORCE, +}; + +struct khugepaged_settings { + bool defrag; + unsigned int alloc_sleep_millisecs; + unsigned int scan_sleep_millisecs; + unsigned int max_ptes_none; + unsigned int max_ptes_swap; + unsigned int max_ptes_shared; + unsigned long pages_to_scan; +}; + +struct thp_settings { + enum thp_enabled thp_enabled; + enum thp_defrag thp_defrag; + enum shmem_enabled shmem_enabled; + bool use_zero_page; + struct khugepaged_settings khugepaged; + unsigned long read_ahead_kb; +}; + +int read_file(const char *path, char *buf, size_t buflen); +int write_file(const char *path, const char *buf, size_t buflen); +const unsigned long read_num(const char *path); +void write_num(const char *path, unsigned long num); + +int thp_read_string(const char *name, const char * const strings[]); +void thp_write_string(const char *name, const char *val); +const unsigned long thp_read_num(const char *name); +void thp_write_num(const char *name, unsigned long num); + +void thp_write_settings(struct thp_settings *settings); +void thp_read_settings(struct thp_settings *settings); +struct thp_settings *thp_current_settings(void); +void thp_push_settings(struct thp_settings *settings); +void thp_pop_settings(void); +void thp_restore_settings(void); +void thp_save_settings(void); + +void thp_set_read_ahead_path(char *path); + +#endif /* __THP_SETTINGS_H__ */
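One detail worth calling out: the THP sysfs string controls report every possible value on a single line with the active one in brackets (for example "always [madvise] never"), which is why thp_read_string() scans for '[' and ']'. A standalone sketch of that extraction, using an example string instead of the real sysfs file:

/* Sketch of the bracketed-value parse done by thp_read_string(). */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char buf[] = "always [madvise] never\n";	/* typical sysfs output */
	char *start, *end;

	start = strchr(buf, '[');
	end = start ? strchr(start, ']') : NULL;
	if (!start || !end) {
		fprintf(stderr, "parse failure\n");
		return 1;
	}
	*end = '\0';
	printf("active value: %s\n", start + 1);	/* -> "madvise" */
	return 0;
}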
From patchwork Wed Nov 15 13:27:31 2023
From: Ryan Roberts
To: Andrew Morton , Matthew Wilcox , Yin Fengwei , David Hildenbrand , Yu Zhao , Catalin Marinas , Anshuman Khandual , Yang Shi , "Huang, Ying" , Zi Yan , Luis Chamberlain , Itaru Kitayama , "Kirill A. Shutemov" , John Hubbard , David Rientjes , Vlastimil Babka , Hugh Dickins , Kefeng Wang
Cc: Ryan Roberts , linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 07/10] selftests/mm: Support small-sized THP interface in thp_settings
Date: Wed, 15 Nov 2023 13:27:31 +0000
Message-Id: <20231115132734.931023-8-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231115132734.931023-1-ryan.roberts@arm.com>
References: <20231115132734.931023-1-ryan.roberts@arm.com>
MIME-Version: 1.0

Save and restore the new per-size hugepage enabled setting, if available on the running kernel.
Since the number of per-size directories is not fixed, solve this as simply as possible by catering for a maximum number in the thp_settings struct (20). Each array index is the order. The value of THP_NEVER is changed to 0 so that all of these new settings default to THP_NEVER and the user only needs to fill in the ones they want to enable. Signed-off-by: Ryan Roberts --- tools/testing/selftests/mm/khugepaged.c | 3 ++ tools/testing/selftests/mm/thp_settings.c | 55 ++++++++++++++++++++++- tools/testing/selftests/mm/thp_settings.h | 11 ++++- 3 files changed, 67 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/mm/khugepaged.c b/tools/testing/selftests/mm/khugepaged.c index b15e7fd70176..473ba095cffd 100644 --- a/tools/testing/selftests/mm/khugepaged.c +++ b/tools/testing/selftests/mm/khugepaged.c @@ -1141,6 +1141,7 @@ static void parse_test_type(int argc, const char **argv) int main(int argc, const char **argv) { + int hpage_pmd_order; struct thp_settings default_settings = { .thp_enabled = THP_MADVISE, .thp_defrag = THP_DEFRAG_ALWAYS, @@ -1175,11 +1176,13 @@ int main(int argc, const char **argv) exit(EXIT_FAILURE); } hpage_pmd_nr = hpage_pmd_size / page_size; + hpage_pmd_order = __builtin_ctz(hpage_pmd_nr); default_settings.khugepaged.max_ptes_none = hpage_pmd_nr - 1; default_settings.khugepaged.max_ptes_swap = hpage_pmd_nr / 8; default_settings.khugepaged.max_ptes_shared = hpage_pmd_nr / 2; default_settings.khugepaged.pages_to_scan = hpage_pmd_nr * 8; + default_settings.hugepages[hpage_pmd_order].enabled = THP_GLOBAL; save_settings(); thp_push_settings(&default_settings); diff --git a/tools/testing/selftests/mm/thp_settings.c b/tools/testing/selftests/mm/thp_settings.c index 5e8ec792cac7..3296995c8e4b 100644 --- a/tools/testing/selftests/mm/thp_settings.c +++ b/tools/testing/selftests/mm/thp_settings.c @@ -16,9 +16,10 @@ static struct thp_settings saved_settings; static char dev_queue_read_ahead_path[PATH_MAX]; static const char * const thp_enabled_strings[] = { + "never", "always", + "global", "madvise", - "never", NULL }; @@ -198,6 +199,10 @@ void thp_write_num(const char *name, unsigned long num) void thp_read_settings(struct thp_settings *settings) { + unsigned long orders = thp_supported_orders(); + char path[PATH_MAX]; + int i; + *settings = (struct thp_settings) { .thp_enabled = thp_read_string("enabled", thp_enabled_strings), .thp_defrag = thp_read_string("defrag", thp_defrag_strings), @@ -218,11 +223,26 @@ void thp_read_settings(struct thp_settings *settings) }; if (dev_queue_read_ahead_path[0]) settings->read_ahead_kb = read_num(dev_queue_read_ahead_path); + + for (i = 0; i < NR_ORDERS; i++) { + if (!((1 << i) & orders)) { + settings->hugepages[i].enabled = THP_NEVER; + continue; + } + snprintf(path, PATH_MAX, "hugepages-%ukB/enabled", + (getpagesize() >> 10) << i); + settings->hugepages[i].enabled = + thp_read_string(path, thp_enabled_strings); + } } void thp_write_settings(struct thp_settings *settings) { struct khugepaged_settings *khugepaged = &settings->khugepaged; + unsigned long orders = thp_supported_orders(); + char path[PATH_MAX]; + int enabled; + int i; thp_write_string("enabled", thp_enabled_strings[settings->thp_enabled]); thp_write_string("defrag", thp_defrag_strings[settings->thp_defrag]); @@ -242,6 +262,15 @@ void thp_write_settings(struct thp_settings *settings) if (dev_queue_read_ahead_path[0]) write_num(dev_queue_read_ahead_path, settings->read_ahead_kb); + + for (i = 0; i < NR_ORDERS; i++) { + if (!((1 << i) & orders)) + continue; + 
snprintf(path, PATH_MAX, "hugepages-%ukB/enabled", + (getpagesize() >> 10) << i); + enabled = settings->hugepages[i].enabled; + thp_write_string(path, thp_enabled_strings[enabled]); + } } struct thp_settings *thp_current_settings(void) @@ -294,3 +323,27 @@ void thp_set_read_ahead_path(char *path) sizeof(dev_queue_read_ahead_path)); dev_queue_read_ahead_path[sizeof(dev_queue_read_ahead_path) - 1] = '\0'; } + +unsigned long thp_supported_orders(void) +{ + unsigned long orders = 0; + char path[PATH_MAX]; + char buf[256]; + int ret; + int i; + + for (i = 0; i < NR_ORDERS; i++) { + ret = snprintf(path, PATH_MAX, THP_SYSFS "hugepages-%ukB/enabled", + (getpagesize() >> 10) << i); + if (ret >= PATH_MAX) { + printf("%s: Pathname is too long\n", __func__); + exit(EXIT_FAILURE); + } + + ret = read_file(path, buf, sizeof(buf)); + if (ret) + orders |= 1UL << i; + } + + return orders; +} diff --git a/tools/testing/selftests/mm/thp_settings.h b/tools/testing/selftests/mm/thp_settings.h index ff3d98c30617..8e71e323546c 100644 --- a/tools/testing/selftests/mm/thp_settings.h +++ b/tools/testing/selftests/mm/thp_settings.h @@ -7,9 +7,10 @@ #include <stdint.h> enum thp_enabled { + THP_NEVER, THP_ALWAYS, + THP_GLOBAL, THP_MADVISE, - THP_NEVER, }; enum thp_defrag { @@ -29,6 +30,12 @@ enum shmem_enabled { SHMEM_FORCE, }; +#define NR_ORDERS 20 + +struct hugepages_settings { + enum thp_enabled enabled; +}; + struct khugepaged_settings { bool defrag; unsigned int alloc_sleep_millisecs; @@ -46,6 +53,7 @@ struct thp_settings { bool use_zero_page; struct khugepaged_settings khugepaged; unsigned long read_ahead_kb; + struct hugepages_settings hugepages[NR_ORDERS]; }; int read_file(const char *path, char *buf, size_t buflen); @@ -67,5 +75,6 @@ void thp_restore_settings(void); void thp_save_settings(void); void thp_set_read_ahead_path(char *path); +unsigned long thp_supported_orders(void); #endif /* __THP_SETTINGS_H__ */
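The per-size directory name encodes the folio size: for order i it is hugepages-<page_kB << i>kB, per the (getpagesize() >> 10) << i expression above. A quick standalone check of that arithmetic (the printed values are computed here, not quoted from the patch):

/* Worked check of the (getpagesize() >> 10) << i naming used above. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	long page_kb = sysconf(_SC_PAGESIZE) >> 10;	/* 4 on a 4K kernel */
	int i;

	/* With 4K pages: order 1 -> hugepages-8kB, order 4 -> hugepages-64kB,
	 * order 9 -> hugepages-2048kB (the PMD size). */
	for (i = 0; i < 10; i++)
		printf("order %2d -> hugepages-%ldkB\n", i, page_kb << i);
	return 0;
}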
From patchwork Wed Nov 15 13:27:32 2023
From: Ryan Roberts
To: Andrew Morton , Matthew Wilcox , Yin Fengwei , David Hildenbrand , Yu Zhao , Catalin Marinas , Anshuman Khandual , Yang Shi , "Huang, Ying" , Zi Yan , Luis Chamberlain , Itaru Kitayama , "Kirill A. Shutemov" , John Hubbard , David Rientjes , Vlastimil Babka , Hugh Dickins , Kefeng Wang
Cc: Ryan Roberts , linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 08/10] selftests/mm/khugepaged: Enlighten for small-sized THP
Date: Wed, 15 Nov 2023 13:27:32 +0000
Message-Id: <20231115132734.931023-9-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231115132734.931023-1-ryan.roberts@arm.com>
References: <20231115132734.931023-1-ryan.roberts@arm.com>
MIME-Version: 1.0
The `collapse_max_ptes_none` test was previously failing when small-sized THP had enabled="always". The root cause is that the test faults in 1 page less than the threshold it set for collapsing. But when using small-sized THP we "over allocate", so the threshold is passed and collapse unexpectedly succeeds.

Solve this by enlightening the khugepaged selftest. Add a command line option to pass in the desired small-sized THP that should be used for all anonymous allocations. The harness will then explicitly configure small-sized THP as requested and modify the `collapse_max_ptes_none` test so that it faults in the threshold minus the number of pages in the configured small-sized THP. If no command line option is provided, default to order 0, as per previous behaviour.

I chose to use an order in the command line interface, since this makes the interface agnostic of base page size, making it easier to invoke from run_vmtests.sh.

Signed-off-by: Ryan Roberts --- tools/testing/selftests/mm/khugepaged.c | 48 +++++++++++++++++------ tools/testing/selftests/mm/run_vmtests.sh | 2 + 2 files changed, 39 insertions(+), 11 deletions(-) diff --git a/tools/testing/selftests/mm/khugepaged.c b/tools/testing/selftests/mm/khugepaged.c index 473ba095cffd..4d24f2eb158e 100644 --- a/tools/testing/selftests/mm/khugepaged.c +++ b/tools/testing/selftests/mm/khugepaged.c @@ -28,6 +28,7 @@ static unsigned long hpage_pmd_size; static unsigned long page_size; static int hpage_pmd_nr; +static int anon_order; #define PID_SMAPS "/proc/self/smaps" #define TEST_FILE "collapse_test_file" @@ -607,6 +608,11 @@ static bool is_tmpfs(struct mem_ops *ops) return ops == &__file_ops && finfo.type == VMA_SHMEM; } +static bool is_anon(struct mem_ops *ops) +{ + return ops == &__anon_ops; +} + static void alloc_at_fault(void) { struct thp_settings settings = *thp_current_settings(); @@ -673,6 +679,7 @@ static void collapse_max_ptes_none(struct collapse_context *c, struct mem_ops *o int max_ptes_none = hpage_pmd_nr / 2; struct thp_settings settings = *thp_current_settings(); void *p; + int fault_nr_pages = is_anon(ops) ?
1 << anon_order : 1; settings.khugepaged.max_ptes_none = max_ptes_none; thp_push_settings(&settings); @@ -686,10 +693,10 @@ static void collapse_max_ptes_none(struct collapse_context *c, struct mem_ops *o goto skip; } - ops->fault(p, 0, (hpage_pmd_nr - max_ptes_none - 1) * page_size); + ops->fault(p, 0, (hpage_pmd_nr - max_ptes_none - fault_nr_pages) * page_size); c->collapse("Maybe collapse with max_ptes_none exceeded", p, 1, ops, !c->enforce_pte_scan_limits); - validate_memory(p, 0, (hpage_pmd_nr - max_ptes_none - 1) * page_size); + validate_memory(p, 0, (hpage_pmd_nr - max_ptes_none - fault_nr_pages) * page_size); if (c->enforce_pte_scan_limits) { ops->fault(p, 0, (hpage_pmd_nr - max_ptes_none) * page_size); @@ -1076,7 +1083,7 @@ static void madvise_retracted_page_tables(struct collapse_context *c, static void usage(void) { - fprintf(stderr, "\nUsage: ./khugepaged <test type> [dir]\n\n"); + fprintf(stderr, "\nUsage: ./khugepaged [OPTIONS] <test type> [dir]\n\n"); fprintf(stderr, "\t<test type>\t: <context>:<mem_type>\n"); fprintf(stderr, "\t<context>\t: [all|khugepaged|madvise]\n"); fprintf(stderr, "\t<mem_type>\t: [all|anon|file|shmem]\n"); @@ -1085,15 +1092,34 @@ static void usage(void) fprintf(stderr, "\tCONFIG_READ_ONLY_THP_FOR_FS=y\n"); fprintf(stderr, "\n\tif [dir] is a (sub)directory of a tmpfs mount, tmpfs must be\n"); fprintf(stderr, "\tmounted with huge=madvise option for khugepaged tests to work\n"); + fprintf(stderr, "\n\tSupported Options:\n"); + fprintf(stderr, "\t\t-h: This help message.\n"); + fprintf(stderr, "\t\t-s: Small-sized THP size, expressed as page order.\n"); + fprintf(stderr, "\t\t Defaults to 0. Use this size for anon allocations.\n"); exit(1); } -static void parse_test_type(int argc, const char **argv) +static void parse_test_type(int argc, char **argv) { + int opt; char *buf; const char *token; - if (argc == 1) { + while ((opt = getopt(argc, argv, "s:h")) != -1) { + switch (opt) { + case 's': + anon_order = atoi(optarg); + break; + case 'h': + default: + usage(); + } + } + + argv += optind; + argc -= optind; + + if (argc == 0) { /* Backwards compatibility */ khugepaged_context = &__khugepaged_context; madvise_context = &__madvise_context; @@ -1101,7 +1127,7 @@ static void parse_test_type(int argc, const char **argv) return; } - buf = strdup(argv[1]); + buf = strdup(argv[0]); token = strsep(&buf, ":"); if (!strcmp(token, "all")) { @@ -1135,11 +1161,13 @@ static void parse_test_type(int argc, const char **argv) if (!file_ops) return; - if (argc != 3) + if (argc != 2) usage(); + + get_finfo(argv[1]); } -int main(int argc, const char **argv) +int main(int argc, char **argv) { int hpage_pmd_order; struct thp_settings default_settings = { @@ -1164,9 +1192,6 @@ int main(int argc, char **argv) parse_test_type(argc, argv); - if (file_ops) - get_finfo(argv[2]); - setbuf(stdout, NULL); page_size = getpagesize(); @@ -1183,6 +1208,7 @@ int main(int argc, const char **argv) default_settings.khugepaged.max_ptes_shared = hpage_pmd_nr / 2; default_settings.khugepaged.pages_to_scan = hpage_pmd_nr * 8; default_settings.hugepages[hpage_pmd_order].enabled = THP_GLOBAL; + default_settings.hugepages[anon_order].enabled = THP_ALWAYS; save_settings(); thp_push_settings(&default_settings); diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh index 00757445278e..f3fa2238daef 100755 --- a/tools/testing/selftests/mm/run_vmtests.sh +++ b/tools/testing/selftests/mm/run_vmtests.sh @@ -359,6 +359,8 @@ CATEGORY="cow" run_test ./cow CATEGORY="thp" run_test ./khugepaged +CATEGORY="thp" run_test ./khugepaged -s 2
+ CATEGORY="thp" run_test ./transhuge-stress -d 20 CATEGORY="thp" run_test ./split_huge_page_test From patchwork Wed Nov 15 13:27:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ryan Roberts X-Patchwork-Id: 13456683 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id ABE30C48BF9 for ; Wed, 15 Nov 2023 13:28:28 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 415816B034B; Wed, 15 Nov 2023 08:28:28 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 3C69E6B034C; Wed, 15 Nov 2023 08:28:28 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 23FCC6B034D; Wed, 15 Nov 2023 08:28:28 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 10CED6B034B for ; Wed, 15 Nov 2023 08:28:28 -0500 (EST) Received: from smtpin18.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id BB07E16081C for ; Wed, 15 Nov 2023 13:28:27 +0000 (UTC) X-FDA: 81460267854.18.31814F7 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf23.hostedemail.com (Postfix) with ESMTP id 21E4414000F for ; Wed, 15 Nov 2023 13:28:25 +0000 (UTC) Authentication-Results: imf23.hostedemail.com; dkim=none; spf=pass (imf23.hostedemail.com: domain of ryan.roberts@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=ryan.roberts@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1700054906; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=rAtan5yxh3gshsLYEs1fIKCCaelJr+Yedz3NEnvQJMU=; b=BME8np9LH52txG4V6dXCvhZAXhtMHzXrgauZ0cKhX424pHxVFt1Q69RXtQVAYcmHiV5ajQ M1j5qiVLFho/HS1cv0BlS3ZPUvkmZLyNDosu9fv08iqTekHdRgKxV66fvgNrJ7tHFkWt3y JADPE2N2pL0WtNCTa8WgHS6bw6m9UmA= ARC-Authentication-Results: i=1; imf23.hostedemail.com; dkim=none; spf=pass (imf23.hostedemail.com: domain of ryan.roberts@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=ryan.roberts@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1700054906; a=rsa-sha256; cv=none; b=BHdXzK2mCEI8z2/EHkrLi82vAgbnjr1cK8DrZfs4aX9G4wyAAVrl+v+MErk+sV/H0mEaFE OaiuNGlQJ2h0lVKjIUgsIDrlBGqOewkgbS4AutyiVQF5sErBd5iAg6z1/hx//VMA24M+o0 b9gZXceTgW9BFdoSuJPfjElcULkmMxA= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 08A441650; Wed, 15 Nov 2023 05:29:11 -0800 (PST) Received: from e125769.cambridge.arm.com (e125769.cambridge.arm.com [10.1.196.26]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 923993F7B4; Wed, 15 Nov 2023 05:28:22 -0800 (PST) From: Ryan Roberts To: Andrew Morton , Matthew Wilcox , Yin Fengwei , David Hildenbrand , Yu Zhao , Catalin Marinas , Anshuman Khandual , Yang Shi , "Huang, Ying" , Zi Yan , Luis Chamberlain , Itaru Kitayama , "Kirill A. 
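A note on the new option: getopt() consumes "-s <order>" first and argv is
then shifted by optind, so the pre-existing positional "context:mem_type"
argument and the optional [dir] keep their old meaning. Illustrative
invocations (assuming a kernel that supports order-2 anon folios):

	./khugepaged                        # legacy behaviour: all tests, order-0 anon allocations
	./khugepaged -s 2 khugepaged:anon   # khugepaged context, anon memory at order 2 (16K with 4K pages)
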
From patchwork Wed Nov 15 13:27:33 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13456683
From: Ryan Roberts
To: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand, Yu Zhao,
 Catalin Marinas, Anshuman Khandual, Yang Shi, "Huang, Ying", Zi Yan,
 Luis Chamberlain, Itaru Kitayama, "Kirill A. Shutemov", John Hubbard,
 David Rientjes, Vlastimil Babka, Hugh Dickins, Kefeng Wang
Cc: Ryan Roberts, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v7 09/10] selftests/mm/cow: Generalize do_run_with_thp() helper
Date: Wed, 15 Nov 2023 13:27:33 +0000
Message-Id: <20231115132734.931023-10-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231115132734.931023-1-ryan.roberts@arm.com>
References: <20231115132734.931023-1-ryan.roberts@arm.com>

do_run_with_thp() prepares (PMD-sized) THP memory into different states
before running tests. With the introduction of small-sized THP, we would
like to reuse this logic to also test those smaller THP sizes. So let's
add a size parameter which tells the function what size THP it should
operate on.

A separate commit will utilize this change to add new tests for
small-sized THP, where available.

Signed-off-by: Ryan Roberts
---
 tools/testing/selftests/mm/cow.c | 146 +++++++++++++++++--------------
 1 file changed, 79 insertions(+), 67 deletions(-)
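The size parameter keeps the helper's alignment trick intact: mmap() only
guarantees page alignment, so the helper over-allocates twice the requested
size and rounds the start address up to a multiple of that (power-of-two)
size. A minimal standalone sketch of the same idiom, with a hypothetical
helper name that is not part of the patch:

	#include <stddef.h>
	#include <stdint.h>
	#include <sys/mman.h>

	/* Return a naturally aligned region of length size (a power of two). */
	static void *alloc_naturally_aligned(size_t size)
	{
		/* Over-allocate so a size-aligned start must fall inside. */
		char *raw = mmap(NULL, 2 * size, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (raw == MAP_FAILED)
			return NULL;
		/* Round up to the next multiple of size, as do_run_with_thp() does. */
		return (void *)(((uintptr_t)raw + size) & ~((uintptr_t)size - 1));
	}

	int main(void)
	{
		void *p = alloc_naturally_aligned(16 * 4096); /* e.g. a 64K region */
		return p == NULL;
	}

(For brevity the sketch leaks the unused halves of the mapping; the real
helper simply munmap()s the whole 2 * size region when it is done.)
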
diff --git a/tools/testing/selftests/mm/cow.c b/tools/testing/selftests/mm/cow.c
index 7324ce5363c0..d03c453cfd5c 100644
--- a/tools/testing/selftests/mm/cow.c
+++ b/tools/testing/selftests/mm/cow.c
@@ -32,7 +32,7 @@
 
 static size_t pagesize;
 static int pagemap_fd;
-static size_t thpsize;
+static size_t pmdsize;
 static int nr_hugetlbsizes;
 static size_t hugetlbsizes[10];
 static int gup_fd;
@@ -734,14 +734,14 @@ enum thp_run {
 	THP_RUN_PARTIAL_SHARED,
 };
 
-static void do_run_with_thp(test_fn fn, enum thp_run thp_run)
+static void do_run_with_thp(test_fn fn, enum thp_run thp_run, size_t size)
 {
 	char *mem, *mmap_mem, *tmp, *mremap_mem = MAP_FAILED;
-	size_t size, mmap_size, mremap_size;
+	size_t mmap_size, mremap_size;
 	int ret;
 
-	/* For alignment purposes, we need twice the thp size. */
-	mmap_size = 2 * thpsize;
+	/* For alignment purposes, we need twice the requested size. */
+	mmap_size = 2 * size;
 	mmap_mem = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
 			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
 	if (mmap_mem == MAP_FAILED) {
@@ -749,36 +749,40 @@ static void do_run_with_thp(test_fn fn, enum thp_run thp_run)
 		return;
 	}
 
-	/* We need a THP-aligned memory area. */
-	mem = (char *)(((uintptr_t)mmap_mem + thpsize) & ~(thpsize - 1));
+	/* We need to naturally align the memory area. */
+	mem = (char *)(((uintptr_t)mmap_mem + size) & ~(size - 1));
 
-	ret = madvise(mem, thpsize, MADV_HUGEPAGE);
+	ret = madvise(mem, size, MADV_HUGEPAGE);
 	if (ret) {
 		ksft_test_result_fail("MADV_HUGEPAGE failed\n");
 		goto munmap;
 	}
 
 	/*
-	 * Try to populate a THP. Touch the first sub-page and test if we get
-	 * another sub-page populated automatically.
+	 * Try to populate a THP. Touch the first sub-page and test if
+	 * we get the last sub-page populated automatically.
 	 */
 	mem[0] = 0;
-	if (!pagemap_is_populated(pagemap_fd, mem + pagesize)) {
+	if (!pagemap_is_populated(pagemap_fd, mem + size - pagesize)) {
 		ksft_test_result_skip("Did not get a THP populated\n");
 		goto munmap;
 	}
-	memset(mem, 0, thpsize);
+	memset(mem, 0, size);
 
-	size = thpsize;
 	switch (thp_run) {
 	case THP_RUN_PMD:
 	case THP_RUN_PMD_SWAPOUT:
+		if (size != pmdsize) {
+			ksft_test_result_fail("test bug: can't PMD-map size\n");
+			goto munmap;
+		}
 		break;
 	case THP_RUN_PTE:
 	case THP_RUN_PTE_SWAPOUT:
 		/*
 		 * Trigger PTE-mapping the THP by temporarily mapping a single
-		 * subpage R/O.
+		 * subpage R/O. This is a noop if the THP is not pmdsize (and
+		 * therefore already PTE-mapped).
 		 */
 		ret = mprotect(mem + pagesize, pagesize, PROT_READ);
 		if (ret) {
@@ -797,7 +801,7 @@ static void do_run_with_thp(test_fn fn, enum thp_run thp_run)
 	 * Discard all but a single subpage of that PTE-mapped THP. What
 	 * remains is a single PTE mapping a single subpage.
 	 */
-	ret = madvise(mem + pagesize, thpsize - pagesize, MADV_DONTNEED);
+	ret = madvise(mem + pagesize, size - pagesize, MADV_DONTNEED);
 	if (ret) {
 		ksft_test_result_fail("MADV_DONTNEED failed\n");
 		goto munmap;
@@ -809,7 +813,7 @@ static void do_run_with_thp(test_fn fn, enum thp_run thp_run)
 	 * Remap half of the THP. We need some new memory location
 	 * for that.
 	 */
-	mremap_size = thpsize / 2;
+	mremap_size = size / 2;
 	mremap_mem = mmap(NULL, mremap_size, PROT_NONE,
 			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
 	if (mem == MAP_FAILED) {
@@ -830,7 +834,7 @@ static void do_run_with_thp(test_fn fn, enum thp_run thp_run)
 	 * child. This will result in some parts of the THP never
 	 * have been shared.
 	 */
-	ret = madvise(mem + pagesize, thpsize - pagesize, MADV_DONTFORK);
+	ret = madvise(mem + pagesize, size - pagesize, MADV_DONTFORK);
 	if (ret) {
 		ksft_test_result_fail("MADV_DONTFORK failed\n");
 		goto munmap;
@@ -844,7 +848,7 @@ static void do_run_with_thp(test_fn fn, enum thp_run thp_run)
 	}
 	wait(&ret);
 	/* Allow for sharing all pages again. */
-	ret = madvise(mem + pagesize, thpsize - pagesize, MADV_DOFORK);
+	ret = madvise(mem + pagesize, size - pagesize, MADV_DOFORK);
 	if (ret) {
 		ksft_test_result_fail("MADV_DOFORK failed\n");
 		goto munmap;
@@ -875,52 +879,60 @@ static void do_run_with_thp(test_fn fn, enum thp_run thp_run)
 		munmap(mremap_mem, mremap_size);
 }
 
-static void run_with_thp(test_fn fn, const char *desc)
+static void run_with_thp(test_fn fn, const char *desc, size_t size)
 {
-	ksft_print_msg("[RUN] %s ... with THP\n", desc);
-	do_run_with_thp(fn, THP_RUN_PMD);
+	ksft_print_msg("[RUN] %s ... with THP (%zu kB)\n",
+		       desc, size / 1024);
+	do_run_with_thp(fn, THP_RUN_PMD, size);
 }
 
-static void run_with_thp_swap(test_fn fn, const char *desc)
+static void run_with_thp_swap(test_fn fn, const char *desc, size_t size)
 {
-	ksft_print_msg("[RUN] %s ... with swapped-out THP\n", desc);
-	do_run_with_thp(fn, THP_RUN_PMD_SWAPOUT);
+	ksft_print_msg("[RUN] %s ... with swapped-out THP (%zu kB)\n",
+		       desc, size / 1024);
+	do_run_with_thp(fn, THP_RUN_PMD_SWAPOUT, size);
 }
 
-static void run_with_pte_mapped_thp(test_fn fn, const char *desc)
+static void run_with_pte_mapped_thp(test_fn fn, const char *desc, size_t size)
 {
-	ksft_print_msg("[RUN] %s ... with PTE-mapped THP\n", desc);
-	do_run_with_thp(fn, THP_RUN_PTE);
+	ksft_print_msg("[RUN] %s ... with PTE-mapped THP (%zu kB)\n",
+		       desc, size / 1024);
+	do_run_with_thp(fn, THP_RUN_PTE, size);
 }
 
-static void run_with_pte_mapped_thp_swap(test_fn fn, const char *desc)
+static void run_with_pte_mapped_thp_swap(test_fn fn, const char *desc, size_t size)
 {
-	ksft_print_msg("[RUN] %s ... with swapped-out, PTE-mapped THP\n", desc);
-	do_run_with_thp(fn, THP_RUN_PTE_SWAPOUT);
+	ksft_print_msg("[RUN] %s ... with swapped-out, PTE-mapped THP (%zu kB)\n",
+		       desc, size / 1024);
+	do_run_with_thp(fn, THP_RUN_PTE_SWAPOUT, size);
 }
 
-static void run_with_single_pte_of_thp(test_fn fn, const char *desc)
+static void run_with_single_pte_of_thp(test_fn fn, const char *desc, size_t size)
 {
-	ksft_print_msg("[RUN] %s ... with single PTE of THP\n", desc);
-	do_run_with_thp(fn, THP_RUN_SINGLE_PTE);
+	ksft_print_msg("[RUN] %s ... with single PTE of THP (%zu kB)\n",
+		       desc, size / 1024);
+	do_run_with_thp(fn, THP_RUN_SINGLE_PTE, size);
 }
 
-static void run_with_single_pte_of_thp_swap(test_fn fn, const char *desc)
+static void run_with_single_pte_of_thp_swap(test_fn fn, const char *desc, size_t size)
 {
-	ksft_print_msg("[RUN] %s ... with single PTE of swapped-out THP\n", desc);
-	do_run_with_thp(fn, THP_RUN_SINGLE_PTE_SWAPOUT);
+	ksft_print_msg("[RUN] %s ... with single PTE of swapped-out THP (%zu kB)\n",
+		       desc, size / 1024);
+	do_run_with_thp(fn, THP_RUN_SINGLE_PTE_SWAPOUT, size);
 }
 
-static void run_with_partial_mremap_thp(test_fn fn, const char *desc)
+static void run_with_partial_mremap_thp(test_fn fn, const char *desc, size_t size)
 {
-	ksft_print_msg("[RUN] %s ... with partially mremap()'ed THP\n", desc);
-	do_run_with_thp(fn, THP_RUN_PARTIAL_MREMAP);
+	ksft_print_msg("[RUN] %s ... with partially mremap()'ed THP (%zu kB)\n",
+		       desc, size / 1024);
+	do_run_with_thp(fn, THP_RUN_PARTIAL_MREMAP, size);
 }
 
-static void run_with_partial_shared_thp(test_fn fn, const char *desc)
+static void run_with_partial_shared_thp(test_fn fn, const char *desc, size_t size)
 {
-	ksft_print_msg("[RUN] %s ... with partially shared THP\n", desc);
-	do_run_with_thp(fn, THP_RUN_PARTIAL_SHARED);
+	ksft_print_msg("[RUN] %s ... with partially shared THP (%zu kB)\n",
+		       desc, size / 1024);
+	do_run_with_thp(fn, THP_RUN_PARTIAL_SHARED, size);
 }
 
 static void run_with_hugetlb(test_fn fn, const char *desc, size_t hugetlbsize)
@@ -1091,15 +1103,15 @@ static void run_anon_test_case(struct test_case const *test_case)
 
 	run_with_base_page(test_case->fn, test_case->desc);
 	run_with_base_page_swap(test_case->fn, test_case->desc);
-	if (thpsize) {
-		run_with_thp(test_case->fn, test_case->desc);
-		run_with_thp_swap(test_case->fn, test_case->desc);
-		run_with_pte_mapped_thp(test_case->fn, test_case->desc);
-		run_with_pte_mapped_thp_swap(test_case->fn, test_case->desc);
-		run_with_single_pte_of_thp(test_case->fn, test_case->desc);
-		run_with_single_pte_of_thp_swap(test_case->fn, test_case->desc);
-		run_with_partial_mremap_thp(test_case->fn, test_case->desc);
-		run_with_partial_shared_thp(test_case->fn, test_case->desc);
+	if (pmdsize) {
+		run_with_thp(test_case->fn, test_case->desc, pmdsize);
+		run_with_thp_swap(test_case->fn, test_case->desc, pmdsize);
+		run_with_pte_mapped_thp(test_case->fn, test_case->desc, pmdsize);
+		run_with_pte_mapped_thp_swap(test_case->fn, test_case->desc, pmdsize);
+		run_with_single_pte_of_thp(test_case->fn, test_case->desc, pmdsize);
+		run_with_single_pte_of_thp_swap(test_case->fn, test_case->desc, pmdsize);
+		run_with_partial_mremap_thp(test_case->fn, test_case->desc, pmdsize);
+		run_with_partial_shared_thp(test_case->fn, test_case->desc, pmdsize);
 	}
 	for (i = 0; i < nr_hugetlbsizes; i++)
 		run_with_hugetlb(test_case->fn, test_case->desc,
@@ -1120,7 +1132,7 @@ static int tests_per_anon_test_case(void)
 {
 	int tests = 2 + nr_hugetlbsizes;
 
-	if (thpsize)
+	if (pmdsize)
 		tests += 8;
 	return tests;
 }
@@ -1329,7 +1341,7 @@ static void run_anon_thp_test_cases(void)
 {
 	int i;
 
-	if (!thpsize)
+	if (!pmdsize)
 		return;
 
 	ksft_print_msg("[INFO] Anonymous THP tests\n");
@@ -1338,13 +1350,13 @@ static void run_anon_thp_test_cases(void)
 		struct test_case const *test_case = &anon_thp_test_cases[i];
 
 		ksft_print_msg("[RUN] %s\n", test_case->desc);
-		do_run_with_thp(test_case->fn, THP_RUN_PMD);
+		do_run_with_thp(test_case->fn, THP_RUN_PMD, pmdsize);
 	}
 }
 
 static int tests_per_anon_thp_test_case(void)
 {
-	return thpsize ? 1 : 0;
+	return pmdsize ? 1 : 0;
 }
 
 typedef void (*non_anon_test_fn)(char *mem, const char *smem, size_t size);
@@ -1419,7 +1431,7 @@ static void run_with_huge_zeropage(non_anon_test_fn fn, const char *desc)
 	}
 
 	/* For alignment purposes, we need twice the thp size. */
-	mmap_size = 2 * thpsize;
+	mmap_size = 2 * pmdsize;
 	mmap_mem = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
 			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
 	if (mmap_mem == MAP_FAILED) {
@@ -1434,11 +1446,11 @@ static void run_with_huge_zeropage(non_anon_test_fn fn, const char *desc)
 	}
 
 	/* We need a THP-aligned memory area. */
-	mem = (char *)(((uintptr_t)mmap_mem + thpsize) & ~(thpsize - 1));
-	smem = (char *)(((uintptr_t)mmap_smem + thpsize) & ~(thpsize - 1));
+	mem = (char *)(((uintptr_t)mmap_mem + pmdsize) & ~(pmdsize - 1));
+	smem = (char *)(((uintptr_t)mmap_smem + pmdsize) & ~(pmdsize - 1));
 
-	ret = madvise(mem, thpsize, MADV_HUGEPAGE);
-	ret |= madvise(smem, thpsize, MADV_HUGEPAGE);
+	ret = madvise(mem, pmdsize, MADV_HUGEPAGE);
+	ret |= madvise(smem, pmdsize, MADV_HUGEPAGE);
 	if (ret) {
 		ksft_test_result_fail("MADV_HUGEPAGE failed\n");
 		goto munmap;
@@ -1457,7 +1469,7 @@ static void run_with_huge_zeropage(non_anon_test_fn fn, const char *desc)
 		goto munmap;
 	}
 
-	fn(mem, smem, thpsize);
+	fn(mem, smem, pmdsize);
 munmap:
 	munmap(mmap_mem, mmap_size);
 	if (mmap_smem != MAP_FAILED)
@@ -1650,7 +1662,7 @@ static void run_non_anon_test_case(struct non_anon_test_case const *test_case)
 	run_with_zeropage(test_case->fn, test_case->desc);
 	run_with_memfd(test_case->fn, test_case->desc);
 	run_with_tmpfile(test_case->fn, test_case->desc);
-	if (thpsize)
+	if (pmdsize)
 		run_with_huge_zeropage(test_case->fn, test_case->desc);
 	for (i = 0; i < nr_hugetlbsizes; i++)
 		run_with_memfd_hugetlb(test_case->fn, test_case->desc,
@@ -1671,7 +1683,7 @@ static int tests_per_non_anon_test_case(void)
 {
 	int tests = 3 + nr_hugetlbsizes;
 
-	if (thpsize)
+	if (pmdsize)
 		tests += 1;
 	return tests;
 }
@@ -1681,10 +1693,10 @@ int main(int argc, char **argv)
 {
 	int err;
 
 	pagesize = getpagesize();
-	thpsize = read_pmd_pagesize();
-	if (thpsize)
-		ksft_print_msg("[INFO] detected THP size: %zu KiB\n",
-			       thpsize / 1024);
+	pmdsize = read_pmd_pagesize();
+	if (pmdsize)
+		ksft_print_msg("[INFO] detected PMD-mapped THP size: %zu KiB\n",
+			       pmdsize / 1024);
 	nr_hugetlbsizes = detect_hugetlb_page_sizes(hugetlbsizes,
 						    ARRAY_SIZE(hugetlbsizes));
 	detect_huge_zeropage();

From patchwork Wed Nov 15 13:27:34 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13456684
From: Ryan Roberts
To: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand, Yu Zhao,
 Catalin Marinas, Anshuman Khandual, Yang Shi, "Huang, Ying", Zi Yan,
 Luis Chamberlain, Itaru Kitayama, "Kirill A. Shutemov", John Hubbard,
 David Rientjes, Vlastimil Babka, Hugh Dickins, Kefeng Wang
Cc: Ryan Roberts, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v7 10/10] selftests/mm/cow: Add tests for anonymous small-sized THP
Date: Wed, 15 Nov 2023 13:27:34 +0000
Message-Id: <20231115132734.931023-11-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231115132734.931023-1-ryan.roberts@arm.com>
References: <20231115132734.931023-1-ryan.roberts@arm.com>

Add tests similar to the existing PMD-sized THP tests, but which operate
on memory backed by (PTE-mapped) small-sized THP. This reuses all the
existing infrastructure. If the test suite detects that small-sized THP
is not supported by the kernel, the new tests are skipped.

Signed-off-by: Ryan Roberts
---
 tools/testing/selftests/mm/cow.c | 71 +++++++++++++++++++++++++++++++-
 1 file changed, 70 insertions(+), 1 deletion(-)
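The detection logic below leans on a single fact: a folio of order i spans
pagesize << i bytes, so a bitmask of supported orders maps directly to a
list of sizes. A self-contained illustration of that mapping (the mask
value here is made up; thp_supported_orders() in the selftest harness
reads the real one from the kernel via the thp_settings helpers):

	#include <stdio.h>

	int main(void)
	{
		unsigned long pagesize = 4096;	/* assume 4K base pages */
		unsigned long orders = 0x1c;	/* hypothetical mask: orders 2, 3 and 4 */
		int pmd_order = 9;		/* 2M PMD with 4K base pages */
		int i;

		/* Orders below the PMD order are the small-sized THP candidates. */
		for (i = 0; i < pmd_order; i++) {
			if (orders & (1UL << i))
				printf("order %d -> %lu KiB\n", i, (pagesize << i) / 1024);
		}
		return 0;
	}

With a 4K base page this prints 16 KiB, 32 KiB and 64 KiB; only orders
below the PMD order (9 here) count as small-sized THP.
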
diff --git a/tools/testing/selftests/mm/cow.c b/tools/testing/selftests/mm/cow.c
index d03c453cfd5c..3efc395c7077 100644
--- a/tools/testing/selftests/mm/cow.c
+++ b/tools/testing/selftests/mm/cow.c
@@ -29,15 +29,49 @@
 #include "../../../../mm/gup_test.h"
 #include "../kselftest.h"
 #include "vm_util.h"
+#include "thp_settings.h"
 
 static size_t pagesize;
 static int pagemap_fd;
 static size_t pmdsize;
+static int nr_thpsmallsizes;
+static size_t thpsmallsizes[20];
 static int nr_hugetlbsizes;
 static size_t hugetlbsizes[10];
 static int gup_fd;
 static bool has_huge_zeropage;
 
+static int sz2ord(size_t size)
+{
+	return __builtin_ctzll(size / pagesize);
+}
+
+static int detect_smallthp_sizes(size_t sizes[], int max)
+{
+	int count = 0;
+	unsigned long orders;
+	size_t kb;
+	int i;
+
+	/* thp not supported at all. */
+	if (!pmdsize)
+		return 0;
+
+	orders = thp_supported_orders();
+
+	/* Only interested in small-sized THP (less than PMD-size). */
+	for (i = 0; i < sz2ord(pmdsize); i++) {
+		if (!(orders & (1UL << i)))
+			continue;
+		kb = (pagesize >> 10) << i;
+		sizes[count++] = kb * 1024;
+		ksft_print_msg("[INFO] detected small-sized THP size: %zu KiB\n",
+			       kb);
+	}
+
+	return count;
+}
+
 static void detect_huge_zeropage(void)
 {
 	int fd = open("/sys/kernel/mm/transparent_hugepage/use_zero_page",
@@ -1113,6 +1147,23 @@ static void run_anon_test_case(struct test_case const *test_case)
 		run_with_partial_mremap_thp(test_case->fn, test_case->desc, pmdsize);
 		run_with_partial_shared_thp(test_case->fn, test_case->desc, pmdsize);
 	}
+	for (i = 0; i < nr_thpsmallsizes; i++) {
+		size_t size = thpsmallsizes[i];
+		struct thp_settings settings = *thp_current_settings();
+
+		settings.hugepages[sz2ord(pmdsize)].enabled = THP_NEVER;
+		settings.hugepages[sz2ord(size)].enabled = THP_ALWAYS;
+		thp_push_settings(&settings);
+
+		run_with_pte_mapped_thp(test_case->fn, test_case->desc, size);
+		run_with_pte_mapped_thp_swap(test_case->fn, test_case->desc, size);
+		run_with_single_pte_of_thp(test_case->fn, test_case->desc, size);
+		run_with_single_pte_of_thp_swap(test_case->fn, test_case->desc, size);
+		run_with_partial_mremap_thp(test_case->fn, test_case->desc, size);
+		run_with_partial_shared_thp(test_case->fn, test_case->desc, size);
+
+		thp_pop_settings();
+	}
 	for (i = 0; i < nr_hugetlbsizes; i++)
 		run_with_hugetlb(test_case->fn, test_case->desc,
 				 hugetlbsizes[i]);
@@ -1134,6 +1185,7 @@ static int tests_per_anon_test_case(void)
 
 	if (pmdsize)
 		tests += 8;
+	tests += 6 * nr_thpsmallsizes;
 	return tests;
 }
 
@@ -1691,12 +1743,24 @@ static int tests_per_non_anon_test_case(void)
 int main(int argc, char **argv)
 {
 	int err;
+	struct thp_settings default_settings;
 
 	pagesize = getpagesize();
 	pmdsize = read_pmd_pagesize();
-	if (pmdsize)
+	if (pmdsize) {
+		/* Only if THP is supported. */
+		thp_read_settings(&default_settings);
+		default_settings.hugepages[sz2ord(pmdsize)].enabled = THP_GLOBAL;
+		thp_save_settings();
+		thp_push_settings(&default_settings);
+
 		ksft_print_msg("[INFO] detected PMD-mapped THP size: %zu KiB\n",
 			       pmdsize / 1024);
+
+		nr_thpsmallsizes = detect_smallthp_sizes(thpsmallsizes,
+							 ARRAY_SIZE(thpsmallsizes));
+	}
+
 	nr_hugetlbsizes = detect_hugetlb_page_sizes(hugetlbsizes,
 						    ARRAY_SIZE(hugetlbsizes));
 	detect_huge_zeropage();
@@ -1715,6 +1779,11 @@ int main(int argc, char **argv)
 	run_anon_thp_test_cases();
 	run_non_anon_test_cases();
 
+	if (pmdsize) {
+		/* Only if THP is supported. */
+		thp_restore_settings();
+	}
+
 	err = ksft_get_fail_cnt();
 	if (err)
 		ksft_exit_fail_msg("%d out of %d tests failed\n",