From patchwork Tue Jun 14 07:16:39 2022
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 12880532
Date: Tue, 14 Jun 2022 01:16:39 -0600
In-Reply-To: <20220614071650.206064-1-yuzhao@google.com>
Message-Id: <20220614071650.206064-3-yuzhao@google.com>
References: <20220614071650.206064-1-yuzhao@google.com>
Subject: [PATCH v12 02/14] mm: x86: add CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG
From: Yu Zhao
To: Andrew Morton
Cc: Andi Kleen, Aneesh Kumar, Catalin Marinas, Dave Hansen, Hillf Danton,
    Jens Axboe, Johannes Weiner, Jonathan Corbet, Linus Torvalds,
    Matthew Wilcox, Mel Gorman, Michael Larabel, Michal Hocko,
    Mike Rapoport, Peter Zijlstra, Tejun Heo, Vlastimil Babka,
    Will Deacon, linux-arm-kernel@lists.infradead.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, x86@kernel.org, page-reclaim@google.com,
    Yu Zhao, Barry Song, Brian Geffon, Jan Alexander Steffens,
    Oleksandr Natalenko, Steven Barrett, Suleiman Souhlal, Daniel Byrne,
    Donald Carr, Holger Hoffstätte, Konstantin Kharlamov, Shuang Zhai,
    Sofia Trinh, Vaibhav Jain

Some architectures support the accessed bit in non-leaf PMD entries, e.g.,
x86 sets the accessed bit in a non-leaf PMD entry when using it as part of
linear address translation [1]. Page table walkers that clear the accessed
bit may use this capability to reduce their search space.

Note that:
1. Although an inline function is preferable, this capability is added as
   a configuration option for consistency with the existing macros.
2. Due to limited interest in other varieties, this capability was tested
   only on Intel and AMD CPUs.

Thanks to the following developers for their efforts [2][3].
Randy Dunlap
Stephen Rothwell

[1] Intel 64 and IA-32 Architectures Software Developer's Manual Volume 3
    (June 2021), section 4.8
[2] https://lore.kernel.org/r/bfdcc7c8-922f-61a9-aa15-7e7250f04af7@infradead.org/
[3] https://lore.kernel.org/r/20220413151513.5a0d7a7e@canb.auug.org.au/

Signed-off-by: Yu Zhao
Reviewed-by: Barry Song
Acked-by: Brian Geffon
Acked-by: Jan Alexander Steffens (heftig)
Acked-by: Oleksandr Natalenko
Acked-by: Steven Barrett
Acked-by: Suleiman Souhlal
Tested-by: Daniel Byrne
Tested-by: Donald Carr
Tested-by: Holger Hoffstätte
Tested-by: Konstantin Kharlamov
Tested-by: Shuang Zhai
Tested-by: Sofia Trinh
Tested-by: Vaibhav Jain
---
 arch/Kconfig                   | 8 ++++++++
 arch/x86/Kconfig               | 1 +
 arch/x86/include/asm/pgtable.h | 3 ++-
 arch/x86/mm/pgtable.c          | 5 ++++-
 include/linux/pgtable.h        | 4 ++--
 5 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index fcf9a41a4ef5..eaeec187bd6a 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1403,6 +1403,14 @@ config DYNAMIC_SIGFRAME
 config HAVE_ARCH_NODE_DEV_GROUP
 	bool
 
+config ARCH_HAS_NONLEAF_PMD_YOUNG
+	bool
+	help
+	  Architectures that select this option are capable of setting the
+	  accessed bit in non-leaf PMD entries when using them as part of linear
+	  address translations. Page table walkers that clear the accessed bit
+	  may use this capability to reduce their search space.
+
 source "kernel/gcov/Kconfig"
 
 source "scripts/gcc-plugins/Kconfig"
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index be0b95e51df6..5715111abe13 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -85,6 +85,7 @@ config X86
 	select ARCH_HAS_PMEM_API		if X86_64
 	select ARCH_HAS_PTE_DEVMAP		if X86_64
 	select ARCH_HAS_PTE_SPECIAL
+	select ARCH_HAS_NONLEAF_PMD_YOUNG	if PGTABLE_LEVELS > 2
 	select ARCH_HAS_UACCESS_FLUSHCACHE	if X86_64
 	select ARCH_HAS_COPY_MC			if X86_64
 	select ARCH_HAS_SET_MEMORY
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index dc5f7d8ef68a..5059799bebe3 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -815,7 +815,8 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 
 static inline int pmd_bad(pmd_t pmd)
 {
-	return (pmd_flags(pmd) & ~_PAGE_USER) != _KERNPG_TABLE;
+	return (pmd_flags(pmd) & ~(_PAGE_USER | _PAGE_ACCESSED)) !=
+	       (_KERNPG_TABLE & ~_PAGE_ACCESSED);
 }
 
 static inline unsigned long pages_to_mb(unsigned long npg)
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index a932d7712d85..8525f2876fb4 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -550,7 +550,7 @@ int ptep_test_and_clear_young(struct vm_area_struct *vma,
 	return ret;
 }
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
 int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 			      unsigned long addr, pmd_t *pmdp)
 {
@@ -562,6 +562,9 @@ int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 
 	return ret;
 }
+#endif
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 int pudp_test_and_clear_young(struct vm_area_struct *vma,
 			      unsigned long addr, pud_t *pudp)
 {
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 8eee31bc9bde..9c57c5cc49c2 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -213,7 +213,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
 static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 					    unsigned long address,
 					    pmd_t *pmdp)
@@ -234,7 +234,7 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 	BUILD_BUG();
 	return 0;
 }
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG */
 #endif
 
 #ifndef __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
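
[Editor's note] For illustration only, here is a minimal sketch (not part of
this patch) of how a page table walker might use the capability added here:
with CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG, a clear accessed bit in a non-leaf
PMD entry means no translation went through it since the bit was last
cleared, so the entire PTE table below it can be skipped. The function
count_young_pages() and its structure are hypothetical; only
pmdp_test_and_clear_young() comes from this patch, and locking and huge
page handling are omitted for brevity.

/*
 * Hypothetical example: count recently used pages in [start, end) of a VMA.
 */
static unsigned long count_young_pages(struct vm_area_struct *vma, pud_t *pud,
				       unsigned long start, unsigned long end)
{
	unsigned long addr, next, young = 0;
	pmd_t *pmd = pmd_offset(pud, start);

	for (addr = start; addr != end; pmd++, addr = next) {
		pte_t *ptep, *pte;
		unsigned long a;

		next = pmd_addr_end(addr, end);

		if (pmd_none(*pmd) || !pmd_present(*pmd))
			continue;

		if (pmd_trans_huge(*pmd))
			continue;	/* huge page handling omitted for brevity */

		/*
		 * With ARCH_HAS_NONLEAF_PMD_YOUNG, a clear accessed bit in
		 * this non-leaf entry means none of the PTEs below it were
		 * used since the last scan: skip all of them at once.
		 */
		if (IS_ENABLED(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG) &&
		    !pmdp_test_and_clear_young(vma, addr, pmd))
			continue;

		/* Otherwise fall back to scanning each PTE below this PMD. */
		ptep = pte_offset_map(pmd, addr);
		for (a = addr, pte = ptep; a != next; a += PAGE_SIZE, pte++) {
			if (pte_present(*pte) &&
			    ptep_test_and_clear_young(vma, a, pte))
				young++;
		}
		pte_unmap(ptep);
	}

	return young;
}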