From patchwork Sun Dec 1 01:51:36 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11268269
Date: Sat, 30 Nov 2019 17:51:36 -0800
From: akpm@linux-foundation.org
To: akpm@linux-foundation.org, alex@ghiti.fr, aou@eecs.berkeley.edu,
 ard.biesheuvel@linaro.org, arnd@arndb.de, aryabinin@virtuozzo.com,
 benh@kernel.crashing.org, borntraeger@de.ibm.com, bp@alien8.de,
 catalin.marinas@arm.com, dave.hansen@linux.intel.com, dave.jiang@intel.com,
 davem@davemloft.net, dvyukov@google.com, glider@google.com,
 gor@linux.ibm.com, heiko.carstens@de.ibm.com, hpa@zytor.com,
 james.morse@arm.com, jhogan@kernel.org, kan.liang@linux.intel.com,
 linux-mm@kvack.org, linux@armlinux.org.uk, luto@kernel.org,
 Mark.Rutland@arm.com, mawilcox@microsoft.com, mingo@elte.hu,
 mm-commits@vger.kernel.org, mpe@ellerman.id.au, n-horiguchi@ah.jp.nec.com,
 palmer@sifive.com, paul.burton@mips.com, paul.walmsley@sifive.com,
 paulus@samba.org, peterz@infradead.org, ralf@linux-mips.org,
 shashim@codeaurora.org, steven.price@arm.com, tglx@linutronix.de,
 torvalds@linux-foundation.org, vgupta@synopsys.com, will@kernel.org,
 zong.li@sifive.com
Subject: [patch 046/158] mm: add generic p?d_leaf() macros
Message-ID: <20191201015136.hM2KwyVgH%akpm@linux-foundation.org>

From: Steven Price
Subject: mm: add generic p?d_leaf() macros

Patch series "Generic page walk and ptdump", v15.

Many architectures currently have a debugfs file for dumping the kernel
page tables.  Each architecture has to implement custom functions for this
because the details of walking the page tables used by the kernel differ
between architectures.

This series extends the capabilities of walk_page_range() so that it can
deal with the page tables of the kernel (which have no VMAs and can
contain larger huge pages than exist for user space).  A generic PTDUMP
implementation is then implemented, making use of the new functionality of
walk_page_range(), and finally arm64 and x86 are switched to using it,
removing the custom table walkers.

To enable a generic page table walker to walk the unusual mappings of the
kernel we need to implement a set of functions which let us know when the
walker has reached the leaf entry.  After a suggestion from Will Deacon
I've chosen the name p?d_leaf() as this (hopefully) describes the purpose
(and is a new name so has no historic baggage).  Some architectures have
p?d_large() macros, but these are easily confused with "large pages".

This series ends with a generic PTDUMP implementation for arm64 and x86.
Mostly this is a clean-up and there should be very little functional
change.  The exceptions are:

* arm64 PTDUMP debugfs now displays pages which aren't present (patch 22).

* arm64 has the ability to efficiently process KASAN pages (which
  previously only x86 implemented).  This means that the combination of
  KASAN and DEBUG_WX is now usable.

This patch (of 23):

Exposing the pud/pgd levels of the page tables to walk_page_range() means
we may come across the exotic large mappings that come with large areas of
contiguous memory (such as the kernel's linear map).  For architectures
that don't provide all p?d_leaf() macros, provide generic do-nothing
defaults that are suitable where there cannot be leaf pages at that level.
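For illustration only (this sketch is not part of the patch or of the
series' PTDUMP code, and describe_kernel_mapping() is an invented name): a
walker over kernel addresses could combine the existing generic accessors
with these helpers to stop descending as soon as it reaches a final
mapping.

#include <linux/mm.h>	/* pgd_offset_k(), p?d_offset(), p?d_none() */

/*
 * Illustrative sketch only: descend the kernel page tables for @addr and
 * stop at the first level whose entry is a leaf (a final mapping).  On
 * architectures that don't override a given p?d_leaf(), the generic
 * fallback of 0 simply means "never a leaf at this level".
 */
static void describe_kernel_mapping(unsigned long addr)
{
	pgd_t *pgd = pgd_offset_k(addr);
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;

	if (pgd_none(*pgd) || pgd_leaf(*pgd))
		return;		/* unmapped, or leaf at PGD level */

	p4d = p4d_offset(pgd, addr);
	if (p4d_none(*p4d) || p4d_leaf(*p4d))
		return;		/* unmapped, or leaf at P4D level */

	pud = pud_offset(p4d, addr);
	if (pud_none(*pud) || pud_leaf(*pud))
		return;		/* unmapped, or leaf at PUD level */

	pmd = pmd_offset(pud, addr);
	if (pmd_none(*pmd) || pmd_leaf(*pmd))
		return;		/* unmapped, or leaf at PMD level */

	/* otherwise the mapping, if any, is at the PTE level */
}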
Further patches will add implementations for individual architectures.

The name p?d_leaf() is chosen to minimize confusion with existing uses of
"large" pages and "huge" pages, which do not necessarily mean that the
entry is a leaf (for example it may be a set of contiguous entries that
only take one TLB slot).  For the purpose of walking the page tables we
don't need to know how an entry will be represented in the TLB, but we do
need to know for sure whether it is a leaf of the tree.

Link: http://lkml.kernel.org/r/20191028135910.33253-2-steven.price@arm.com
Signed-off-by: Steven Price
Acked-by: Mark Rutland
Cc: Albert Ou
Cc: Alexandre Ghiti
Cc: Andy Lutomirski
Cc: Ard Biesheuvel
Cc: Arnd Bergmann
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: Catalin Marinas
Cc: Christian Borntraeger
Cc: Dave Hansen
Cc: Heiko Carstens
Cc: "H. Peter Anvin"
Cc: James Hogan
Cc: James Morse
Cc: "Liang, Kan"
Cc: Mark Rutland
Cc: Michael Ellerman
Cc: Palmer Dabbelt
Cc: Paul Burton
Cc: Paul Mackerras
Cc: Paul Walmsley
Cc: Peter Zijlstra
Cc: Ralf Baechle
Cc: Russell King
Cc: Thomas Gleixner
Cc: Vasily Gorbik
Cc: Vineet Gupta
Cc: Will Deacon
Cc: Zong Li
Cc: Alexander Potapenko
Cc: Andrey Ryabinin
Cc: Dave Jiang
Cc: David S. Miller
Cc: Dmitry Vyukov
Cc: Ingo Molnar
Cc: Matthew Wilcox
Cc: Naoya Horiguchi
Cc: Shiraz Hashim
Signed-off-by: Andrew Morton
---

 include/asm-generic/pgtable.h |   20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

--- a/include/asm-generic/pgtable.h~mm-add-generic-pd_leaf-macros
+++ a/include/asm-generic/pgtable.h
@@ -1238,4 +1238,24 @@ static inline bool arch_has_pfn_modify_c
 #define mm_pmd_folded(mm)	__is_defined(__PAGETABLE_PMD_FOLDED)
 #endif
 
+/*
+ * p?d_leaf() - true if this entry is a final mapping to a physical address.
+ * This differs from p?d_huge() by the fact that they are always available (if
+ * the architecture supports large pages at the appropriate level) even
+ * if CONFIG_HUGETLB_PAGE is not defined.
+ * Only meaningful when called on a valid entry.
+ */
+#ifndef pgd_leaf
+#define pgd_leaf(x)	0
+#endif
+#ifndef p4d_leaf
+#define p4d_leaf(x)	0
+#endif
+#ifndef pud_leaf
+#define pud_leaf(x)	0
+#endif
+#ifndef pmd_leaf
+#define pmd_leaf(x)	0
+#endif
+
 #endif /* _ASM_GENERIC_PGTABLE_H */
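
For illustration only (the bit name below is invented, not from any real
architecture): an architecture overrides a given fallback simply by
defining the corresponding macro in its own <asm/pgtable.h> before the
generic header is included; the #ifndef guards above then leave that
definition untouched and only fill in the levels the architecture does not
provide.

/*
 * Hypothetical architecture override (_ARCH_PMD_HUGE is an invented bit
 * name): a PMD entry is a leaf when its huge/block bit is set.  With this
 * in <asm/pgtable.h>, the generic "#ifndef pmd_leaf" fallback is never
 * used on that architecture.
 */
#define pmd_leaf(pmd)	(pmd_val(pmd) & _ARCH_PMD_HUGE)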