From patchwork Sun Dec 1 01:52:29 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11268291
Date: Sat, 30 Nov 2019 17:52:29 -0800
From: akpm@linux-foundation.org
To: akpm@linux-foundation.org, alex@ghiti.fr, aou@eecs.berkeley.edu,
 ard.biesheuvel@linaro.org, arnd@arndb.de, aryabinin@virtuozzo.com,
 benh@kernel.crashing.org, borntraeger@de.ibm.com, bp@alien8.de, cai@lca.pw,
 catalin.marinas@arm.com, dave.hansen@linux.intel.com, dave.jiang@intel.com,
 davem@davemloft.net, dvyukov@google.com, glider@google.com,
 gor@linux.ibm.com, heiko.carstens@de.ibm.com, hpa@zytor.com,
 james.morse@arm.com, jhogan@kernel.org, kan.liang@linux.intel.com,
 linux-mm@kvack.org, linux@armlinux.org.uk, luto@kernel.org,
 mark.rutland@arm.com, mawilcox@microsoft.com, mingo@elte.hu,
 mm-commits@vger.kernel.org, mpe@ellerman.id.au, n-horiguchi@ah.jp.nec.com,
 palmer@sifive.com, paul.burton@mips.com, paul.walmsley@sifive.com,
 paulus@samba.org, peterz@infradead.org, ralf@linux-mips.org,
 sfr@canb.auug.org.au, shashim@codeaurora.org, steven.price@arm.com,
 tglx@linutronix.de, torvalds@linux-foundation.org, vgupta@synopsys.com,
 will@kernel.org, zong.li@sifive.com
Subject: [patch 057/158] mm: pagewalk: allow walking without vma
Message-ID: <20191201015229.Athbgankp%akpm@linux-foundation.org>

From: Steven Price
Subject: mm: pagewalk: allow walking without vma

Since commit 48684a65b4e3 ("mm: pagewalk: fix misbehavior of walk_page_range
for vma(VM_PFNMAP)"), walk_page_range() reports any kernel area as a hole,
because it lacks a vma.

This means each arch has re-implemented page table walking when needed, for
example in the per-arch ptdump walker.

Remove the requirement to have a vma in the generic code and add a new
function walk_page_range_novma() which ignores the VMAs and simply walks the
page tables.  (A hypothetical usage sketch appears after the patch below.)

[steven.price@arm.com: v15]
  Link: http://lkml.kernel.org/r/20191101140942.51554-13-steven.price@arm.com
[steven.price@arm.com: fix boot crash]
  Link: http://lkml.kernel.org/r/20191028135910.33253-13-steven.price@arm.com
Cc: Zong Li
Cc: Naoya Horiguchi
Cc: Shiraz Hashim
Cc: Albert Ou
Cc: Alexander Potapenko
Cc: Alexandre Ghiti
Cc: Andrey Ryabinin
Cc: Andy Lutomirski
Cc: Ard Biesheuvel
Cc: Arnd Bergmann
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: Catalin Marinas
Cc: Christian Borntraeger
Cc: Dave Hansen
Cc: Dave Jiang
Cc: David S. Miller
Cc: Dmitry Vyukov
Cc: Heiko Carstens
Cc: "H. Peter Anvin"
Peter Anvin" Cc: Ingo Molnar Cc: James Hogan Cc: James Morse Cc: "Liang, Kan" Cc: Mark Rutland Cc: Matthew Wilcox Cc: Michael Ellerman Cc: Palmer Dabbelt Cc: Paul Burton Cc: Paul Mackerras Cc: Paul Walmsley Cc: Peter Zijlstra Cc: Ralf Baechle Cc: Russell King Cc: Thomas Gleixner Cc: Vasily Gorbik Cc: Vineet Gupta Cc: Will Deacon Cc: Qian Cai Cc: Stephen Rothwell Signed-off-by: Andrew Morton --- include/linux/pagewalk.h | 5 ++++ mm/pagewalk.c | 44 ++++++++++++++++++++++++++++++------- 2 files changed, 41 insertions(+), 8 deletions(-) --- a/include/linux/pagewalk.h~mm-pagewalk-allow-walking-without-vma +++ a/include/linux/pagewalk.h @@ -59,6 +59,7 @@ struct mm_walk_ops { * @ops: operation to call during the walk * @mm: mm_struct representing the target process of page table walk * @vma: vma currently walked (NULL if walking outside vmas) + * @no_vma: walk ignoring vmas (vma will always be NULL) * @private: private data for callbacks' usage * * (see the comment on walk_page_range() for more details) @@ -67,12 +68,16 @@ struct mm_walk { const struct mm_walk_ops *ops; struct mm_struct *mm; struct vm_area_struct *vma; + bool no_vma; void *private; }; int walk_page_range(struct mm_struct *mm, unsigned long start, unsigned long end, const struct mm_walk_ops *ops, void *private); +int walk_page_range_novma(struct mm_struct *mm, unsigned long start, + unsigned long end, const struct mm_walk_ops *ops, + void *private); int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops, void *private); int walk_page_mapping(struct address_space *mapping, pgoff_t first_index, --- a/mm/pagewalk.c~mm-pagewalk-allow-walking-without-vma +++ a/mm/pagewalk.c @@ -39,7 +39,7 @@ static int walk_pmd_range(pud_t *pud, un do { again: next = pmd_addr_end(addr, end); - if (pmd_none(*pmd) || !walk->vma) { + if (pmd_none(*pmd) || (!walk->vma && !walk->no_vma)) { if (ops->pte_hole) err = ops->pte_hole(addr, next, walk); if (err) @@ -62,9 +62,14 @@ again: if (!ops->pte_entry) continue; - split_huge_pmd(walk->vma, pmd, addr); - if (pmd_trans_unstable(pmd)) - goto again; + if (walk->vma) { + split_huge_pmd(walk->vma, pmd, addr); + if (pmd_trans_unstable(pmd)) + goto again; + } else if (pmd_leaf(*pmd) || !pmd_present(*pmd)) { + continue; + } + err = walk_pte_range(pmd, addr, next, walk); if (err) break; @@ -85,7 +90,7 @@ static int walk_pud_range(p4d_t *p4d, un do { again: next = pud_addr_end(addr, end); - if (pud_none(*pud) || !walk->vma) { + if (pud_none(*pud) || (!walk->vma && !walk->no_vma)) { if (ops->pte_hole) err = ops->pte_hole(addr, next, walk); if (err) @@ -99,9 +104,13 @@ static int walk_pud_range(p4d_t *p4d, un break; } - split_huge_pud(walk->vma, pud, addr); - if (pud_none(*pud)) - goto again; + if (walk->vma) { + split_huge_pud(walk->vma, pud, addr); + if (pud_none(*pud)) + goto again; + } else if (pud_leaf(*pud) || !pud_present(*pud)) { + continue; + } if (ops->pmd_entry || ops->pte_entry) err = walk_pmd_range(pud, addr, next, walk); @@ -374,6 +383,25 @@ int walk_page_range(struct mm_struct *mm return err; } +int walk_page_range_novma(struct mm_struct *mm, unsigned long start, + unsigned long end, const struct mm_walk_ops *ops, + void *private) +{ + struct mm_walk walk = { + .ops = ops, + .mm = mm, + .private = private, + .no_vma = true + }; + + if (start >= end || !walk.mm) + return -EINVAL; + + lockdep_assert_held(&walk.mm->mmap_sem); + + return __walk_page_range(start, end, &walk); +} + int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops, void *private) {