From patchwork Sun Dec 1 01:52:34 2019
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11268293
Date: Sat, 30 Nov 2019 17:52:34 -0800
From: akpm@linux-foundation.org
To: akpm@linux-foundation.org, alex@ghiti.fr, aou@eecs.berkeley.edu,
	ard.biesheuvel@linaro.org, arnd@arndb.de, aryabinin@virtuozzo.com,
	benh@kernel.crashing.org, borntraeger@de.ibm.com, bp@alien8.de,
	catalin.marinas@arm.com, dave.hansen@linux.intel.com,
	dave.jiang@intel.com, davem@davemloft.net, dvyukov@google.com,
	glider@google.com, gor@linux.ibm.com, heiko.carstens@de.ibm.com,
	hpa@zytor.com, james.morse@arm.com, jhogan@kernel.org,
	kan.liang@linux.intel.com, linux-mm@kvack.org, linux@armlinux.org.uk,
	luto@kernel.org, mark.rutland@arm.com, mawilcox@microsoft.com,
	mingo@elte.hu, mm-commits@vger.kernel.org, mpe@ellerman.id.au,
	n-horiguchi@ah.jp.nec.com, palmer@sifive.com, paul.burton@mips.com,
	paul.walmsley@sifive.com, paulus@samba.org, peterz@infradead.org,
	ralf@linux-mips.org, shashim@codeaurora.org, steven.price@arm.com,
	tglx@linutronix.de, torvalds@linux-foundation.org,
	vgupta@synopsys.com, will@kernel.org, zong.li@sifive.com
Subject: [patch 058/158] mm: pagewalk: add test_p?d callbacks
Message-ID: <20191201015234.2h9y8PXQL%akpm@linux-foundation.org>

From: Steven Price
Subject: mm: pagewalk: add test_p?d callbacks

It is useful to be able to skip parts of the page table tree even when
walking without VMAs.  Add test_p?d callbacks similar to test_walk but
which are called just before a table at that level is walked.  If the
callback returns non-zero then the entire table is skipped.

Link: http://lkml.kernel.org/r/20191028135910.33253-14-steven.price@arm.com
Signed-off-by: Steven Price
Tested-by: Zong Li
Cc: Albert Ou
Cc: Alexander Potapenko
Cc: Alexandre Ghiti
Cc: Andrey Ryabinin
Cc: Andy Lutomirski
Cc: Ard Biesheuvel
Cc: Arnd Bergmann
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: Catalin Marinas
Cc: Christian Borntraeger
Cc: Dave Hansen
Cc: Dave Jiang
Cc: David S. Miller
Cc: Dmitry Vyukov
Cc: Heiko Carstens
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: James Hogan
Cc: James Morse
Cc: "Liang, Kan"
Cc: Mark Rutland
Cc: Matthew Wilcox
Cc: Michael Ellerman
Cc: Naoya Horiguchi
Cc: Palmer Dabbelt
Cc: Paul Burton
Cc: Paul Mackerras
Cc: Paul Walmsley
Cc: Peter Zijlstra
Cc: Ralf Baechle
Cc: Russell King
Cc: Shiraz Hashim
Cc: Thomas Gleixner
Cc: Vasily Gorbik
Cc: Vineet Gupta
Cc: Will Deacon
Signed-off-by: Andrew Morton
---

 include/linux/pagewalk.h |   11 +++++++++++
 mm/pagewalk.c            |   24 ++++++++++++++++++++++++
 2 files changed, 35 insertions(+)
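Illustration only, not part of the patch: the intended consumer of
these hooks later in this series is the generic ptdump code, which
uses them to recognise the shared KASAN early-shadow tables and skip
them rather than walking the same zero tables over and over.  Below is
a minimal sketch of that usage pattern; the demo_* names are
hypothetical and the real ptdump implementation differs in detail.

#include <linux/kasan.h>
#include <linux/mm.h>
#include <linux/pagewalk.h>

/* Skip a PUD table when it is the shared KASAN early-shadow table. */
static int demo_test_pud(unsigned long addr, unsigned long next,
			 pud_t *pud_start, struct mm_walk *walk)
{
#ifdef CONFIG_KASAN
	if (pud_start == lm_alias(kasan_early_shadow_pud))
		return 1;	/* positive return: skip [addr, next) */
#endif
	return 0;		/* zero: descend into this table */
}

/* Count present PTEs; walk->private points at the counter. */
static int demo_pte_entry(pte_t *pte, unsigned long addr,
			  unsigned long next, struct mm_walk *walk)
{
	unsigned long *count = walk->private;

	if (pte_present(*pte))
		(*count)++;
	return 0;		/* a negative return aborts the walk */
}

static const struct mm_walk_ops demo_ops = {
	.test_pud  = demo_test_pud,
	.pte_entry = demo_pte_entry,
};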
--- a/include/linux/pagewalk.h~mm-pagewalk-add-test_pd-callbacks
+++ a/include/linux/pagewalk.h
@@ -24,6 +24,11 @@ struct mm_walk;
  *			"do page table walk over the current vma", returning
  *			a negative value means "abort current page table walk
  *			right now" and returning 1 means "skip the current vma"
+ * @test_pmd:		similar to test_walk(), but called for every pmd.
+ * @test_pud:		similar to test_walk(), but called for every pud.
+ * @test_p4d:		similar to test_walk(), but called for every p4d.
+ *			Returning 0 means walk this part of the page tables,
+ *			returning 1 means to skip this range.
  * @pre_vma:		if set, called before starting walk on a non-null vma.
  * @post_vma:		if set, called after a walk on a non-null vma, provided
  *			that @pre_vma and the vma walk succeeded.
@@ -47,6 +52,12 @@ struct mm_walk_ops {
 	int (*hugetlb_entry)(pte_t *pte, unsigned long hmask,
 			     unsigned long addr, unsigned long next,
 			     struct mm_walk *walk);
+	int (*test_pmd)(unsigned long addr, unsigned long next,
+			pmd_t *pmd_start, struct mm_walk *walk);
+	int (*test_pud)(unsigned long addr, unsigned long next,
+			pud_t *pud_start, struct mm_walk *walk);
+	int (*test_p4d)(unsigned long addr, unsigned long next,
+			p4d_t *p4d_start, struct mm_walk *walk);
 	int (*test_walk)(unsigned long addr, unsigned long next,
 			 struct mm_walk *walk);
 	int (*pre_vma)(unsigned long start, unsigned long end,
--- a/mm/pagewalk.c~mm-pagewalk-add-test_pd-callbacks
+++ a/mm/pagewalk.c
@@ -35,6 +35,14 @@ static int walk_pmd_range(pud_t *pud, un
 	const struct mm_walk_ops *ops = walk->ops;
 	int err = 0;
 
+	if (ops->test_pmd) {
+		err = ops->test_pmd(addr, end, pmd_offset(pud, 0UL), walk);
+		if (err < 0)
+			return err;
+		if (err > 0)
+			return 0;
+	}
+
 	pmd = pmd_offset(pud, addr);
 	do {
 again:
@@ -86,6 +94,14 @@ static int walk_pud_range(p4d_t *p4d, un
 	const struct mm_walk_ops *ops = walk->ops;
 	int err = 0;
 
+	if (ops->test_pud) {
+		err = ops->test_pud(addr, end, pud_offset(p4d, 0UL), walk);
+		if (err < 0)
+			return err;
+		if (err > 0)
+			return 0;
+	}
+
 	pud = pud_offset(p4d, addr);
 	do {
 again:
@@ -129,6 +145,14 @@ static int walk_p4d_range(pgd_t *pgd, un
 	const struct mm_walk_ops *ops = walk->ops;
 	int err = 0;
 
+	if (ops->test_p4d) {
+		err = ops->test_p4d(addr, end, p4d_offset(pgd, 0UL), walk);
+		if (err < 0)
+			return err;
+		if (err > 0)
+			return 0;
+	}
+
 	p4d = p4d_offset(pgd, addr);
 	do {
 		next = p4d_addr_end(addr, end);
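A closing observation, not part of the changelog: each hook is handed
p?d_offset(..., 0UL), i.e. a pointer to the first entry of the table
(hence the *_start parameter names), not merely the entry covering
@addr.  A callback can therefore examine the whole table before
deciding whether to skip it, as in this hypothetical sketch that skips
fully-empty PMD tables:

/* Hypothetical: skip a PMD table in which no entry is populated. */
static int demo_test_pmd(unsigned long addr, unsigned long next,
			 pmd_t *pmd_start, struct mm_walk *walk)
{
	int i;

	for (i = 0; i < PTRS_PER_PMD; i++)
		if (!pmd_none(pmd_start[i]))
			return 0;	/* populated entry: walk the table */
	return 1;			/* all empty: skip [addr, next) */
}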