From patchwork Mon Apr 4 03:18:39 2022
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 12799796
From: Anshuman Khandual
To: linux-mm@kvack.org
Cc: Anshuman Khandual, linux-kernel@vger.kernel.org, Christoph Hellwig, Andrew Morton
Subject: [PATCH 1/2] mm/debug_vm_pgtable: Drop protection_map[] usage
Date: Mon, 4 Apr 2022 08:48:39 +0530
Message-Id: <20220404031840.588321-2-anshuman.khandual@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220404031840.588321-1-anshuman.khandual@arm.com>
References: <20220404031840.588321-1-anshuman.khandual@arm.com>

Although protection_map[] contains the platform-defined page protection
map for a given vm_flags combination, vm_get_page_prot() is the right
interface to use. This also reduces the dependency on protection_map[],
which is going to be dropped completely later on.
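
For context, the generic vm_get_page_prot() in mm/mmap.c is still backed by
the same table at this point in the series; a simplified sketch (arch-specific
hooks omitted, relying on the vm_flags definitions from <linux/mm.h>) looks
roughly like this, which is why the test indices can be passed through
unchanged:

/* Simplified sketch of the generic helper; arch-specific hooks omitted. */
pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
	/* Only the four protection related vm_flags bits select an entry. */
	return protection_map[vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];
}
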
Cc: Andrew Morton
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
Reviewed-by: Christoph Hellwig
---
 mm/debug_vm_pgtable.c | 31 +++++++++++++++++++------------
 1 file changed, 19 insertions(+), 12 deletions(-)

diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index db2abd9e415b..30fd11a2ed32 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -93,7 +93,7 @@ struct pgtable_debug_args {
 
 static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx)
 {
-	pgprot_t prot = protection_map[idx];
+	pgprot_t prot = vm_get_page_prot(idx);
 	pte_t pte = pfn_pte(args->fixed_pte_pfn, prot);
 	unsigned long val = idx, *ptr = &val;
 
@@ -101,7 +101,7 @@ static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx)
 
 	/*
 	 * This test needs to be executed after the given page table entry
-	 * is created with pfn_pte() to make sure that protection_map[idx]
+	 * is created with pfn_pte() to make sure that vm_get_page_prot(idx)
 	 * does not have the dirty bit enabled from the beginning. This is
 	 * important for platforms like arm64 where (!PTE_RDONLY) indicate
 	 * dirty bit being set.
@@ -190,7 +190,7 @@ static void __init pte_savedwrite_tests(struct pgtable_debug_args *args)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void __init pmd_basic_tests(struct pgtable_debug_args *args, int idx)
 {
-	pgprot_t prot = protection_map[idx];
+	pgprot_t prot = vm_get_page_prot(idx);
 	unsigned long val = idx, *ptr = &val;
 	pmd_t pmd;
 
@@ -202,7 +202,7 @@ static void __init pmd_basic_tests(struct pgtable_debug_args *args, int idx)
 
 	/*
 	 * This test needs to be executed after the given page table entry
-	 * is created with pfn_pmd() to make sure that protection_map[idx]
+	 * is created with pfn_pmd() to make sure that vm_get_page_prot(idx)
 	 * does not have the dirty bit enabled from the beginning. This is
 	 * important for platforms like arm64 where (!PTE_RDONLY) indicate
 	 * dirty bit being set.
@@ -325,7 +325,7 @@ static void __init pmd_savedwrite_tests(struct pgtable_debug_args *args)
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 static void __init pud_basic_tests(struct pgtable_debug_args *args, int idx)
 {
-	pgprot_t prot = protection_map[idx];
+	pgprot_t prot = vm_get_page_prot(idx);
 	unsigned long val = idx, *ptr = &val;
 	pud_t pud;
 
@@ -337,7 +337,7 @@ static void __init pud_basic_tests(struct pgtable_debug_args *args, int idx)
 
 	/*
 	 * This test needs to be executed after the given page table entry
-	 * is created with pfn_pud() to make sure that protection_map[idx]
+	 * is created with pfn_pud() to make sure that vm_get_page_prot(idx)
 	 * does not have the dirty bit enabled from the beginning. This is
 	 * important for platforms like arm64 where (!PTE_RDONLY) indicate
 	 * dirty bit being set.
@@ -1106,14 +1106,14 @@ static int __init init_args(struct pgtable_debug_args *args)
 	/*
 	 * Initialize the debugging data.
 	 *
-	 * protection_map[0] (or even protection_map[8]) will help create
-	 * page table entries with PROT_NONE permission as required for
-	 * pxx_protnone_tests().
+	 * vm_get_page_prot(VM_NONE) or vm_get_page_prot(VM_SHARED|VM_NONE)
+	 * will help create page table entries with PROT_NONE permission as
+	 * required for pxx_protnone_tests().
 	 */
 	memset(args, 0, sizeof(*args));
 	args->vaddr = get_random_vaddr();
 	args->page_prot = vm_get_page_prot(VMFLAGS);
-	args->page_prot_none = protection_map[0];
+	args->page_prot_none = vm_get_page_prot(VM_NONE);
 	args->is_contiguous_page = false;
 	args->pud_pfn = ULONG_MAX;
 	args->pmd_pfn = ULONG_MAX;
@@ -1248,12 +1248,19 @@ static int __init debug_vm_pgtable(void)
 		return ret;
 
 	/*
-	 * Iterate over the protection_map[] to make sure that all
+	 * Iterate over each possible vm_flags to make sure that all
 	 * the basic page table transformation validations just hold
 	 * true irrespective of the starting protection value for a
 	 * given page table entry.
+	 *
+	 * Protection based vm_flags combinations are always linear
+	 * and increasing i.e. starting from VM_NONE and going up to
+	 * (VM_SHARED | READ | WRITE | EXEC).
 	 */
-	for (idx = 0; idx < ARRAY_SIZE(protection_map); idx++) {
+#define VM_FLAGS_START	(VM_NONE)
+#define VM_FLAGS_END	(VM_SHARED | VM_EXEC | VM_WRITE | VM_READ)
+
+	for (idx = VM_FLAGS_START; idx <= VM_FLAGS_END; idx++) {
 		pte_basic_tests(&args, idx);
 		pmd_basic_tests(&args, idx);
 		pud_basic_tests(&args, idx);

From patchwork Mon Apr 4 03:18:40 2022
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 12799797
From: Anshuman Khandual
To: linux-mm@kvack.org
Cc: Anshuman Khandual, linux-kernel@vger.kernel.org, Christoph Hellwig, Andrew Morton
Subject: [PATCH 2/2] mm/mmap: Clarify protection_map[] indices
Date: Mon, 4 Apr 2022 08:48:40 +0530
Message-Id: <20220404031840.588321-3-anshuman.khandual@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220404031840.588321-1-anshuman.khandual@arm.com>
References: <20220404031840.588321-1-anshuman.khandual@arm.com>

protection_map[] maps vm_flags access combinations into page protection
values, as defined by the platform via the __PXXX and __SXXX macros. The
array indices in protection_map[] represent vm_flags access combinations,
but that is not very intuitive to derive. Make the mapping clear and
explicit with designated initializers.

Cc: Andrew Morton
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Christoph Hellwig
Signed-off-by: Anshuman Khandual
---
 mm/mmap.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 3aa839f81e63..4b8ee3fd03ad 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -102,8 +102,22 @@ static void unmap_region(struct mm_struct *mm,
  *	x: (yes) yes
  */
 pgprot_t protection_map[16] __ro_after_init = {
-	__P000, __P001, __P010, __P011, __P100, __P101, __P110, __P111,
-	__S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111
+	[VM_NONE]					= __P000,
+	[VM_READ]					= __P001,
+	[VM_WRITE]					= __P010,
+	[VM_WRITE | VM_READ]				= __P011,
+	[VM_EXEC]					= __P100,
+	[VM_EXEC | VM_READ]				= __P101,
+	[VM_EXEC | VM_WRITE]				= __P110,
+	[VM_EXEC | VM_WRITE | VM_READ]			= __P111,
+	[VM_SHARED]					= __S000,
+	[VM_SHARED | VM_READ]				= __S001,
+	[VM_SHARED | VM_WRITE]				= __S010,
+	[VM_SHARED | VM_WRITE | VM_READ]		= __S011,
+	[VM_SHARED | VM_EXEC]				= __S100,
+	[VM_SHARED | VM_EXEC | VM_READ]			= __S101,
+	[VM_SHARED | VM_EXEC | VM_WRITE]		= __S110,
+	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= __S111
 };
 
 #ifndef CONFIG_ARCH_HAS_FILTER_PGPROT
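
Illustration only, not part of the patch: the designated initializers land on
exactly the same slots as the old positional layout because VM_READ, VM_WRITE,
VM_EXEC and VM_SHARED occupy the low four bits (0x1, 0x2, 0x4, 0x8) in
include/linux/mm.h, so ORing them together reproduces the old numeric index.
A hypothetical build-time check (check_protection_map_indices() does not exist
in the tree) would look like this:

/*
 * Hypothetical sanity check, for illustration only: each vm_flags
 * combination used as a designated index equals the old positional index.
 */
static void __init check_protection_map_indices(void)
{
	BUILD_BUG_ON(VM_NONE != 0);				/* __P000 slot */
	BUILD_BUG_ON((VM_EXEC | VM_WRITE | VM_READ) != 7);	/* __P111 slot */
	BUILD_BUG_ON(VM_SHARED != 8);				/* __S000 slot */
	BUILD_BUG_ON((VM_SHARED | VM_EXEC | VM_WRITE | VM_READ) != 15);
	BUILD_BUG_ON(ARRAY_SIZE(protection_map) != 16);
}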