| Message ID | 20220627045833.1590055-8-anshuman.khandual@arm.com (mailing list archive) |
|---|---|
| State | Superseded |
| Series | mm/mmap: Drop __SXXX/__PXXX macros from across platforms |
On 27/06/2022 at 06:58, Anshuman Khandual wrote:
> Now that protection_map[] has been moved inside those platforms that enable
> ARCH_HAS_VM_GET_PAGE_PROT, the generic protection_map[] array can be guarded
> with CONFIG_ARCH_HAS_VM_GET_PAGE_PROT instead of __P000.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>

Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>

> ---
>  include/linux/mm.h | 2 +-
>  mm/mmap.c          | 5 +----
>  2 files changed, 2 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 47bfe038d46e..65b7f3d9ff87 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -424,7 +424,7 @@ extern unsigned int kobjsize(const void *objp);
>   * mapping from the currently active vm_flags protection bits (the
>   * low four bits) to a page protection mask..
>   */
> -#ifdef __P000
> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>  extern pgprot_t protection_map[16];
>  #endif
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index b46d5e931bb3..2cc722e162fa 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -81,7 +81,7 @@ static void unmap_region(struct mm_struct *mm,
>  		struct vm_area_struct *vma, struct vm_area_struct *prev,
>  		unsigned long start, unsigned long end);
>
> -#ifdef __P000
> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>  pgprot_t protection_map[16] __ro_after_init = {
>  	[VM_NONE]					= __P000,
>  	[VM_READ]					= __P001,
> @@ -100,9 +100,6 @@ pgprot_t protection_map[16] __ro_after_init = {
>  	[VM_SHARED | VM_EXEC | VM_WRITE]		= __S110,
>  	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= __S111
>  };
> -#endif
> -
> -#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>  DECLARE_VM_GET_PAGE_PROT
>  #endif /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 47bfe038d46e..65b7f3d9ff87 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -424,7 +424,7 @@ extern unsigned int kobjsize(const void *objp);
  * mapping from the currently active vm_flags protection bits (the
  * low four bits) to a page protection mask..
  */
-#ifdef __P000
+#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
 extern pgprot_t protection_map[16];
 #endif

diff --git a/mm/mmap.c b/mm/mmap.c
index b46d5e931bb3..2cc722e162fa 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -81,7 +81,7 @@ static void unmap_region(struct mm_struct *mm,
 		struct vm_area_struct *vma, struct vm_area_struct *prev,
 		unsigned long start, unsigned long end);

-#ifdef __P000
+#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
 pgprot_t protection_map[16] __ro_after_init = {
 	[VM_NONE]					= __P000,
 	[VM_READ]					= __P001,
@@ -100,9 +100,6 @@ pgprot_t protection_map[16] __ro_after_init = {
 	[VM_SHARED | VM_EXEC | VM_WRITE]		= __S110,
 	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= __S111
 };
-#endif
-
-#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
 DECLARE_VM_GET_PAGE_PROT
 #endif /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
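With this change, the generic lookup helper is only emitted when an architecture does not select ARCH_HAS_VM_GET_PAGE_PROT. As a rough sketch (not a verbatim copy of include/linux/pgtable.h), the DECLARE_VM_GET_PAGE_PROT macro kept under the #ifndef above expands to something like:

pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
	/* The low four vm_flags bits index straight into protection_map[]. */
	return protection_map[vm_flags &
			(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];
}
EXPORT_SYMBOL(vm_get_page_prot);

Because the helper is a plain lookup into protection_map[], the table and the helper can now live under the same CONFIG_ARCH_HAS_VM_GET_PAGE_PROT guard.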
Now that protection_map[] has been moved inside those platforms that enable
ARCH_HAS_VM_GET_PAGE_PROT, the generic protection_map[] array can be guarded
with CONFIG_ARCH_HAS_VM_GET_PAGE_PROT instead of __P000.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 include/linux/mm.h | 2 +-
 mm/mmap.c          | 5 +----
 2 files changed, 2 insertions(+), 5 deletions(-)
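For platforms that do select ARCH_HAS_VM_GET_PAGE_PROT, both the table and vm_get_page_prot() are supplied by arch code, so the generic definitions above must be compiled out. The following is a hypothetical, simplified arch-side sketch; the PAGE_* values and the choice to make the table static are illustrative, not taken from any particular architecture:

/* Hypothetical arch code, e.g. somewhere under arch/<arch>/mm/ -- not real. */
static pgprot_t protection_map[16] __ro_after_init = {
	[VM_NONE]					= PAGE_NONE,
	[VM_READ]					= PAGE_READONLY,
	[VM_WRITE]					= PAGE_COPY,
	[VM_WRITE | VM_READ]				= PAGE_COPY,
	[VM_EXEC]					= PAGE_READONLY_EXEC,
	[VM_EXEC | VM_READ]				= PAGE_READONLY_EXEC,
	[VM_EXEC | VM_WRITE]				= PAGE_COPY_EXEC,
	[VM_EXEC | VM_WRITE | VM_READ]			= PAGE_COPY_EXEC,
	[VM_SHARED]					= PAGE_NONE,
	[VM_SHARED | VM_READ]				= PAGE_READONLY,
	[VM_SHARED | VM_WRITE]				= PAGE_SHARED,
	[VM_SHARED | VM_WRITE | VM_READ]		= PAGE_SHARED,
	[VM_SHARED | VM_EXEC]				= PAGE_READONLY_EXEC,
	[VM_SHARED | VM_EXEC | VM_READ]			= PAGE_READONLY_EXEC,
	[VM_SHARED | VM_EXEC | VM_WRITE]		= PAGE_SHARED_EXEC,
	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= PAGE_SHARED_EXEC
};

pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
	pgprot_t prot = protection_map[vm_flags &
			(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];

	/*
	 * An architecture would typically fold extra, arch-specific
	 * permission bits into 'prot' here; needing to do so is the
	 * usual reason for overriding the generic helper at all.
	 */
	return prot;
}
EXPORT_SYMBOL(vm_get_page_prot);

Together with a "select ARCH_HAS_VM_GET_PAGE_PROT" in the architecture's Kconfig, this is what makes the generic table and DECLARE_VM_GET_PAGE_PROT instance in mm/mmap.c drop out of the build.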