From patchwork Mon Feb 21 06:38:18 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 12753128
From: Anshuman Khandual
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, Anshuman Khandual, Christoph Hellwig,
 linux-arch@vger.kernel.org, Russell King, Arnd Bergmann,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH V2 09/30] arm/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
Date: Mon, 21 Feb 2022 12:08:18 +0530
Message-Id: <1645425519-9034-10-git-send-email-anshuman.khandual@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1645425519-9034-1-git-send-email-anshuman.khandual@arm.com>
References: <1645425519-9034-1-git-send-email-anshuman.khandual@arm.com>

This defines and exports a platform-specific custom vm_get_page_prot() by
subscribing to ARCH_HAS_VM_GET_PAGE_PROT. All the __SXXX and __PXXX macros
can then be dropped, as they are no longer needed. The arm-specific
user_pgprot, previously local to build_mem_type_table(), becomes a
file-scope static so that vm_get_page_prot() can fold it into the
returned protection value.
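[ For context: a minimal sketch of the generic fallback this patch
  overrides. When an architecture does not select
  ARCH_HAS_VM_GET_PAGE_PROT, mm/mmap.c derives the protection by
  indexing protection_map[] with the low four vm_flags bits, roughly
  (pre-series generic code, modulo exact formatting):

    pgprot_t vm_get_page_prot(unsigned long vm_flags)
    {
            /* Mask down to READ/WRITE/EXEC/SHARED and index the table. */
            return __pgprot(pgprot_val(protection_map[vm_flags &
                            (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)]));
    }
    EXPORT_SYMBOL(vm_get_page_prot);

  The switch added to arch/arm/mm/mmu.c below encodes the same 16
  combinations explicitly, OR-ing user_pgprot in at lookup time instead
  of baking it into protection_map[] during boot. ]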
Cc: Russell King
Cc: Arnd Bergmann
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
---
 arch/arm/Kconfig                   |  1 +
 arch/arm/include/asm/pgtable.h     | 18 ------------
 arch/arm/lib/uaccess_with_memcpy.c |  2 +-
 arch/arm/mm/mmu.c                  | 44 ++++++++++++++++++++++++++----
 4 files changed, 40 insertions(+), 25 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 4c97cb40eebb..87b2e89ef3d6 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -23,6 +23,7 @@ config ARM
 	select ARCH_HAS_SYNC_DMA_FOR_CPU if SWIOTLB || !MMU
 	select ARCH_HAS_TEARDOWN_DMA_OPS if MMU
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+	select ARCH_HAS_VM_GET_PAGE_PROT
 	select ARCH_HAVE_CUSTOM_GPIO_H
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG if CPU_V7 || CPU_V7M || CPU_V6K
 	select ARCH_HAS_GCOV_PROFILE_ALL
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index cd1f84bb40ae..ec062dd6082a 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -137,24 +137,6 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
  *  2) If we could do execute protection, then read is implied
  *  3) write implies read permissions
  */
-#define __P000	__PAGE_NONE
-#define __P001	__PAGE_READONLY
-#define __P010	__PAGE_COPY
-#define __P011	__PAGE_COPY
-#define __P100	__PAGE_READONLY_EXEC
-#define __P101	__PAGE_READONLY_EXEC
-#define __P110	__PAGE_COPY_EXEC
-#define __P111	__PAGE_COPY_EXEC
-
-#define __S000	__PAGE_NONE
-#define __S001	__PAGE_READONLY
-#define __S010	__PAGE_SHARED
-#define __S011	__PAGE_SHARED
-#define __S100	__PAGE_READONLY_EXEC
-#define __S101	__PAGE_READONLY_EXEC
-#define __S110	__PAGE_SHARED_EXEC
-#define __S111	__PAGE_SHARED_EXEC
-
 #ifndef __ASSEMBLY__
 /*
  * ZERO_PAGE is a global shared page that is always zero: used
diff --git a/arch/arm/lib/uaccess_with_memcpy.c b/arch/arm/lib/uaccess_with_memcpy.c
index 106f83a5ea6d..12d8d9794a28 100644
--- a/arch/arm/lib/uaccess_with_memcpy.c
+++ b/arch/arm/lib/uaccess_with_memcpy.c
@@ -247,7 +247,7 @@ static int __init test_size_treshold(void)
 	if (!dst_page)
 		goto no_dst;
 	kernel_ptr = page_address(src_page);
-	user_ptr = vmap(&dst_page, 1, VM_IOREMAP, __pgprot(__P010));
+	user_ptr = vmap(&dst_page, 1, VM_IOREMAP, __pgprot(__PAGE_COPY));
 	if (!user_ptr)
 		goto no_vmap;
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 274e4f73fd33..9cdf45da57de 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -403,6 +403,8 @@ void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
 	local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE);
 }
 
+static pteval_t user_pgprot;
+
 /*
  * Adjust the PMD section entries according to the CPU in use.
 */
@@ -410,7 +412,7 @@ static void __init build_mem_type_table(void)
 {
 	struct cachepolicy *cp;
 	unsigned int cr = get_cr();
-	pteval_t user_pgprot, kern_pgprot, vecs_pgprot;
+	pteval_t kern_pgprot, vecs_pgprot;
 	int cpu_arch = cpu_architecture();
 	int i;
 
@@ -627,11 +629,6 @@ static void __init build_mem_type_table(void)
 		user_pgprot |= PTE_EXT_PXN;
 #endif
 
-	for (i = 0; i < 16; i++) {
-		pteval_t v = pgprot_val(protection_map[i]);
-		protection_map[i] = __pgprot(v | user_pgprot);
-	}
-
 	mem_types[MT_LOW_VECTORS].prot_pte |= vecs_pgprot;
 	mem_types[MT_HIGH_VECTORS].prot_pte |= vecs_pgprot;
 
@@ -670,6 +667,41 @@ static void __init build_mem_type_table(void)
 	}
 }
 
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+	switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+	case VM_NONE:
+		return __pgprot(pgprot_val(__PAGE_NONE) | user_pgprot);
+	case VM_READ:
+		return __pgprot(pgprot_val(__PAGE_READONLY) | user_pgprot);
+	case VM_WRITE:
+	case VM_WRITE | VM_READ:
+		return __pgprot(pgprot_val(__PAGE_COPY) | user_pgprot);
+	case VM_EXEC:
+	case VM_EXEC | VM_READ:
+		return __pgprot(pgprot_val(__PAGE_READONLY_EXEC) | user_pgprot);
+	case VM_EXEC | VM_WRITE:
+	case VM_EXEC | VM_WRITE | VM_READ:
+		return __pgprot(pgprot_val(__PAGE_COPY_EXEC) | user_pgprot);
+	case VM_SHARED:
+		return __pgprot(pgprot_val(__PAGE_NONE) | user_pgprot);
+	case VM_SHARED | VM_READ:
+		return __pgprot(pgprot_val(__PAGE_READONLY) | user_pgprot);
+	case VM_SHARED | VM_WRITE:
+	case VM_SHARED | VM_WRITE | VM_READ:
+		return __pgprot(pgprot_val(__PAGE_SHARED) | user_pgprot);
+	case VM_SHARED | VM_EXEC:
+	case VM_SHARED | VM_EXEC | VM_READ:
+		return __pgprot(pgprot_val(__PAGE_READONLY_EXEC) | user_pgprot);
+	case VM_SHARED | VM_EXEC | VM_WRITE:
+	case VM_SHARED | VM_EXEC | VM_WRITE | VM_READ:
+		return __pgprot(pgprot_val(__PAGE_SHARED_EXEC) | user_pgprot);
+	default:
+		BUILD_BUG();
+	}
+}
+EXPORT_SYMBOL(vm_get_page_prot);
+
 #ifdef CONFIG_ARM_DMA_MEM_BUFFERABLE
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 			      unsigned long size, pgprot_t vma_prot)
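[ For reference, the hook is consumed by generic mm code whenever a
  VMA's protections are resolved from its flags, e.g. in mm/mmap.c:

    vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);

  so after this patch every user mapping on arm goes through the switch
  above rather than a boot-time adjusted protection_map[]. ]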