From patchwork Mon Jan 24 12:56:47 2022
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 12722070
From: Anshuman Khandual <anshuman.khandual@arm.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, hch@infradead.org,
 akpm@linux-foundation.org, Anshuman Khandual <anshuman.khandual@arm.com>,
 Thomas Gleixner, Ingo Molnar
Subject: [RFC V1 10/31] x86/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
Date: Mon, 24 Jan 2022 18:26:47 +0530
Message-Id: <1643029028-12710-11-git-send-email-anshuman.khandual@arm.com>
In-Reply-To: <1643029028-12710-1-git-send-email-anshuman.khandual@arm.com>
References: <1643029028-12710-1-git-send-email-anshuman.khandual@arm.com>

From: Christoph Hellwig <hch@infradead.org>

This defines and exports a platform-specific vm_get_page_prot() by
selecting ARCH_HAS_VM_GET_PAGE_PROT, after which the now-unused __SXXX
and __PXXX macros can be dropped. It also stops selecting
ARCH_HAS_FILTER_PGPROT, as both arch_filter_pgprot() and
arch_vm_get_page_prot() are removed.
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Christoph Hellwig
Signed-off-by: Anshuman Khandual
---
 arch/x86/Kconfig                     |  2 +-
 arch/x86/include/asm/pgtable.h       |  5 --
 arch/x86/include/asm/pgtable_types.h | 19 --------
 arch/x86/include/uapi/asm/mman.h     | 14 ------
 arch/x86/mm/Makefile                 |  2 +-
 arch/x86/mm/mem_encrypt_amd.c        |  4 --
 arch/x86/mm/pgprot.c                 | 71 ++++++++++++++++++++++++++++
 7 files changed, 73 insertions(+), 44 deletions(-)
 create mode 100644 arch/x86/mm/pgprot.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 5f33a10d3dcc..00ab039044ba 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -75,7 +75,6 @@ config X86
 	select ARCH_HAS_EARLY_DEBUG		if KGDB
 	select ARCH_HAS_ELF_RANDOMIZE
 	select ARCH_HAS_FAST_MULTIPLIER
-	select ARCH_HAS_FILTER_PGPROT
 	select ARCH_HAS_FORTIFY_SOURCE
 	select ARCH_HAS_GCOV_PROFILE_ALL
 	select ARCH_HAS_KCOV			if X86_64
@@ -94,6 +93,7 @@ config X86
 	select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
 	select ARCH_HAS_SYSCALL_WRAPPER
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
+	select ARCH_HAS_VM_GET_PAGE_PROT
 	select ARCH_HAS_DEBUG_WX
 	select ARCH_HAS_ZONE_DMA_SET if EXPERT
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 8a9432fb3802..985e1b823691 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -648,11 +648,6 @@ static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
 
 #define canon_pgprot(p) __pgprot(massage_pgprot(p))
 
-static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
-{
-	return canon_pgprot(prot);
-}
-
 static inline int is_new_memtype_allowed(u64 paddr, unsigned long size,
 					 enum page_cache_mode pcm,
 					 enum page_cache_mode new_pcm)
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 40497a9020c6..1a9dd933088e 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -228,25 +228,6 @@ enum page_cache_mode {
 
 #endif	/* __ASSEMBLY__ */
 
-/*          xwr */
-#define __P000	PAGE_NONE
-#define __P001	PAGE_READONLY
-#define __P010	PAGE_COPY
-#define __P011	PAGE_COPY
-#define __P100	PAGE_READONLY_EXEC
-#define __P101	PAGE_READONLY_EXEC
-#define __P110	PAGE_COPY_EXEC
-#define __P111	PAGE_COPY_EXEC
-
-#define __S000	PAGE_NONE
-#define __S001	PAGE_READONLY
-#define __S010	PAGE_SHARED
-#define __S011	PAGE_SHARED
-#define __S100	PAGE_READONLY_EXEC
-#define __S101	PAGE_READONLY_EXEC
-#define __S110	PAGE_SHARED_EXEC
-#define __S111	PAGE_SHARED_EXEC
-
 /*
  * early identity mapping pte attrib macros.
  */
diff --git a/arch/x86/include/uapi/asm/mman.h b/arch/x86/include/uapi/asm/mman.h
index d4a8d0424bfb..775dbd3aff73 100644
--- a/arch/x86/include/uapi/asm/mman.h
+++ b/arch/x86/include/uapi/asm/mman.h
@@ -5,20 +5,6 @@
 #define MAP_32BIT	0x40		/* only give out 32bit addresses */
 
 #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
-/*
- * Take the 4 protection key bits out of the vma->vm_flags
- * value and turn them in to the bits that we can put in
- * to a pte.
- *
- * Only override these if Protection Keys are available
- * (which is only on 64-bit).
- */
-#define arch_vm_get_page_prot(vm_flags)	__pgprot(	\
-		((vm_flags) & VM_PKEY_BIT0 ? _PAGE_PKEY_BIT0 : 0) |	\
-		((vm_flags) & VM_PKEY_BIT1 ? _PAGE_PKEY_BIT1 : 0) |	\
-		((vm_flags) & VM_PKEY_BIT2 ? _PAGE_PKEY_BIT2 : 0) |	\
-		((vm_flags) & VM_PKEY_BIT3 ? _PAGE_PKEY_BIT3 : 0))
-
 #define arch_calc_vm_prot_bits(prot, key) (		\
 		((key) & 0x1 ? VM_PKEY_BIT0 : 0) |	\
 		((key) & 0x2 ? VM_PKEY_BIT1 : 0) |	\
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index fe3d3061fc11..fb6b41a48ae5 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -20,7 +20,7 @@ CFLAGS_REMOVE_mem_encrypt_identity.o = -pg
 endif
 
 obj-y	:= init.o init_$(BITS).o fault.o ioremap.o extable.o mmap.o \
-	   pgtable.o physaddr.o setup_nx.o tlb.o cpu_entry_area.o maccess.o
+	   pgtable.o physaddr.o setup_nx.o tlb.o cpu_entry_area.o maccess.o pgprot.o
 
 obj-y	+= pat/
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index 2b2d018ea345..e0ac16ee08f4 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -188,10 +188,6 @@ void __init sme_early_init(void)
 
 	__supported_pte_mask = __sme_set(__supported_pte_mask);
 
-	/* Update the protection map with memory encryption mask */
-	for (i = 0; i < ARRAY_SIZE(protection_map); i++)
-		protection_map[i] = pgprot_encrypted(protection_map[i]);
-
 	if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
 		swiotlb_force = SWIOTLB_FORCE;
 }
diff --git a/arch/x86/mm/pgprot.c b/arch/x86/mm/pgprot.c
new file mode 100644
index 000000000000..a813adabfe4f
--- /dev/null
+++ b/arch/x86/mm/pgprot.c
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/export.h>
+#include <linux/mm.h>
+#include <asm/mem_encrypt.h>
+
+static inline pgprot_t __vm_get_page_prot(unsigned long vm_flags)
+{
+	switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+	case 0:
+		return PAGE_NONE;
+	case VM_READ:
+		return PAGE_READONLY;
+	case VM_WRITE:
+		return PAGE_COPY;
+	case VM_READ | VM_WRITE:
+		return PAGE_COPY;
+	case VM_EXEC:
+	case VM_EXEC | VM_READ:
+		return PAGE_READONLY_EXEC;
+	case VM_EXEC | VM_WRITE:
+	case VM_EXEC | VM_READ | VM_WRITE:
+		return PAGE_COPY_EXEC;
+	case VM_SHARED:
+		return PAGE_NONE;
+	case VM_SHARED | VM_READ:
+		return PAGE_READONLY;
+	case VM_SHARED | VM_WRITE:
+	case VM_SHARED | VM_READ | VM_WRITE:
+		return PAGE_SHARED;
+	case VM_SHARED | VM_EXEC:
+	case VM_SHARED | VM_READ | VM_EXEC:
+		return PAGE_READONLY_EXEC;
+	case VM_SHARED | VM_WRITE | VM_EXEC:
+	case VM_SHARED | VM_READ | VM_WRITE | VM_EXEC:
+		return PAGE_SHARED_EXEC;
+	default:
+		BUILD_BUG();
+		return PAGE_NONE;
+	}
+}
+
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+	unsigned long val = pgprot_val(__vm_get_page_prot(vm_flags));
+
+#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+	/*
+	 * Take the 4 protection key bits out of the vma->vm_flags value and
+	 * turn them in to the bits that we can put in to a pte.
+	 *
+	 * Only override these if Protection Keys are available (which is only
+	 * on 64-bit).
+	 */
+	if (vm_flags & VM_PKEY_BIT0)
+		val |= _PAGE_PKEY_BIT0;
+	if (vm_flags & VM_PKEY_BIT1)
+		val |= _PAGE_PKEY_BIT1;
+	if (vm_flags & VM_PKEY_BIT2)
+		val |= _PAGE_PKEY_BIT2;
+	if (vm_flags & VM_PKEY_BIT3)
+		val |= _PAGE_PKEY_BIT3;
+#endif
+
+	val = __sme_set(val);
+	if (val & _PAGE_PRESENT)
+		val &= __supported_pte_mask;
+	return __pgprot(val);
+}
+EXPORT_SYMBOL(vm_get_page_prot);
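
For context, the generic vm_get_page_prot() in mm/mmap.c that this arch
override replaces combines the protection_map[] table (populated from the
__P/__S macros removed above) with the two arch hooks this patch drops. A
rough sketch of that generic version, reconstructed here purely for
illustration (not part of the patch):

	pgprot_t vm_get_page_prot(unsigned long vm_flags)
	{
		/* Index the 16-entry table with the low rwx/shared bits ... */
		pgprot_t ret = __pgprot(pgprot_val(protection_map[vm_flags &
					(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)]) |
				/* ... OR in arch extras, e.g. the x86 pkey bits ... */
				pgprot_val(arch_vm_get_page_prot(vm_flags)));

		/* ... then let the arch mask off unsupported bits. */
		return arch_filter_pgprot(ret);
	}

The switch statement in pgprot.c enumerates the same sixteen combinations
the __PXXX/__SXXX tables encoded, which is why both the macros and the two
hooks can go. Since the switch covers every value the four masked bits can
produce, the default case is provably dead code and the BUILD_BUG() there
compiles away.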
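As a usage sketch, a hypothetical, heavily simplified caller (real VMA
setup in mmap_region() does much more):

	/* Hypothetical helper, for illustration only. */
	static void example_set_vma_prot(struct vm_area_struct *vma,
					 unsigned long vm_flags)
	{
		vma->vm_flags = vm_flags;
		/*
		 * E.g. a private writable mapping (VM_READ | VM_WRITE,
		 * no VM_SHARED) resolves to PAGE_COPY: write-protected
		 * in the page tables so the first store faults and
		 * triggers copy-on-write.
		 */
		vma->vm_page_prot = vm_get_page_prot(vm_flags);
	}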