From patchwork Thu Jun 30 05:16:23 2022
From: Anshuman Khandual <anshuman.khandual@arm.com>
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: hch@infradead.org, christophe.leroy@csgroup.eu,
    Anshuman Khandual <anshuman.khandual@arm.com>,
    linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, x86@kernel.org,
    openrisc@lists.librecores.org, linux-xtensa@linux-xtensa.org,
    linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
    linux-parisc@vger.kernel.org, linux-alpha@vger.kernel.org,
    linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
    linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org,
    linux-m68k@lists.linux-m68k.org, linux-snps-arc@lists.infradead.org,
    linux-arm-kernel@lists.infradead.org, linux-um@lists.infradead.org,
    linux-sh@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH V6 19/26] ia64/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
Date: Thu, 30 Jun 2022 10:46:23 +0530
Message-Id: <20220630051630.1718927-20-anshuman.khandual@arm.com>
In-Reply-To: <20220630051630.1718927-1-anshuman.khandual@arm.com>
References: <20220630051630.1718927-1-anshuman.khandual@arm.com>

This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently,
all the __SXXX and __PXXX macros, which are no longer needed, can be
dropped.

Cc: linux-ia64@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 arch/ia64/Kconfig               |  1 +
 arch/ia64/include/asm/pgtable.h | 18 ------------------
 arch/ia64/mm/init.c             | 28 +++++++++++++++++++++++++++-
 3 files changed, 28 insertions(+), 19 deletions(-)

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index cb93769a9f2a..0510a5737711 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -12,6 +12,7 @@ config IA64
 	select ARCH_HAS_DMA_MARK_CLEAN
 	select ARCH_HAS_STRNCPY_FROM_USER
 	select ARCH_HAS_STRNLEN_USER
+	select ARCH_HAS_VM_GET_PAGE_PROT
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
 	select ACPI
diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h
index 7aa8f2330fb1..6925e28ae61d 100644
--- a/arch/ia64/include/asm/pgtable.h
+++ b/arch/ia64/include/asm/pgtable.h
@@ -161,24 +161,6 @@
  * attempts to write to the page.
  */
 	/* xwr */
-#define __P000	PAGE_NONE
-#define __P001	PAGE_READONLY
-#define __P010	PAGE_READONLY	/* write to priv pg -> copy & make writable */
-#define __P011	PAGE_READONLY	/* ditto */
-#define __P100	__pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_X_RX)
-#define __P101	__pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX)
-#define __P110	PAGE_COPY_EXEC
-#define __P111	PAGE_COPY_EXEC
-
-#define __S000	PAGE_NONE
-#define __S001	PAGE_READONLY
-#define __S010	PAGE_SHARED	/* we don't have (and don't need) write-only */
-#define __S011	PAGE_SHARED
-#define __S100	__pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_X_RX)
-#define __S101	__pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX)
-#define __S110	__pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RWX)
-#define __S111	__pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RWX)
-
 #define pgd_ERROR(e)	printk("%s:%d: bad pgd %016lx.\n", __FILE__, __LINE__, pgd_val(e))
 #if CONFIG_PGTABLE_LEVELS == 4
 #define pud_ERROR(e)	printk("%s:%d: bad pud %016lx.\n", __FILE__, __LINE__, pud_val(e))
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index 855d949d81df..fc4e4217e87f 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -273,7 +273,7 @@ static int __init gate_vma_init(void)
 	gate_vma.vm_start = FIXADDR_USER_START;
 	gate_vma.vm_end = FIXADDR_USER_END;
 	gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC;
-	gate_vma.vm_page_prot = __P101;
+	gate_vma.vm_page_prot = __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX);
 
 	return 0;
 }
@@ -490,3 +490,29 @@ void arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
 	__remove_pages(start_pfn, nr_pages, altmap);
 }
 #endif
+
+static const pgprot_t protection_map[16] = {
+	[VM_NONE]					= PAGE_NONE,
+	[VM_READ]					= PAGE_READONLY,
+	[VM_WRITE]					= PAGE_READONLY,
+	[VM_WRITE | VM_READ]				= PAGE_READONLY,
+	[VM_EXEC]					= __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+								   _PAGE_AR_X_RX),
+	[VM_EXEC | VM_READ]				= __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+								   _PAGE_AR_RX),
+	[VM_EXEC | VM_WRITE]				= PAGE_COPY_EXEC,
+	[VM_EXEC | VM_WRITE | VM_READ]			= PAGE_COPY_EXEC,
+	[VM_SHARED]					= PAGE_NONE,
+	[VM_SHARED | VM_READ]				= PAGE_READONLY,
+	[VM_SHARED | VM_WRITE]				= PAGE_SHARED,
+	[VM_SHARED | VM_WRITE | VM_READ]		= PAGE_SHARED,
+	[VM_SHARED | VM_EXEC]				= __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+								   _PAGE_AR_X_RX),
+	[VM_SHARED | VM_EXEC | VM_READ]			= __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+								   _PAGE_AR_RX),
+	[VM_SHARED | VM_EXEC | VM_WRITE]		= __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+								   _PAGE_AR_RWX),
+	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+								   _PAGE_AR_RWX)
+};
+DECLARE_VM_GET_PAGE_PROT
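
For readers unfamiliar with the helper used above: DECLARE_VM_GET_PAGE_PROT
is the generic macro introduced earlier in this series in the common headers,
and the sketch below only illustrates the expected shape of what it declares
(it is not a verbatim copy of the definition; see the earlier patches in the
series for the exact text). The point is that the exported vm_get_page_prot()
simply indexes the private protection_map[] added in this patch:

pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
	/* Only the four protection-relevant flag bits select the entry. */
	return protection_map[vm_flags &
			(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];
}
EXPORT_SYMBOL(vm_get_page_prot);

With that in place, core mm code resolves VMA protections through the table
above: vm_get_page_prot(VM_READ | VM_WRITE) yields PAGE_READONLY (private
writable mappings stay read-only until copy-on-write), while
vm_get_page_prot(VM_SHARED | VM_WRITE | VM_READ) yields PAGE_SHARED.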