From patchwork Thu Mar 14 09:48:07 2024
Date: Thu, 14 Mar 2024 10:48:07 +0100
In-Reply-To: <20240314094804.3094098-4-ardb+git@google.com>
References: <20240314094804.3094098-4-ardb+git@google.com>
Message-ID: <20240314094804.3094098-6-ardb+git@google.com>
Subject: [PATCH 2/2] arm64: mm: add support for WXN memory translation attribute
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Joey Gouly, Catalin Marinas, Will Deacon, Marc Zyngier,
 Mark Rutland, Ryan Roberts, Anshuman Khandual, Kees Cook

From: Ard Biesheuvel

The AArch64 virtual memory system supports a global WXN control, which
can be enabled to make all writable mappings implicitly no-exec. This is
a useful hardening feature, as it prevents mistakes in managing page
table permissions from being exploited to attack the system.

When enabled at EL1, the restrictions apply to both EL1 and EL0. EL1 is
completely under our control, and has been cleaned up to allow WXN to be
enabled from boot onwards. So as far as EL1 is concerned, there is no
reason not to enable this.
Since there is no practical way to make this restriction apply to EL1
only, or selectively to some user processes and not others, it will
apply to the kernel and all of user space when it is enabled. This
constitutes an ABI change and is therefore not recommended in general.

Note that mmap()/mprotect() calls requesting writable executable
mappings will fail gracefully under this policy, and many software
components and libraries have already been updated to deal with this
limitation, given that hardening schemes such as PaX or SELinux have
been imposing such restrictions for years.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/Kconfig                    | 11 +++++++
 arch/arm64/include/asm/cpufeature.h   |  8 ++++++
 arch/arm64/include/asm/mman.h         | 16 +++++++++++
 arch/arm64/include/asm/mmu_context.h  | 30 +++++++++++++++++++-
 arch/arm64/kernel/pi/idreg-override.c |  4 ++-
 arch/arm64/kernel/pi/map_kernel.c     | 23 +++++++++++++++
 arch/arm64/mm/proc.S                  |  6 ++++
 7 files changed, 96 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 4869265ace2d..24dfd87fab93 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1606,6 +1606,17 @@ config RODATA_FULL_DEFAULT_ENABLED
 	  This requires the linear region to be mapped down to pages,
 	  which may adversely affect performance in some cases.
 
+config ARM64_WXN
+	bool "Enable WXN attribute so all writable mappings are non-exec"
+	help
+	  Set the WXN bit in the SCTLR system register so that all writable
+	  mappings are treated as if the PXN/UXN bit is set as well.
+	  If this is set to Y, it can still be disabled at runtime by
+	  passing 'arm64.nowxn' on the kernel command line.
+
+	  This should only be set if no software needs to be supported that
+	  relies on being able to execute from writable mappings.
+
 config ARM64_SW_TTBR0_PAN
 	bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
 	help

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 6d86ad37c615..66ba0801f7b7 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -18,6 +18,7 @@
 #define ARM64_SW_FEATURE_OVERRIDE_NOKASLR	0
 #define ARM64_SW_FEATURE_OVERRIDE_HVHE		4
 #define ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF	8
+#define ARM64_SW_FEATURE_OVERRIDE_NOWXN		12
 
 #ifndef __ASSEMBLY__
 
@@ -967,6 +968,13 @@ static inline bool kaslr_disabled_cmdline(void)
 	return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_NOKASLR);
 }
 
+static inline bool arm64_wxn_enabled(void)
+{
+	if (!IS_ENABLED(CONFIG_ARM64_WXN))
+		return false;
+	return !arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_NOWXN);
+}
+
 u32 get_kvm_ipa_limit(void);
 void dump_cpu_features(void);

diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h
index 5966ee4a6154..20ac42d645c5 100644
--- a/arch/arm64/include/asm/mman.h
+++ b/arch/arm64/include/asm/mman.h
@@ -35,6 +35,22 @@ static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags)
 }
 #define arch_calc_vm_flag_bits(flags) arch_calc_vm_flag_bits(flags)
 
+static inline bool arch_deny_write_exec(void)
+{
+	if (!arm64_wxn_enabled())
+		return false;
+
+	/*
+	 * When we are running with SCTLR_ELx.WXN==1, writable mappings are
+	 * implicitly non-executable. This means we should reject such mappings
+	 * when user space attempts to create them using mmap() or mprotect().
+	 */
+	pr_info_ratelimited("process %s (%d) attempted to create PROT_WRITE+PROT_EXEC mapping\n",
+			    current->comm, current->pid);
+	return true;
+}
+#define arch_deny_write_exec arch_deny_write_exec
+
 static inline bool arch_validate_prot(unsigned long prot,
 	unsigned long addr __always_unused)
 {

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index c768d16b81a4..f0fe2d09d139 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -20,13 +20,41 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
 
 extern bool rodata_full;
 
+static inline int arch_dup_mmap(struct mm_struct *oldmm,
+				struct mm_struct *mm)
+{
+	return 0;
+}
+
+static inline void arch_exit_mmap(struct mm_struct *mm)
+{
+}
+
+static inline void arch_unmap(struct mm_struct *mm,
+			      unsigned long start, unsigned long end)
+{
+}
+
+static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
+					     bool write, bool execute, bool foreign)
+{
+	if (IS_ENABLED(CONFIG_ARM64_WXN) && execute &&
+	    (vma->vm_flags & (VM_WRITE | VM_EXEC)) == (VM_WRITE | VM_EXEC)) {
+		pr_warn_ratelimited(
+			"process %s (%d) attempted to execute from writable memory\n",
+			current->comm, current->pid);
+		/* disallow unless the nowxn override is set */
+		return !arm64_wxn_enabled();
+	}
+	return true;
+}
+
 static inline void contextidr_thread_switch(struct task_struct *next)
 {
 	if (!IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR))

diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c
index aad399796e81..bccfee34f62f 100644
--- a/arch/arm64/kernel/pi/idreg-override.c
+++ b/arch/arm64/kernel/pi/idreg-override.c
@@ -189,6 +189,7 @@ static const struct ftr_set_desc sw_features __prel64_initconst = {
 		FIELD("nokaslr", ARM64_SW_FEATURE_OVERRIDE_NOKASLR, NULL),
 		FIELD("hvhe", ARM64_SW_FEATURE_OVERRIDE_HVHE, hvhe_filter),
 		FIELD("rodataoff", ARM64_SW_FEATURE_OVERRIDE_RODATA_OFF, NULL),
+		FIELD("nowxn",
+		      ARM64_SW_FEATURE_OVERRIDE_NOWXN, NULL),
 		{}
 	},
 };
 
@@ -221,8 +222,9 @@ static const struct {
 	{ "arm64.nomops",	"id_aa64isar2.mops=0" },
 	{ "arm64.nomte",	"id_aa64pfr1.mte=0" },
 	{ "nokaslr",		"arm64_sw.nokaslr=1" },
-	{ "rodata=off",		"arm64_sw.rodataoff=1" },
+	{ "rodata=off",		"arm64_sw.rodataoff=1 arm64_sw.nowxn=1" },
 	{ "arm64.nolva",	"id_aa64mmfr2.varange=0" },
+	{ "arm64.nowxn",	"arm64_sw.nowxn=1" },
 };
 
 static int __init parse_hexdigit(const char *p, u64 *v)

diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c
index 5fa08e13e17e..cac1e1f63c44 100644
--- a/arch/arm64/kernel/pi/map_kernel.c
+++ b/arch/arm64/kernel/pi/map_kernel.c
@@ -132,6 +132,25 @@ static void __init map_kernel(u64 kaslr_offset, u64 va_offset, int root_level)
 	idmap_cpu_replace_ttbr1(swapper_pg_dir);
 }
 
+static void noinline __section(".idmap.text") disable_wxn(void)
+{
+	u64 sctlr = read_sysreg(sctlr_el1) & ~SCTLR_ELx_WXN;
+
+	/*
+	 * We cannot safely clear the WXN bit while the MMU and caches are on,
+	 * so turn the MMU off, flush the TLBs and turn it on again but with
+	 * the WXN bit cleared this time.
+	 */
+	asm("	msr	sctlr_el1, %0	;"
+	    "	isb			;"
+	    "	tlbi	vmalle1		;"
+	    "	dsb	nsh		;"
+	    "	isb			;"
+	    "	msr	sctlr_el1, %1	;"
+	    "	isb			;"
+	    :: "r"(sctlr & ~SCTLR_ELx_M), "r"(sctlr));
+}
+
 static void noinline __section(".idmap.text") set_ttbr0_for_lpa2(u64 ttbr)
 {
 	u64 sctlr = read_sysreg(sctlr_el1);

@@ -229,6 +248,10 @@ asmlinkage void __init early_map_kernel(u64 boot_status, void *fdt)
 	if (va_bits > VA_BITS_MIN)
 		sysreg_clear_set(tcr_el1, TCR_T1SZ_MASK, TCR_T1SZ(va_bits));
 
+	if (IS_ENABLED(CONFIG_ARM64_WXN) &&
+	    arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_NOWXN))
+		disable_wxn();
+
 	/*
 	 * The virtual KASLR displacement modulo 2MiB is decided by the
 	 * physical placement of the image, as otherwise, we might not be able

diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 9d40f3ffd8d2..bfd2ad896108 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -546,6 +546,12 @@ alternative_else_nop_endif
 	 * Prepare SCTLR
 	 */
 	mov_q	x0, INIT_SCTLR_EL1_MMU_ON
+#ifdef CONFIG_ARM64_WXN
+	ldr_l	x1, arm64_sw_feature_override + FTR_OVR_VAL_OFFSET
+	tst	x1, #0xf << ARM64_SW_FEATURE_OVERRIDE_NOWXN
+	orr	x1, x0, #SCTLR_ELx_WXN
+	csel	x0, x0, x1, ne
+#endif
 	ret					// return to head.S

	.unreq	mair