From patchwork Mon Apr 15 09:49:34 2019
X-Patchwork-Submitter: Vincenzo Frascino
X-Patchwork-Id: 10900415
From: Vincenzo Frascino
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 1/4] arm64: compat: Alloc separate pages for vectors and sigpage
Date: Mon, 15 Apr 2019 10:49:34 +0100
Message-Id: <20190415094937.13518-2-vincenzo.frascino@arm.com>
In-Reply-To: <20190415094937.13518-1-vincenzo.frascino@arm.com>
References: <20190415094937.13518-1-vincenzo.frascino@arm.com>
Cc: Mark Rutland, Catalin Marinas, Will Deacon

For AArch32 tasks, we install a special "[vectors]" page that contains
the sigreturn trampolines and kuser helpers, which is mapped at a fixed
address specified by the kuser helpers ABI.

Having the sigreturn trampolines in the same page as the kuser helpers
makes it impossible to disable the kuser helpers independently.

Follow the Arm implementation, by moving the signal trampolines out of
the "[vectors]" page and into their own "[sigpage]".

Cc: Catalin Marinas
Cc: Will Deacon
Signed-off-by: Vincenzo Frascino
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/elf.h      |   6 +-
 arch/arm64/include/asm/signal32.h |   2 -
 arch/arm64/kernel/signal32.c      |   5 +-
 arch/arm64/kernel/vdso.c          | 127 +++++++++++++++++++++++-------
 4 files changed, 106 insertions(+), 34 deletions(-)

diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
index 6adc1a90e7e6..355d120b78cb 100644
--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -214,10 +214,10 @@ typedef compat_elf_greg_t compat_elf_gregset_t[COMPAT_ELF_NGREG];
         set_thread_flag(TIF_32BIT);                                     \
 })
 #define COMPAT_ARCH_DLINFO
-extern int aarch32_setup_vectors_page(struct linux_binprm *bprm,
-                                      int uses_interp);
+extern int aarch32_setup_additional_pages(struct linux_binprm *bprm,
+                                          int uses_interp);
 #define compat_arch_setup_additional_pages \
-        aarch32_setup_vectors_page
+        aarch32_setup_additional_pages
 
 #endif /* CONFIG_COMPAT */
 
diff --git a/arch/arm64/include/asm/signal32.h b/arch/arm64/include/asm/signal32.h
index 81abea0b7650..58e288aaf0ba 100644
--- a/arch/arm64/include/asm/signal32.h
+++ b/arch/arm64/include/asm/signal32.h
@@ -20,8 +20,6 @@
 #ifdef CONFIG_COMPAT
 #include <linux/compat.h>
 
-#define AARCH32_KERN_SIGRET_CODE_OFFSET 0x500
-
 int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set,
                        struct pt_regs *regs);
 int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
index cb7800acd19f..3846a1b710b5 100644
--- a/arch/arm64/kernel/signal32.c
+++ b/arch/arm64/kernel/signal32.c
@@ -379,6 +379,7 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
         compat_ulong_t retcode;
         compat_ulong_t spsr = regs->pstate & ~(PSR_f | PSR_AA32_E_BIT);
         int thumb;
+        void *sigreturn_base;
 
         /* Check if the handler is written for ARM or Thumb */
         thumb = handler & 1;
@@ -399,12 +400,12 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
         } else {
                 /* Set up sigreturn pointer */
                 unsigned int idx = thumb << 1;
+                sigreturn_base = current->mm->context.vdso;
 
                 if (ka->sa.sa_flags & SA_SIGINFO)
                         idx += 3;
 
-                retcode = AARCH32_VECTORS_BASE +
-                          AARCH32_KERN_SIGRET_CODE_OFFSET +
+                retcode = ptr_to_compat(sigreturn_base) +
                           (idx << 2) + thumb;
         }
 
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 2d419006ad43..79fd7a65ae55 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -1,5 +1,7 @@
 /*
- * VDSO implementation for AArch64 and vector page setup for AArch32.
+ * VDSO implementation for AArch64 and for AArch32:
+ * AArch64: vDSO implementation contains pages setup and data page update.
+ * AArch32: vDSO implementation contains sigreturn and kuser pages setup.
  *
  * Copyright (C) 2012 ARM Limited
  *
@@ -53,61 +55,132 @@ struct vdso_data *vdso_data = &vdso_data_store.data;
 /*
  * Create and map the vectors page for AArch32 tasks.
  */
-static struct page *vectors_page[1] __ro_after_init;
+/*
+ * aarch32_vdso_pages:
+ * 0 - kuser helpers
+ * 1 - sigreturn code
+ */
+#define C_VECTORS       0
+#define C_SIGPAGE       1
+#define C_PAGES         (C_SIGPAGE + 1)
+static struct page *aarch32_vdso_pages[C_PAGES] __ro_after_init;
+static const struct vm_special_mapping aarch32_vdso_spec[C_PAGES] = {
+        {
+                /* Must be named [vectors] for compatibility with arm. */
+                .name = "[vectors]",
+                .pages = &aarch32_vdso_pages[C_VECTORS],
+        },
+        {
+                /* Must be named [sigpage] for compatibility with arm. */
+                .name = "[sigpage]",
+                .pages = &aarch32_vdso_pages[C_SIGPAGE],
+        },
+};
 
-static int __init alloc_vectors_page(void)
+static int __init aarch32_alloc_vdso_pages(void)
 {
         extern char __kuser_helper_start[], __kuser_helper_end[];
         extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
 
         int kuser_sz = __kuser_helper_end - __kuser_helper_start;
         int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start;
-        unsigned long vpage;
+        unsigned long vdso_pages[C_PAGES];
 
-        vpage = get_zeroed_page(GFP_ATOMIC);
+        vdso_pages[C_VECTORS] = get_zeroed_page(GFP_ATOMIC);
+        if (!vdso_pages[C_VECTORS])
+                return -ENOMEM;
 
-        if (!vpage)
+        vdso_pages[C_SIGPAGE] = get_zeroed_page(GFP_ATOMIC);
+        if (!vdso_pages[C_SIGPAGE]) {
+                /*
+                 * free_page() is required to avoid leaking the vectors page
+                 * if the sigpage allocation fails.
+                 */
+                free_page(vdso_pages[C_VECTORS]);
                 return -ENOMEM;
+        }
 
         /* kuser helpers */
-        memcpy((void *)vpage + 0x1000 - kuser_sz, __kuser_helper_start,
-                kuser_sz);
+        memcpy((void *)(vdso_pages[C_VECTORS] + 0x1000 - kuser_sz),
+               __kuser_helper_start,
+               kuser_sz);
 
         /* sigreturn code */
-        memcpy((void *)vpage + AARCH32_KERN_SIGRET_CODE_OFFSET,
-               __aarch32_sigret_code_start, sigret_sz);
+        memcpy((void *)vdso_pages[C_SIGPAGE],
+               __aarch32_sigret_code_start,
+               sigret_sz);
+
+        flush_icache_range(vdso_pages[C_VECTORS],
+                           vdso_pages[C_VECTORS] + PAGE_SIZE);
+        flush_icache_range(vdso_pages[C_SIGPAGE],
+                           vdso_pages[C_SIGPAGE] + PAGE_SIZE);
 
-        flush_icache_range(vpage, vpage + PAGE_SIZE);
-        vectors_page[0] = virt_to_page(vpage);
+        aarch32_vdso_pages[C_VECTORS] = virt_to_page(vdso_pages[C_VECTORS]);
+        aarch32_vdso_pages[C_SIGPAGE] = virt_to_page(vdso_pages[C_SIGPAGE]);
 
         return 0;
 }
-arch_initcall(alloc_vectors_page);
+arch_initcall(aarch32_alloc_vdso_pages);
 
-int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
+static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
 {
-        struct mm_struct *mm = current->mm;
-        unsigned long addr = AARCH32_VECTORS_BASE;
-        static const struct vm_special_mapping spec = {
-                .name = "[vectors]",
-                .pages = vectors_page,
+        void *ret;
+
+        /* The kuser helpers must be mapped at the ABI-defined high address */
+        ret = _install_special_mapping(mm, AARCH32_VECTORS_BASE, PAGE_SIZE,
+                                       VM_READ | VM_EXEC |
+                                       VM_MAYREAD | VM_MAYEXEC,
+                                       &aarch32_vdso_spec[C_VECTORS]);
+
+        return PTR_ERR_OR_ZERO(ret);
+}
 
-        };
+static int aarch32_sigreturn_setup(struct mm_struct *mm)
+{
+        unsigned long addr;
         void *ret;
 
-        if (down_write_killable(&mm->mmap_sem))
-                return -EINTR;
-        current->mm->context.vdso = (void *)addr;
+        addr = get_unmapped_area(NULL, 0, PAGE_SIZE, 0, 0);
+        if (IS_ERR_VALUE(addr)) {
+                ret = ERR_PTR(addr);
+                goto out;
+        }
 
-        /* Map vectors page at the high address. */
+        /*
+         * VM_MAYWRITE is required to allow gdb to Copy-on-Write and
+         * set breakpoints.
+         */
         ret = _install_special_mapping(mm, addr, PAGE_SIZE,
-                                       VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC,
-                                       &spec);
+                                       VM_READ | VM_EXEC | VM_MAYREAD |
+                                       VM_MAYWRITE | VM_MAYEXEC,
+                                       &aarch32_vdso_spec[C_SIGPAGE]);
+        if (IS_ERR(ret))
+                goto out;
 
-        up_write(&mm->mmap_sem);
+        mm->context.vdso = (void *)addr;
 
+out:
         return PTR_ERR_OR_ZERO(ret);
 }
+
+int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+{
+        struct mm_struct *mm = current->mm;
+        int ret;
+
+        if (down_write_killable(&mm->mmap_sem))
+                return -EINTR;
+
+        ret = aarch32_kuser_helpers_setup(mm);
+        if (ret)
+                goto out;
+
+        ret = aarch32_sigreturn_setup(mm);
+
+out:
+        up_write(&mm->mmap_sem);
+        return ret;
+}
 
 #endif /* CONFIG_COMPAT */
 
 static int vdso_mremap(const struct vm_special_mapping *sm,
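As an illustration of the result of this patch (not part of the series
itself): a compat task should now carry two distinct special mappings, the
ABI-fixed "[vectors]" page at 0xffff0000 and a separately allocated,
mmap-placed "[sigpage]". A minimal userspace sketch that makes the split
visible by scanning /proc/self/maps; it assumes only a mounted procfs and
a 32-bit build of the program:

#include <stdio.h>
#include <string.h>

int main(void)
{
        FILE *maps = fopen("/proc/self/maps", "r");
        char line[512];

        if (!maps) {
                perror("fopen");
                return 1;
        }

        /* Print only the AArch32 compat special mappings. */
        while (fgets(line, sizeof(line), maps)) {
                if (strstr(line, "[vectors]") || strstr(line, "[sigpage]"))
                        fputs(line, stdout);
        }

        fclose(maps);
        return 0;
}

Run as an AArch32 binary on a kernel with this patch applied, the
expectation is one output line per mapping; before the patch only
"[vectors]" appears.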
From patchwork Mon Apr 15 09:49:35 2019
X-Patchwork-Submitter: Vincenzo Frascino
X-Patchwork-Id: 10900413
From: Vincenzo Frascino
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 2/4] arm64: compat: Split kuser32
Date: Mon, 15 Apr 2019 10:49:35 +0100
Message-Id: <20190415094937.13518-3-vincenzo.frascino@arm.com>
In-Reply-To: <20190415094937.13518-1-vincenzo.frascino@arm.com>
References: <20190415094937.13518-1-vincenzo.frascino@arm.com>
Cc: Mark Rutland, Catalin Marinas, Will Deacon

To make it possible to disable the kuser helpers in aarch32, we need to
separate the kuser and the sigreturn functionality.

Split the current kuser32 into kuser32 (for the kuser helpers) and
sigreturn32 (for the sigreturn helpers).

Cc: Catalin Marinas
Cc: Will Deacon
Signed-off-by: Vincenzo Frascino
Reviewed-by: Catalin Marinas
---
 arch/arm64/kernel/Makefile      |  2 +-
 arch/arm64/kernel/kuser32.S     | 59 ++-------------------------------
 arch/arm64/kernel/sigreturn32.S | 46 +++++++++++++++++++++++++
 3 files changed, 50 insertions(+), 57 deletions(-)
 create mode 100644 arch/arm64/kernel/sigreturn32.S

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index cd434d0719c1..50f76b88a967 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -28,7 +28,7 @@ $(obj)/%.stub.o: $(obj)/%.o FORCE
         $(call if_changed,objcopy)
 
 obj-$(CONFIG_COMPAT)                    += sys32.o kuser32.o signal32.o        \
-                                           sys_compat.o
+                                           sigreturn32.o sys_compat.o
 obj-$(CONFIG_FUNCTION_TRACER)           += ftrace.o entry-ftrace.o
 obj-$(CONFIG_MODULES)                   += module.o
 obj-$(CONFIG_ARM64_MODULE_PLTS)         += module-plts.o
diff --git a/arch/arm64/kernel/kuser32.S b/arch/arm64/kernel/kuser32.S
index 997e6b27ff6a..c5f2bbafd723 100644
--- a/arch/arm64/kernel/kuser32.S
+++ b/arch/arm64/kernel/kuser32.S
@@ -1,24 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
- * Low-level user helpers placed in the vectors page for AArch32.
+ * AArch32 user helpers.
  * Based on the kuser helpers in arch/arm/kernel/entry-armv.S.
  *
  * Copyright (C) 2005-2011 Nicolas Pitre
- * Copyright (C) 2012 ARM Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program. If not, see <http://www.gnu.org/licenses/>.
- *
- *
- * AArch32 user helpers.
+ * Copyright (C) 2012-2018 ARM Ltd.
  *
  * Each segment is 32-byte aligned and will be moved to the top of the high
  * vector page.  New segments (if ever needed) must be added in front of
@@ -77,42 +63,3 @@ __kuser_helper_version:                 // 0xffff0ffc
         .word   ((__kuser_helper_end - __kuser_helper_start) >> 5)
 
         .globl  __kuser_helper_end
 __kuser_helper_end:
-
-/*
- * AArch32 sigreturn code
- *
- * For ARM syscalls, the syscall number has to be loaded into r7.
- * We do not support an OABI userspace.
- *
- * For Thumb syscalls, we also pass the syscall number via r7. We therefore
- * need two 16-bit instructions.
- */
-        .globl __aarch32_sigret_code_start
-__aarch32_sigret_code_start:
-
-        /*
-         * ARM Code
-         */
-        .byte __NR_compat_sigreturn, 0x70, 0xa0, 0xe3          // mov r7, #__NR_compat_sigreturn
-        .byte __NR_compat_sigreturn, 0x00, 0x00, 0xef          // svc #__NR_compat_sigreturn
-
-        /*
-         * Thumb code
-         */
-        .byte __NR_compat_sigreturn, 0x27                      // svc #__NR_compat_sigreturn
-        .byte __NR_compat_sigreturn, 0xdf                      // mov r7, #__NR_compat_sigreturn
-
-        /*
-         * ARM code
-         */
-        .byte __NR_compat_rt_sigreturn, 0x70, 0xa0, 0xe3       // mov r7, #__NR_compat_rt_sigreturn
-        .byte __NR_compat_rt_sigreturn, 0x00, 0x00, 0xef       // svc #__NR_compat_rt_sigreturn
-
-        /*
-         * Thumb code
-         */
-        .byte __NR_compat_rt_sigreturn, 0x27                   // svc #__NR_compat_rt_sigreturn
-        .byte __NR_compat_rt_sigreturn, 0xdf                   // mov r7, #__NR_compat_rt_sigreturn
-
-        .globl __aarch32_sigret_code_end
-__aarch32_sigret_code_end:
diff --git a/arch/arm64/kernel/sigreturn32.S b/arch/arm64/kernel/sigreturn32.S
new file mode 100644
index 000000000000..475d30d471ac
--- /dev/null
+++ b/arch/arm64/kernel/sigreturn32.S
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * AArch32 sigreturn code.
+ * Based on the kuser helpers in arch/arm/kernel/entry-armv.S.
+ *
+ * Copyright (C) 2005-2011 Nicolas Pitre
+ * Copyright (C) 2012-2018 ARM Ltd.
+ *
+ * For ARM syscalls, the syscall number has to be loaded into r7.
+ * We do not support an OABI userspace.
+ *
+ * For Thumb syscalls, we also pass the syscall number via r7. We therefore
+ * need two 16-bit instructions.
+ */
+
+#include <asm/unistd.h>
+
+        .globl __aarch32_sigret_code_start
+__aarch32_sigret_code_start:
+
+        /*
+         * ARM Code
+         */
+        .byte __NR_compat_sigreturn, 0x70, 0xa0, 0xe3          // mov r7, #__NR_compat_sigreturn
+        .byte __NR_compat_sigreturn, 0x00, 0x00, 0xef          // svc #__NR_compat_sigreturn
+
+        /*
+         * Thumb code
+         */
+        .byte __NR_compat_sigreturn, 0x27                      // svc #__NR_compat_sigreturn
+        .byte __NR_compat_sigreturn, 0xdf                      // mov r7, #__NR_compat_sigreturn
+
+        /*
+         * ARM code
+         */
+        .byte __NR_compat_rt_sigreturn, 0x70, 0xa0, 0xe3       // mov r7, #__NR_compat_rt_sigreturn
+        .byte __NR_compat_rt_sigreturn, 0x00, 0x00, 0xef       // svc #__NR_compat_rt_sigreturn
+
+        /*
+         * Thumb code
+         */
+        .byte __NR_compat_rt_sigreturn, 0x27                   // svc #__NR_compat_rt_sigreturn
+        .byte __NR_compat_rt_sigreturn, 0xdf                   // mov r7, #__NR_compat_rt_sigreturn
+
+        .globl __aarch32_sigret_code_end
+__aarch32_sigret_code_end:
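As background for why the "[vectors]" page must stay at its fixed address
(illustrative only, not part of the series): 32-bit binaries reach the kuser
helpers through hard-coded addresses documented in
Documentation/arm/kernel_user_helpers.txt. A small AArch32-only sketch using
two of the documented entry points; the macro names below are invented for
the example, and the program faults if the page is not mapped:

#include <stdio.h>

/* Addresses taken from Documentation/arm/kernel_user_helpers.txt. */
#define KUSER_VERSION_ADDR      0xffff0ffcUL    /* helper version word */
#define KUSER_DMB_ADDR          0xffff0fa0UL    /* __kuser_memory_barrier */

typedef void (*kuser_dmb_t)(void);

int main(void)
{
        int version = *(volatile int *)KUSER_VERSION_ADDR;

        printf("kuser helper version: %d\n", version);

        /* Issue a memory barrier through the kuser helper entry point. */
        ((kuser_dmb_t)KUSER_DMB_ADDR)();

        return 0;
}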
From patchwork Mon Apr 15 09:49:36 2019
X-Patchwork-Submitter: Vincenzo Frascino
X-Patchwork-Id: 10900419
From: Vincenzo Frascino
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 3/4] arm64: compat: Refactor aarch32_alloc_vdso_pages()
Date: Mon, 15 Apr 2019 10:49:36 +0100
Message-Id: <20190415094937.13518-4-vincenzo.frascino@arm.com>
In-Reply-To: <20190415094937.13518-1-vincenzo.frascino@arm.com>
References: <20190415094937.13518-1-vincenzo.frascino@arm.com>
Cc: Mark Rutland, Catalin Marinas, Will Deacon

aarch32_alloc_vdso_pages() needs to be refactored to make it easier to
disable the kuser helpers.

Divide the function into aarch32_alloc_kuser_vdso_page() and
aarch32_alloc_sigreturn_vdso_page().

Cc: Catalin Marinas
Cc: Will Deacon
Signed-off-by: Vincenzo Frascino
Reviewed-by: Catalin Marinas
---
 arch/arm64/kernel/vdso.c | 73 ++++++++++++++++++++++++----------------
 1 file changed, 48 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 79fd7a65ae55..22e8b039cfe6 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -77,46 +77,69 @@ static const struct vm_special_mapping aarch32_vdso_spec[C_PAGES] = {
         },
 };
 
-static int __init aarch32_alloc_vdso_pages(void)
+static int aarch32_alloc_kuser_vdso_page(void)
 {
         extern char __kuser_helper_start[], __kuser_helper_end[];
-        extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
-
         int kuser_sz = __kuser_helper_end - __kuser_helper_start;
-        int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start;
-        unsigned long vdso_pages[C_PAGES];
+        unsigned long vdso_page;
 
-        vdso_pages[C_VECTORS] = get_zeroed_page(GFP_ATOMIC);
-        if (!vdso_pages[C_VECTORS])
+        vdso_page = get_zeroed_page(GFP_ATOMIC);
+        if (!vdso_page)
                 return -ENOMEM;
 
-        vdso_pages[C_SIGPAGE] = get_zeroed_page(GFP_ATOMIC);
-        if (!vdso_pages[C_SIGPAGE]) {
-                /*
-                 * free_page() is required to avoid leaking the vectors page
-                 * if the sigpage allocation fails.
-                 */
-                free_page(vdso_pages[C_VECTORS]);
-                return -ENOMEM;
-        }
-
         /* kuser helpers */
-        memcpy((void *)(vdso_pages[C_VECTORS] + 0x1000 - kuser_sz),
+        memcpy((void *)(vdso_page + 0x1000 - kuser_sz),
                __kuser_helper_start,
                kuser_sz);
 
+        aarch32_vdso_pages[C_VECTORS] = virt_to_page(vdso_page);
+
+        flush_dcache_page(aarch32_vdso_pages[C_VECTORS]);
+
+        return 0;
+}
+
+static int aarch32_alloc_sigreturn_vdso_page(void)
+{
+        extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
+        int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start;
+        unsigned long vdso_page;
+
+        vdso_page = get_zeroed_page(GFP_ATOMIC);
+        if (!vdso_page)
+                return -ENOMEM;
+
         /* sigreturn code */
-        memcpy((void *)vdso_pages[C_SIGPAGE],
+        memcpy((void *)vdso_page,
                __aarch32_sigret_code_start,
                sigret_sz);
 
-        flush_icache_range(vdso_pages[C_VECTORS],
-                           vdso_pages[C_VECTORS] + PAGE_SIZE);
-        flush_icache_range(vdso_pages[C_SIGPAGE],
-                           vdso_pages[C_SIGPAGE] + PAGE_SIZE);
+        aarch32_vdso_pages[C_SIGPAGE] = virt_to_page(vdso_page);
+
+        flush_dcache_page(aarch32_vdso_pages[C_SIGPAGE]);
+
+        return 0;
+}
+
+static int __init aarch32_alloc_vdso_pages(void)
+{
+        int ret;
+
+        ret = aarch32_alloc_kuser_vdso_page();
+        if (ret)
+                return ret;
 
-        aarch32_vdso_pages[C_VECTORS] = virt_to_page(vdso_pages[C_VECTORS]);
-        aarch32_vdso_pages[C_SIGPAGE] = virt_to_page(vdso_pages[C_SIGPAGE]);
+        ret = aarch32_alloc_sigreturn_vdso_page();
+        if (ret) {
+                unsigned long vectors_addr = (unsigned long)page_to_virt(
+                        aarch32_vdso_pages[C_VECTORS]);
+                /*
+                 * free_page() is required to avoid leaking the vectors page
+                 * if the sigpage allocation fails.
+                 */
+                free_page(vectors_addr);
+                return ret;
+        }
 
         return 0;
 }
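For context on what the sigreturn page set up above actually provides
(illustrative only, not part of the series): when a 32-bit task takes a
signal and the handler was registered without an SA_RESTORER, the kernel
points the handler's return path at one of the trampolines copied into
"[sigpage]". A minimal sketch of that path; note that many C libraries
install their own restorer, in which case the kernel trampoline is not used:

#include <signal.h>
#include <stdio.h>
#include <string.h>

static volatile sig_atomic_t handled;

static void handler(int sig)
{
        handled = 1;
        /* Returning from here may go through the [sigpage] trampoline. */
}

int main(void)
{
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = handler;
        /* No SA_RESTORER requested: the kernel can supply the trampoline. */
        if (sigaction(SIGUSR1, &sa, NULL)) {
                perror("sigaction");
                return 1;
        }

        raise(SIGUSR1);
        printf("signal handled: %d\n", handled);
        return 0;
}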
From patchwork Mon Apr 15 09:49:37 2019
X-Patchwork-Submitter: Vincenzo Frascino
X-Patchwork-Id: 10900417
From: Vincenzo Frascino
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 4/4] arm64: compat: Add KUSER_HELPERS config option
Date: Mon, 15 Apr 2019 10:49:37 +0100
Message-Id: <20190415094937.13518-5-vincenzo.frascino@arm.com>
In-Reply-To: <20190415094937.13518-1-vincenzo.frascino@arm.com>
References: <20190415094937.13518-1-vincenzo.frascino@arm.com>
Cc: Mark Rutland, Catalin Marinas, Will Deacon

When the kuser helpers are enabled, the kernel maps the corresponding
code at a fixed address (0xffff0000). Making it possible to disable
them means that the kernel can remove this mapping, in which case any
access to this memory area results in a segmentation fault.

Add a KUSER_HELPERS config option that removes the mapping when it is
turned off. This option should be turned off only if the applications
on the platform are built specifically for it and make no use of the
kuser helpers code.

Cc: Catalin Marinas
Cc: Will Deacon
Signed-off-by: Vincenzo Frascino
Reviewed-by: Catalin Marinas
---
 arch/arm64/Kconfig          | 28 ++++++++++++++++++++++++++++
 arch/arm64/kernel/Makefile  |  3 ++-
 arch/arm64/kernel/kuser32.S |  7 +++----
 arch/arm64/kernel/vdso.c    | 15 +++++++++++++++
 4 files changed, 48 insertions(+), 5 deletions(-)
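Before the diff, one userspace-facing consequence worth spelling out
(illustrative only, not taken from the patch itself): with
CONFIG_KUSER_HELPERS=n the 0xffff0000 page is simply absent, so a load from
the documented version word faults instead of returning a value. A rough,
deliberately simplistic AArch32 probe for that behaviour using a SIGSEGV
catch (not production-quality detection):

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf env;

static void on_segv(int sig)
{
        siglongjmp(env, 1);
}

int main(void)
{
        signal(SIGSEGV, on_segv);

        if (sigsetjmp(env, 1) == 0) {
                /* 0xffff0ffc is the kuser helper version word. */
                int version = *(volatile int *)0xffff0ffcUL;
                printf("kuser helpers mapped, version %d\n", version);
        } else {
                printf("kuser helpers not mapped (KUSER_HELPERS disabled?)\n");
        }
        return 0;
}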
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7e34b9eba5de..aa28884a2376 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1494,6 +1494,34 @@ config COMPAT
 
           If you want to execute 32-bit userspace applications, say Y.
 
+config KUSER_HELPERS
+        bool "Enable kuser helpers page for 32 bit applications."
+        depends on COMPAT
+        default y
+        help
+          Warning: disabling this option may break 32-bit user programs.
+
+          Provide kuser helpers to compat tasks. The kernel provides
+          helper code to userspace in read only form at a fixed location
+          to allow userspace to be independent of the CPU type fitted to
+          the system. This permits binaries to be run on ARMv4 through
+          to ARMv8 without modification.
+
+          See Documentation/arm/kernel_user_helpers.txt for details.
+
+          However, the fixed address nature of these helpers can be used
+          by ROP (return orientated programming) authors when creating
+          exploits.
+
+          If all of the binaries and libraries which run on your platform
+          are built specifically for your platform, and make no use of
+          these helpers, then you can turn this option off to hinder
+          such exploits. However, in that case, if a binary or library
+          relying on those helpers is run, it will not function correctly.
+
+          Say N here only if you are absolutely certain that you do not
+          need these helpers; otherwise, the safe option is to say Y.
+
 config SYSVIPC_COMPAT
         def_bool y
         depends on COMPAT && SYSVIPC
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 50f76b88a967..c7bd0794855a 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -27,8 +27,9 @@ OBJCOPYFLAGS := --prefix-symbols=__efistub_
 $(obj)/%.stub.o: $(obj)/%.o FORCE
         $(call if_changed,objcopy)
 
-obj-$(CONFIG_COMPAT)                    += sys32.o kuser32.o signal32.o        \
+obj-$(CONFIG_COMPAT)                    += sys32.o signal32.o                  \
                                            sigreturn32.o sys_compat.o
+obj-$(CONFIG_KUSER_HELPERS)             += kuser32.o
 obj-$(CONFIG_FUNCTION_TRACER)           += ftrace.o entry-ftrace.o
 obj-$(CONFIG_MODULES)                   += module.o
 obj-$(CONFIG_ARM64_MODULE_PLTS)         += module-plts.o
diff --git a/arch/arm64/kernel/kuser32.S b/arch/arm64/kernel/kuser32.S
index c5f2bbafd723..49825e9e421e 100644
--- a/arch/arm64/kernel/kuser32.S
+++ b/arch/arm64/kernel/kuser32.S
@@ -6,10 +6,9 @@
  * Copyright (C) 2005-2011 Nicolas Pitre
  * Copyright (C) 2012-2018 ARM Ltd.
  *
- * Each segment is 32-byte aligned and will be moved to the top of the high
- * vector page.  New segments (if ever needed) must be added in front of
- * existing ones. This mechanism should be used only for things that are
- * really small and justified, and not be abused freely.
+ * The kuser helpers below are mapped at a fixed address by
+ * aarch32_setup_additional_pages() and are provided for compatibility
+ * reasons with 32 bit (aarch32) applications that need them.
  *
  * See Documentation/arm/kernel_user_helpers.txt for formal definitions.
  */
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 22e8b039cfe6..86022b29c4b8 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -77,6 +77,7 @@ static const struct vm_special_mapping aarch32_vdso_spec[C_PAGES] = {
         },
 };
 
+#ifdef CONFIG_KUSER_HELPERS
 static int aarch32_alloc_kuser_vdso_page(void)
 {
         extern char __kuser_helper_start[], __kuser_helper_end[];
@@ -98,6 +99,12 @@ static int aarch32_alloc_kuser_vdso_page(void)
 
         return 0;
 }
+#else
+static int aarch32_alloc_kuser_vdso_page(void)
+{
+        return 0;
+}
+#endif /* CONFIG_KUSER_HELPER */
 
 static int aarch32_alloc_sigreturn_vdso_page(void)
 {
@@ -145,6 +152,7 @@ static int __init aarch32_alloc_vdso_pages(void)
 }
 arch_initcall(aarch32_alloc_vdso_pages);
 
+#ifdef CONFIG_KUSER_HELPERS
 static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
 {
         void *ret;
@@ -157,6 +165,13 @@ static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
 
         return PTR_ERR_OR_ZERO(ret);
 }
+#else
+static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
+{
+        /* kuser helpers not enabled */
+        return 0;
+}
+#endif /* CONFIG_KUSER_HELPERS */
 
 static int aarch32_sigreturn_setup(struct mm_struct *mm)
 {