From patchwork Mon Dec 3 13:55:17 2018
X-Patchwork-Id: 10709623
From: Julien Thierry <julien.thierry@arm.com>
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH v2 1/2] uaccess: Check no rescheduling function is called in unsafe region
Date: Mon, 3 Dec 2018 13:55:17 +0000
Message-Id: <1543845318-24543-2-git-send-email-julien.thierry@arm.com>
In-Reply-To: <1543845318-24543-1-git-send-email-julien.thierry@arm.com>
References: <1543845318-24543-1-git-send-email-julien.thierry@arm.com>
Cc: Julien Thierry <julien.thierry@arm.com>, peterz@infradead.org,
 catalin.marinas@arm.com, will.deacon@arm.com, mingo@redhat.com,
 james.morse@arm.com, hpa@zytor.com

While running in a user_access region, rescheduling is not supported.

Add an overridable primitive to indicate whether a user_access region is
active, and check that no such region is active when rescheduling functions
are called.

Also, add a comment clarifying the behaviour of user_access regions.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
---
 include/linux/kernel.h  |  6 ++++--
 include/linux/uaccess.h | 11 +++++++++++
 kernel/sched/core.c     | 19 +++++++++++++++++++
 3 files changed, 34 insertions(+), 2 deletions(-)

I'm not sure these are the best locations for this check, but I was hoping
this patch could start the discussion. Should I move the check? Should I add
a config option to conditionally build those checks?

--
1.9.1
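To make the constraint concrete, here is a minimal, hypothetical sketch (not
part of this patch; the function and variable names are made up) of the kind
of misuse the new checks are meant to flag on an architecture that implements
unsafe_user_region_active() (as the next patch does for arm64): calling
something that may sleep while an unsafe user-access window is still open.

/*
 * Hypothetical misuse, for illustration only -- not from this patch.
 */
static int bad_update_flag(int __user *uptr)
{
	int val = 0;

	if (!access_ok(VERIFY_WRITE, uptr, sizeof(*uptr)))
		return -EFAULT;

	user_access_begin();
	unsafe_get_user(val, uptr, err_fault);

	/*
	 * BUG: this may sleep while the user_access region is still open.
	 * With this series, __might_resched() reports it, and the new
	 * check in schedule_debug() complains if a reschedule actually
	 * happens here.
	 */
	might_sleep();

	unsafe_put_user(val + 1, uptr, err_fault);
	user_access_end();
	return 0;

err_fault:
	user_access_end();
	return -EFAULT;
}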
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index d6aac75..fe0e984 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -237,11 +237,13 @@ struct pt_regs;
 struct user;
 
+extern void __might_resched(const char *file, int line);
 #ifdef CONFIG_PREEMPT_VOLUNTARY
 extern int _cond_resched(void);
-# define might_resched() _cond_resched()
+# define might_resched() \
+	do { __might_resched(__FILE__, __LINE__); _cond_resched(); } while (0)
 #else
-# define might_resched() do { } while (0)
+# define might_resched() __might_resched(__FILE__, __LINE__)
 #endif
 
 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index efe79c1..50adb84 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -266,6 +266,13 @@ static inline unsigned long __copy_from_user_inatomic_nocache(void *to,
 #define probe_kernel_address(addr, retval)		\
 	probe_kernel_read(&retval, addr, sizeof(retval))
 
+/*
+ * user_access_begin() and user_access_end() define a region where
+ * unsafe user accessors can be used.
+ * During execution of this region, no sleeping functions should be called.
+ * Exceptions and interrupts shall exit the user_access region and re-enter it
+ * when returning to the interrupted context.
+ */
 #ifndef user_access_begin
 #define user_access_begin() do { } while (0)
 #define user_access_end() do { } while (0)
@@ -273,6 +280,10 @@ static inline unsigned long __copy_from_user_inatomic_nocache(void *to,
 #define unsafe_put_user(x, ptr, err) do { if (unlikely(__put_user(x, ptr))) goto err; } while (0)
 #endif
 
+#ifndef unsafe_user_region_active
+#define unsafe_user_region_active()	false
+#endif
+
 #ifdef CONFIG_HARDENED_USERCOPY
 void usercopy_warn(const char *name, const char *detail, bool to_user,
 		   unsigned long offset, unsigned long len);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 6fedf3a..03f53c8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3289,6 +3289,13 @@ static inline void schedule_debug(struct task_struct *prev)
 		__schedule_bug(prev);
 		preempt_count_set(PREEMPT_DISABLED);
 	}
+
+	if (unlikely(unsafe_user_region_active())) {
+		printk(KERN_ERR "BUG: scheduling while user_access enabled: %s/%d/0x%08x\n",
+		       prev->comm, prev->pid, preempt_count());
+		dump_stack();
+	}
+
 	rcu_sleep_check();
 
 	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
@@ -6151,6 +6158,18 @@ void ___might_sleep(const char *file, int line, int preempt_offset)
 EXPORT_SYMBOL(___might_sleep);
 #endif
 
+void __might_resched(const char *file, int line)
+{
+	if (!unsafe_user_region_active())
+		return;
+
+	printk(KERN_ERR
+	       "BUG: rescheduling function called from user access context at %s:%d\n",
+	       file, line);
+	dump_stack();
+}
+EXPORT_SYMBOL(__might_resched);
+
 #ifdef CONFIG_MAGIC_SYSRQ
 void normalize_rt_tasks(void)
 {
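Before the arm64 patch below, a purely illustrative sketch of how another
architecture could hook the new primitive. This is not part of the series: it
assumes an x86 kernel with SMAP, where user_access_begin() and
user_access_end() map to the STAC and CLAC instructions, so the EFLAGS.AC bit
indicates an open user-access window. The function name below is made up.

/*
 * Illustrative only -- not part of this series.
 */
#include <asm/cpufeatures.h>		/* X86_FEATURE_SMAP */
#include <asm/cpufeature.h>		/* static_cpu_has() */
#include <asm/processor-flags.h>	/* X86_EFLAGS_AC */
#include <asm/irqflags.h>		/* native_save_fl() */

#define unsafe_user_region_active	x86_unsafe_user_region_active

static inline bool x86_unsafe_user_region_active(void)
{
	/* Without SMAP, AC is not used for kernel uaccess; report inactive. */
	if (!static_cpu_has(X86_FEATURE_SMAP))
		return false;

	return !!(native_save_fl() & X86_EFLAGS_AC);
}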
From patchwork Mon Dec 3 13:55:18 2018
X-Patchwork-Id: 10709619
From: Julien Thierry <julien.thierry@arm.com>
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 2/2] arm64: uaccess: Implement unsafe accessors
Date: Mon, 3 Dec 2018 13:55:18 +0000
Message-Id: <1543845318-24543-3-git-send-email-julien.thierry@arm.com>
In-Reply-To: <1543845318-24543-1-git-send-email-julien.thierry@arm.com>
References: <1543845318-24543-1-git-send-email-julien.thierry@arm.com>
Cc: Julien Thierry <julien.thierry@arm.com>, peterz@infradead.org,
 catalin.marinas@arm.com, will.deacon@arm.com, mingo@redhat.com,
 james.morse@arm.com, hpa@zytor.com

The current implementation of the unsafe accessors falls back to
get_user()/put_user(), which toggle PAN around each access, even though the
caller has indicated that multiple accesses to user memory are about to
happen.

Provide implementations of user_access_begin()/user_access_end() that turn
PAN off/on, and implement unsafe accessors that assume PAN has already been
turned off.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
---
 arch/arm64/include/asm/sysreg.h  |  2 +
 arch/arm64/include/asm/uaccess.h | 89 +++++++++++++++++++++++++++++++---------
 2 files changed, 71 insertions(+), 20 deletions(-)
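As a usage sketch, here is a hypothetical caller (not from this patch; the
struct and function names are made up) showing what the unsafe accessors buy:
several accesses inside a single user_access_begin()/user_access_end() pair,
so PAN is toggled once for the whole sequence instead of once per put_user()
call.

/*
 * Hypothetical caller, for illustration only -- not from this patch.
 */
struct pair {
	u32 first;
	u32 second;
};

static int put_pair_to_user(struct pair __user *up, u32 a, u32 b)
{
	if (!access_ok(VERIFY_WRITE, up, sizeof(*up)))
		return -EFAULT;

	/* PAN is toggled once around both stores below. */
	user_access_begin();
	unsafe_put_user(a, &up->first, err_fault);
	unsafe_put_user(b, &up->second, err_fault);
	user_access_end();
	return 0;

err_fault:
	user_access_end();
	return -EFAULT;
}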
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 842fb95..4e6477b 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -108,6 +108,8 @@
 #define SYS_DC_CSW			sys_insn(1, 0, 7, 10, 2)
 #define SYS_DC_CISW			sys_insn(1, 0, 7, 14, 2)
 
+#define SYS_PSTATE_PAN			sys_reg(3, 0, 4, 2, 3)
+
 #define SYS_OSDTRRX_EL1			sys_reg(2, 0, 0, 0, 2)
 #define SYS_MDCCINT_EL1			sys_reg(2, 0, 0, 2, 0)
 #define SYS_MDSCR_EL1			sys_reg(2, 0, 0, 2, 2)
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 07c3408..cabfcae 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -233,6 +233,23 @@ static inline void uaccess_enable_not_uao(void)
 	__uaccess_enable(ARM64_ALT_PAN_NOT_UAO);
 }
 
+#define unsafe_user_region_active	uaccess_region_active
+static inline bool uaccess_region_active(void)
+{
+	if (system_uses_ttbr0_pan()) {
+		u64 ttbr;
+
+		ttbr = read_sysreg(ttbr1_el1);
+		return ttbr & TTBR_ASID_MASK;
+	} else if (cpus_have_const_cap(ARM64_ALT_PAN_NOT_UAO)) {
+		return (read_sysreg(sctlr_el1) & SCTLR_EL1_SPAN) ?
+			false :
+			!read_sysreg_s(SYS_PSTATE_PAN);
+	}
+
+	return false;
+}
+
 /*
  * Sanitise a uaccess pointer such that it becomes NULL if above the
  * current addr_limit.
@@ -276,11 +293,9 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 	: "+r" (err), "=&r" (x)						\
 	: "r" (addr), "i" (-EFAULT))
 
-#define __get_user_err(x, ptr, err)					\
+#define __get_user_err_unsafe(x, ptr, err)				\
 do {									\
 	unsigned long __gu_val;						\
-	__chk_user_ptr(ptr);						\
-	uaccess_enable_not_uao();					\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
 		__get_user_asm("ldrb", "ldtrb", "%w", __gu_val, (ptr),	\
@@ -301,17 +316,26 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 	default:							\
 		BUILD_BUG();						\
 	}								\
-	uaccess_disable_not_uao();					\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
 } while (0)
 
-#define __get_user_check(x, ptr, err)					\
+#define __get_user_err_check(x, ptr, err)				\
+do {									\
+	__typeof__(x) __gu_dest;					\
+	__chk_user_ptr(ptr);						\
+	uaccess_enable_not_uao();					\
+	__get_user_err_unsafe((__gu_dest), (ptr), (err));		\
+	uaccess_disable_not_uao();					\
+	(x) = __gu_dest;						\
+} while (0)
+
+#define __get_user_err(x, ptr, err, accessor)				\
 ({									\
 	__typeof__(*(ptr)) __user *__p = (ptr);				\
 	might_fault();							\
 	if (access_ok(VERIFY_READ, __p, sizeof(*__p))) {		\
 		__p = uaccess_mask_ptr(__p);				\
-		__get_user_err((x), __p, (err));			\
+		accessor((x), __p, (err));				\
 	} else {							\
 		(x) = 0; (err) = -EFAULT;				\
 	}								\
@@ -319,14 +343,14 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 
 #define __get_user_error(x, ptr, err)					\
 ({									\
-	__get_user_check((x), (ptr), (err));				\
+	__get_user_err((x), (ptr), (err), __get_user_err_check);	\
 	(void)0;							\
 })
 
 #define __get_user(x, ptr)						\
 ({									\
 	int __gu_err = 0;						\
-	__get_user_check((x), (ptr), __gu_err);				\
+	__get_user_err((x), (ptr), __gu_err, __get_user_err_check);	\
 	__gu_err;							\
 })
 
@@ -346,41 +370,46 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 	: "+r" (err)							\
 	: "r" (x), "r" (addr), "i" (-EFAULT))
 
-#define __put_user_err(x, ptr, err)					\
+#define __put_user_err_unsafe(x, ptr, err)				\
 do {									\
-	__typeof__(*(ptr)) __pu_val = (x);				\
-	__chk_user_ptr(ptr);						\
-	uaccess_enable_not_uao();					\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
__put_user_asm("strb", "sttrb", "%w", __pu_val, (ptr), \ + __put_user_asm("strb", "sttrb", "%w", (x), (ptr), \ (err), ARM64_HAS_UAO); \ break; \ case 2: \ - __put_user_asm("strh", "sttrh", "%w", __pu_val, (ptr), \ + __put_user_asm("strh", "sttrh", "%w", (x), (ptr), \ (err), ARM64_HAS_UAO); \ break; \ case 4: \ - __put_user_asm("str", "sttr", "%w", __pu_val, (ptr), \ + __put_user_asm("str", "sttr", "%w", (x), (ptr), \ (err), ARM64_HAS_UAO); \ break; \ case 8: \ - __put_user_asm("str", "sttr", "%x", __pu_val, (ptr), \ + __put_user_asm("str", "sttr", "%x", (x), (ptr), \ (err), ARM64_HAS_UAO); \ break; \ default: \ BUILD_BUG(); \ } \ +} while (0) + +#define __put_user_err_check(x, ptr, err) \ +do { \ + __typeof__(*(ptr)) __pu_val = (x); \ + __chk_user_ptr(ptr); \ + uaccess_enable_not_uao(); \ + __put_user_err_unsafe(__pu_val, (ptr), (err)); \ uaccess_disable_not_uao(); \ } while (0) -#define __put_user_check(x, ptr, err) \ +#define __put_user_err(x, ptr, err, accessor) \ ({ \ __typeof__(*(ptr)) __user *__p = (ptr); \ might_fault(); \ if (access_ok(VERIFY_WRITE, __p, sizeof(*__p))) { \ __p = uaccess_mask_ptr(__p); \ - __put_user_err((x), __p, (err)); \ + accessor((x), __p, (err)); \ } else { \ (err) = -EFAULT; \ } \ @@ -388,19 +417,39 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr) #define __put_user_error(x, ptr, err) \ ({ \ - __put_user_check((x), (ptr), (err)); \ + __put_user_err((x), (ptr), (err), __put_user_err_check); \ (void)0; \ }) #define __put_user(x, ptr) \ ({ \ int __pu_err = 0; \ - __put_user_check((x), (ptr), __pu_err); \ + __put_user_err((x), (ptr), __pu_err, __put_user_err_check); \ __pu_err; \ }) #define put_user __put_user + +#define user_access_begin() uaccess_enable_not_uao() +#define user_access_end() uaccess_disable_not_uao() + +#define unsafe_get_user(x, ptr, err) \ +do { \ + int __gu_err = 0; \ + __get_user_err((x), (ptr), __gu_err, __get_user_err_unsafe); \ + if (__gu_err != 0) \ + goto err; \ +} while (0) + +#define unsafe_put_user(x, ptr, err) \ +do { \ + int __pu_err = 0; \ + __put_user_err((x), (ptr), __pu_err, __put_user_err_unsafe); \ + if (__pu_err != 0) \ + goto err; \ +} while (0) + extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n); #define raw_copy_from_user(to, from, n) \ ({ \