From patchwork Wed Jul 6 22:25:21 2016
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9217141
From: Kees Cook <keescook@chromium.org>
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
	Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu,
Miller" , x86@kernel.org, Christoph Lameter , Pekka Enberg , David Rientjes , Joonsoo Kim , Andrew Morton , Andy Lutomirski , Borislav Petkov , Mathias Krause , Jan Kara , Vitaly Wool , Andrea Arcangeli , Dmitry Vyukov , Laura Abbott , linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, kernel-hardening@lists.openwall.com Date: Wed, 6 Jul 2016 15:25:21 -0700 Message-Id: <1467843928-29351-3-git-send-email-keescook@chromium.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1467843928-29351-1-git-send-email-keescook@chromium.org> References: <1467843928-29351-1-git-send-email-keescook@chromium.org> Subject: [kernel-hardening] [PATCH 2/9] x86/uaccess: Enable hardened usercopy X-Virus-Scanned: ClamAV using ClamSMTP Enables CONFIG_HARDENED_USERCOPY checks on x86. This is done both in copy_*_user() and __copy_*_user() because copy_*_user() actually calls down to _copy_*_user() and not __copy_*_user(). Based on code from PaX and grsecurity. Signed-off-by: Kees Cook --- arch/x86/Kconfig | 2 ++ arch/x86/include/asm/uaccess.h | 10 ++++++---- arch/x86/include/asm/uaccess_32.h | 2 ++ arch/x86/include/asm/uaccess_64.h | 2 ++ 4 files changed, 12 insertions(+), 4 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 0a7b885964ba..2a66b73a996d 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -80,11 +80,13 @@ config X86 select HAVE_ALIGNED_STRUCT_PAGE if SLUB select HAVE_AOUT if X86_32 select HAVE_ARCH_AUDITSYSCALL + select HAVE_ARCH_HARDENED_USERCOPY select HAVE_ARCH_HUGE_VMAP if X86_64 || X86_PAE select HAVE_ARCH_JUMP_LABEL select HAVE_ARCH_KASAN if X86_64 && SPARSEMEM_VMEMMAP select HAVE_ARCH_KGDB select HAVE_ARCH_KMEMCHECK + select HAVE_ARCH_LINEAR_KERNEL_MAPPING if X86_64 select HAVE_ARCH_MMAP_RND_BITS if MMU select HAVE_ARCH_MMAP_RND_COMPAT_BITS if MMU && COMPAT select HAVE_ARCH_SECCOMP_FILTER diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h index 2982387ba817..aa9cc58409c6 100644 --- a/arch/x86/include/asm/uaccess.h +++ b/arch/x86/include/asm/uaccess.h @@ -742,9 +742,10 @@ copy_from_user(void *to, const void __user *from, unsigned long n) * case, and do only runtime checking for non-constant sizes. */ - if (likely(sz < 0 || sz >= n)) + if (likely(sz < 0 || sz >= n)) { + check_object_size(to, n, false); n = _copy_from_user(to, from, n); - else if(__builtin_constant_p(n)) + } else if(__builtin_constant_p(n)) copy_from_user_overflow(); else __copy_from_user_overflow(sz, n); @@ -762,9 +763,10 @@ copy_to_user(void __user *to, const void *from, unsigned long n) might_fault(); /* See the comment in copy_from_user() above. 
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 0a7b885964ba..2a66b73a996d 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -80,11 +80,13 @@ config X86
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_AOUT if X86_32
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_HUGE_VMAP if X86_64 || X86_PAE
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN if X86_64 && SPARSEMEM_VMEMMAP
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_KMEMCHECK
+	select HAVE_ARCH_LINEAR_KERNEL_MAPPING if X86_64
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if MMU && COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 2982387ba817..aa9cc58409c6 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -742,9 +742,10 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
 	 * case, and do only runtime checking for non-constant sizes.
 	 */
 
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(to, n, false);
 		n = _copy_from_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if(__builtin_constant_p(n))
 		copy_from_user_overflow();
 	else
 		__copy_from_user_overflow(sz, n);
@@ -762,9 +763,10 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 	might_fault();
 
 	/* See the comment in copy_from_user() above. */
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(from, n, true);
 		n = _copy_to_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if(__builtin_constant_p(n))
 		copy_to_user_overflow();
 	else
 		__copy_to_user_overflow(sz, n);
diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index 4b32da24faaf..7d3bdd1ed697 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -37,6 +37,7 @@ unsigned long __must_check __copy_from_user_ll_nocache_nozero
 static __always_inline unsigned long __must_check
 __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
 {
+	check_object_size(from, n, true);
 	return __copy_to_user_ll(to, from, n);
 }
 
@@ -95,6 +96,7 @@ static __always_inline unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	might_fault();
+	check_object_size(to, n, false);
 	if (__builtin_constant_p(n)) {
 		unsigned long ret;
 
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 2eac2aa3e37f..673059a109fe 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -54,6 +54,7 @@ int __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(dst, size, false);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic(dst, (__force void *)src, size);
 	switch (size) {
@@ -119,6 +120,7 @@ int __copy_to_user_nocheck(void __user *dst, const void *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(src, size, true);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic((__force void *)dst, src, size);
 	switch (size) {
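
To illustrate what these hooks buy, here is a hypothetical example (not
part of this patch; the struct and function names are invented for the
illustration): with CONFIG_HARDENED_USERCOPY enabled, a copy whose
length exceeds the slab object it reads from is refused at runtime
instead of exposing adjacent heap memory to userspace.

#include <linux/slab.h>
#include <linux/uaccess.h>

struct foo {			/* hypothetical small slab object */
	u32 flags;
	char name[64];
};

static long leaky_ioctl(void __user *arg)
{
	struct foo *f = kmalloc(sizeof(*f), GFP_KERNEL);
	long ret;

	if (!f)
		return -ENOMEM;
	/*
	 * Bug: 256 bytes is larger than the slab object "f" was
	 * allocated from. The check_object_size() call added to
	 * copy_to_user() above detects the mismatch and stops the copy
	 * before neighboring heap objects can leak to userspace.
	 */
	ret = copy_to_user(arg, f, 256) ? -EFAULT : 0;
	kfree(f);
	return ret;
}

Without these hooks, the same call would quietly copy whatever happens
to sit after the object; with them, the offending process is killed and
the attempt is reported.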