From patchwork Wed Sep 20 20:45:08 2017
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9962497
From: Kees Cook
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, David Windsor, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Laura Abbott,
	Ingo Molnar, Mark Rutland, linux-mm@kvack.org,
	linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	netdev@vger.kernel.org, kernel-hardening@lists.openwall.com
Subject: [PATCH v3 02/31] usercopy: Enforce slab cache usercopy region boundaries
Date: Wed, 20 Sep 2017 13:45:08 -0700
Message-Id: <1505940337-79069-3-git-send-email-keescook@chromium.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1505940337-79069-1-git-send-email-keescook@chromium.org>
References: <1505940337-79069-1-git-send-email-keescook@chromium.org>

From: David Windsor

This patch adds the enforcement component of usercopy cache
whitelisting, and is modified from Brad Spengler/PaX Team's
PAX_USERCOPY whitelisting code in the last public patch of
grsecurity/PaX based on my understanding of the code. Changes or
omissions from the original code are mine and don't reflect the
original grsecurity/PaX code.

The SLAB and SLUB allocators are modified to deny all copy operations
in which the kernel heap memory being modified falls outside of the
cache's defined usercopy region.

Signed-off-by: David Windsor
[kees: adjust commit log and comments]
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc: Laura Abbott
Cc: Ingo Molnar
Cc: Mark Rutland
Cc: linux-mm@kvack.org
Cc: linux-xfs@vger.kernel.org
Signed-off-by: Kees Cook
---
 mm/slab.c     | 16 +++++++++++-----
 mm/slub.c     | 18 +++++++++++-------
 mm/usercopy.c | 12 ++++++++++++
 3 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 87b6e5e0cdaf..df268999cf02 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -4408,7 +4408,9 @@ module_init(slab_proc_init);
 
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
- * Rejects objects that are incorrectly sized.
+ * Rejects incorrectly sized objects and objects that are to be copied
+ * to/from userspace but do not fall entirely within the containing slab
+ * cache's usercopy region.
  *
  * Returns NULL if check passes, otherwise const char * to name of cache
  * to indicate an error.
@@ -4428,11 +4430,15 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
 	/* Find offset within object. */
 	offset = ptr - index_to_obj(cachep, page, objnr) -
 		 obj_offset(cachep);
 
-	/* Allow address range falling entirely within object size. */
-	if (offset <= cachep->object_size && n <= cachep->object_size - offset)
-		return NULL;
+	/* Make sure object falls entirely within cache's usercopy region. */
+	if (offset < cachep->useroffset)
+		return cachep->name;
+	if (offset - cachep->useroffset > cachep->usersize)
+		return cachep->name;
+	if (n > cachep->useroffset - offset + cachep->usersize)
+		return cachep->name;
 
-	return cachep->name;
+	return NULL;
 }
 #endif /* CONFIG_HARDENED_USERCOPY */
diff --git a/mm/slub.c b/mm/slub.c
index fae637726c44..bbf73024be3a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3833,7 +3833,9 @@ EXPORT_SYMBOL(__kmalloc_node);
 
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
- * Rejects objects that are incorrectly sized.
+ * Rejects incorrectly sized objects and objects that are to be copied
+ * to/from userspace but do not fall entirely within the containing slab
+ * cache's usercopy region.
  *
  * Returns NULL if check passes, otherwise const char * to name of cache
  * to indicate an error.
@@ -3843,11 +3845,9 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
 {
 	struct kmem_cache *s;
 	unsigned long offset;
-	size_t object_size;
 
 	/* Find object and usable object size. */
 	s = page->slab_cache;
-	object_size = slab_ksize(s);
 
 	/* Reject impossible pointers. */
 	if (ptr < page_address(page))
@@ -3863,11 +3863,15 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
 		offset -= s->red_left_pad;
 	}
 
-	/* Allow address range falling entirely within object size. */
-	if (offset <= object_size && n <= object_size - offset)
-		return NULL;
+	/* Make sure object falls entirely within cache's usercopy region. */
+	if (offset < s->useroffset)
+		return s->name;
+	if (offset - s->useroffset > s->usersize)
+		return s->name;
+	if (n > s->useroffset - offset + s->usersize)
+		return s->name;
 
-	return s->name;
+	return NULL;
 }
 #endif /* CONFIG_HARDENED_USERCOPY */
 
diff --git a/mm/usercopy.c b/mm/usercopy.c
index a9852b24715d..cbffde670c49 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -58,6 +58,18 @@ static noinline int check_stack_object(const void *obj, unsigned long len)
 	return GOOD_STACK;
 }
 
+/*
+ * If this function is reached, then CONFIG_HARDENED_USERCOPY has found an
+ * unexpected state during a copy_from_user() or copy_to_user() call.
+ * There are several checks being performed on the buffer by the
+ * __check_object_size() function. Normal stack buffer usage should never
+ * trip the checks, and kernel text addressing will always trip the check.
+ * For cache objects, it is checking that only the whitelisted range of
+ * bytes for a given cache is being accessed (via the cache's usersize and
+ * useroffset fields). To adjust a cache whitelist, use the usercopy-aware
+ * kmem_cache_create_usercopy() function to create the cache (and
+ * carefully audit the whitelist range).
+ */
 static void report_usercopy(const void *ptr, unsigned long len,
 			    bool to_user, const char *type)
 {
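
The three rejection conditions added to __check_heap_object() above are
equivalent to requiring useroffset <= offset and offset + n <=
useroffset + usersize, with the subtractions ordered so that unsigned
arithmetic cannot wrap below zero. A minimal standalone sketch of the
same arithmetic (hypothetical names, userspace code for illustration,
not part of the patch):

#include <stdio.h>

/*
 * Illustration of the usercopy region check: an n-byte copy starting
 * at offset within an object is allowed only if it lies entirely
 * inside the [useroffset, useroffset + usersize] whitelist.
 */
static int region_ok(unsigned long offset, unsigned long n,
		     unsigned long useroffset, unsigned long usersize)
{
	if (offset < useroffset)
		return 0;
	if (offset - useroffset > usersize)
		return 0;
	if (n > useroffset - offset + usersize)
		return 0;
	return 1;
}

int main(void)
{
	/* Suppose the cache whitelists bytes [16, 48) of each object. */
	printf("%d\n", region_ok(16, 32, 16, 32)); /* 1: exactly the region */
	printf("%d\n", region_ok(8, 8, 16, 32));   /* 0: starts before it */
	printf("%d\n", region_ok(40, 16, 16, 32)); /* 0: runs past its end */
	return 0;
}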
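
The useroffset/usersize whitelist consulted here is defined when the
cache is created. Assuming the kmem_cache_create_usercopy() interface
introduced earlier in this series, a sketch of whitelisting a single
field (the struct, cache, and init function names below are
hypothetical examples, not from this patch):

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/stddef.h>

/* Hypothetical object: only 'buf' should ever be copied to/from userspace. */
struct example_obj {
	unsigned long flags;	/* kernel-internal, never exposed */
	char buf[64];		/* the whitelisted usercopy region */
	void *private;		/* kernel-internal, never exposed */
};

static struct kmem_cache *example_cachep;

static int __init example_cache_init(void)
{
	/* Whitelist only the bytes of 'buf' within each object. */
	example_cachep = kmem_cache_create_usercopy("example_obj",
			sizeof(struct example_obj), 0, 0,
			offsetof(struct example_obj, buf),	/* useroffset */
			sizeof(((struct example_obj *)0)->buf),	/* usersize */
			NULL);
	return example_cachep ? 0 : -ENOMEM;
}

With such a cache, a copy_to_user() or copy_from_user() that touches
'flags' or 'private' would make __check_heap_object() return the cache
name and trigger the usercopy report, while copies confined to 'buf'
pass the checks.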