From patchwork Thu Jun  9 23:39:27 2016
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 9168599
Date: Thu, 9 Jun 2016 19:39:27 -0400
From: Rik van Riel
To: Kees Cook
Cc: kernel-hardening@lists.openwall.com, Brad Spengler, PaX Team,
    Casey Schaufler, Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Andrew Morton
Message-ID: <20160609193927.2f03f128@annuminas.surriel.com>
In-Reply-To: <1465420302-23754-1-git-send-email-keescook@chromium.org>
References: <1465420302-23754-1-git-send-email-keescook@chromium.org>
Organization: Red Hat, Inc.
Subject: [kernel-hardening] [RFC][PATCH 6/4] mm: disallow user copy to/from separately allocated pages

This patch adds new functionality to the usercopy hardening method. I
hope it will be useful both to the mainline community and the
grsecurity community.

I have not figured out what to do about CMA yet, but that can probably
be handled in a follow-up patch.

---8<---
Subject: mm: disallow user copy to/from separately allocated pages

A single copy_from_user() or copy_to_user() should go to or from a
single kernel object. Inside the slab, or on the stack, we can track
the individual objects. For the general kernel heap, we do not know
exactly where each object is, but we can tell whether the whole range
from ptr to ptr + n is inside the same base page, or inside the same
compound page.

If the start and end of the "object" are in pages that were not
allocated together, we are likely dealing with an overflow from one
object into the next page, and this copy should be disallowed.

The kernel will have some objects that cross page boundaries in
sections like .rodata, .bss, etc. Copying from those needs to be
allowed; they can be identified with PageReserved.
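To make the span test concrete, here is a minimal standalone sketch of
the same-base-page check (not part of the patch; the PAGE_SHIFT value
of 12, i.e. 4 KiB pages, is an assumption for illustration):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12			/* assumed 4 KiB pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* True if [ptr, ptr + n) lies entirely within one base page. */
static bool same_base_page(uintptr_t ptr, unsigned long n)
{
	return (ptr & PAGE_MASK) == ((ptr + n - 1) & PAGE_MASK);
}

int main(void)
{
	/* Fits inside one page: allowed with no further checks. */
	printf("%d\n", same_base_page(0x1000, 64));	/* prints 1 */

	/* Crosses a page boundary: the compound-page test must decide. */
	printf("%d\n", same_base_page(0x1fc0, 128));	/* prints 0 */
	return 0;
}

Masking both endpoints down to their page base and comparing is the
same idiom the patch uses inside check_heap_object() below.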
TODO: figure out what to do with CMA memory

Signed-off-by: Rik van Riel
---
 mm/usercopy.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index e09c33070759..be75d97c5d75 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -109,7 +109,7 @@ static inline bool check_kernel_text_object(const void *ptr, unsigned long n)
 
 static inline const char *check_heap_object(const void *ptr, unsigned long n)
 {
-	struct page *page;
+	struct page *page, *endpage;
 
 	if (ZERO_OR_NULL_PTR(ptr))
 		return "<null>";
@@ -118,11 +118,26 @@ static inline const char *check_heap_object(const void *ptr, unsigned long n)
 		return NULL;
 
 	page = virt_to_head_page(ptr);
-	if (!PageSlab(page))
+	if (PageSlab(page))
+		/* Check allocator for flags and size. */
+		return __check_heap_object(ptr, n, page);
+
+	/* Is the object wholly within one base page? Great. */
+	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
+		   ((unsigned long)(ptr + n - 1) & (unsigned long)PAGE_MASK)))
+		return NULL;
+
+	/* Are the start and end inside the same compound page? Great. */
+	endpage = virt_to_head_page(ptr + n - 1);
+	if (likely(endpage == page))
+		return NULL;
+
+	/* Is this a special area, eg. .rodata, .bss, or device memory? */
+	if (PageReserved(page) && PageReserved(endpage))
 		return NULL;
 
-	/* Check allocator for flags and size. */
-	return __check_heap_object(ptr, n, page);
+	/* Uh oh. The "object" spans several independently allocated pages. */
+	return "<spans multiple pages>";
 }
 
 /*
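For what it's worth, the compound-page case can be illustrated with a
hypothetical in-kernel fragment (not part of the patch; the function
name span_check_demo is made up). A multi-page allocation made with
__GFP_COMP has a single head page, so a copy spanning both pages still
passes the new check; without __GFP_COMP each page stands alone and
the same copy would now be rejected:

#include <linux/bug.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static void span_check_demo(void)
{
	/* Two contiguous pages marked as one compound allocation. */
	struct page *comp = alloc_pages(GFP_KERNEL | __GFP_COMP, 1);
	void *buf;

	if (!comp)
		return;
	buf = page_address(comp);

	/*
	 * Both ends of the buffer resolve to the same head page, so a
	 * copy_to_user() of the full 2 * PAGE_SIZE range would be
	 * allowed by check_heap_object() above.
	 */
	WARN_ON(virt_to_head_page(buf) !=
		virt_to_head_page(buf + 2 * PAGE_SIZE - 1));

	__free_pages(comp, 1);
}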