From patchwork Thu Jan 11 02:03:01 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10156615
Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm
Delivered-To: mailing list kernel-hardening@lists.openwall.com
From: Kees Cook
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, David Windsor, Ingo Molnar, Andrew Morton, Thomas Gleixner, Andy Lutomirski, Linus Torvalds, Alexander Viro, Christoph Hellwig, Christoph Lameter, "David S. Miller", Laura Abbott, Mark Rutland, "Martin K. Petersen", Paolo Bonzini, Christian Borntraeger, Christoffer Dall, Dave Kleikamp, Jan Kara, Luis de Bethencourt, Marc Zyngier, Rik van Riel, Matthew Garrett, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, netdev@vger.kernel.org, linux-mm@kvack.org, kernel-hardening@lists.openwall.com
Date: Wed, 10 Jan 2018 18:03:01 -0800
Message-Id: <1515636190-24061-30-git-send-email-keescook@chromium.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1515636190-24061-1-git-send-email-keescook@chromium.org>
References: <1515636190-24061-1-git-send-email-keescook@chromium.org>
Subject: [kernel-hardening] [PATCH 29/38] fork: Define usercopy region in mm_struct slab caches

From: David Windsor

In support of usercopy hardening, this patch defines a region in the
mm_struct slab caches in which userspace copy operations are allowed.
Only the auxv field is copied to userspace.

cache object allocation:
    kernel/fork.c:
        #define allocate_mm()	(kmem_cache_alloc(mm_cachep, GFP_KERNEL))

        dup_mm():
            ...
            mm = allocate_mm();

        copy_mm(...):
            ...
            dup_mm();

        copy_process(...):
            ...
            copy_mm(...)

        _do_fork(...):
            ...
            copy_process(...)

example usage trace:
    fs/binfmt_elf.c:
        create_elf_tables(...):
            ...
            elf_info = (elf_addr_t *)current->mm->saved_auxv;
            ...
            copy_to_user(..., elf_info, ei_index * sizeof(elf_addr_t))

        load_elf_binary(...):
            ...
            create_elf_tables(...);

This region is known as the slab cache's usercopy region. Slab caches
can now check that each dynamically sized copy operation involving
cache-managed memory falls entirely within the slab's usercopy region.

This patch is modified from Brad Spengler/PaX Team's PAX_USERCOPY
whitelisting code in the last public patch of grsecurity/PaX based on
my understanding of the code. Changes or omissions from the original
code are mine and don't reflect the original grsecurity/PaX code.

Signed-off-by: David Windsor
[kees: adjust commit log, split patch, provide usage trace]
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Thomas Gleixner
Cc: Andy Lutomirski
Signed-off-by: Kees Cook
Acked-by: Rik van Riel
---
 kernel/fork.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/fork.c b/kernel/fork.c
index 432eadf6b58c..82f2a0441d3b 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2225,9 +2225,11 @@ void __init proc_caches_init(void)
 	 * maximum number of CPU's we can ever have. The cpumask_allocation
 	 * is at the end of the structure, exactly for that reason.
 	 */
-	mm_cachep = kmem_cache_create("mm_struct",
+	mm_cachep = kmem_cache_create_usercopy("mm_struct",
 			sizeof(struct mm_struct), ARCH_MIN_MMSTRUCT_ALIGN,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
+			offsetof(struct mm_struct, saved_auxv),
+			sizeof_field(struct mm_struct, saved_auxv),
 			NULL);
 	vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);
 	mmap_init();