From patchwork Sun Jun 26 21:55:35 2016
X-Patchwork-Submitter: Andy Lutomirski
X-Patchwork-Id: 9199671
From: Andy Lutomirski
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    Borislav Petkov, Nadav Amit, Kees Cook, Brian Gerst,
    "kernel-hardening@lists.openwall.com", Linus Torvalds,
    Josh Poimboeuf, Jann Horn, Heiko Carstens, Andy Lutomirski
Date: Sun, 26 Jun 2016 14:55:35 -0700
Message-Id: <5ec59addf4d5ee9ed8dc51f9e927867534d0f911.1466974736.git.luto@kernel.org>
Subject: [kernel-hardening] [PATCH v4 13/29] x86/dumpstack: Try harder to get a call trace on stack overflow

If we overflow the stack, print_context_stack() will abort. Detect this
case and rewind back into the valid part of the stack so that we can
trace it.

Reviewed-by: Josh Poimboeuf
Signed-off-by: Andy Lutomirski
---
 arch/x86/kernel/dumpstack.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index 0d05f113805e..8a2113dfc154 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -87,7 +87,7 @@ static inline int valid_stack_ptr(struct task_struct *task,
 		else
 			return 0;
 	}
-	return p > t && p < t + THREAD_SIZE - size;
+	return p >= t && p < t + THREAD_SIZE - size;
 }
 
 unsigned long
@@ -98,6 +98,14 @@ print_context_stack(struct task_struct *task,
 {
 	struct stack_frame *frame = (struct stack_frame *)bp;
 
+	/*
+	 * If we overflowed the stack into a guard page, jump back to the
+	 * bottom of the usable stack.
+	 */
+	if ((unsigned long)task_stack_page(task) - (unsigned long)stack <
+	    PAGE_SIZE)
+		stack = (unsigned long *)task_stack_page(task);
+
 	while (valid_stack_ptr(task, stack, sizeof(*stack), end)) {
 		unsigned long addr;
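
For readers outside the kernel tree, here is a minimal, self-contained sketch
of the rewind check the second hunk adds. It is not kernel code: PAGE_SIZE,
the stack base address, and the rewind_if_overflowed() helper are made-up
stand-ins chosen for illustration, and the real task_stack_page() /
valid_stack_ptr() machinery is not modeled.

/*
 * Illustrative model only -- not kernel code. The constant and the stack
 * base below are invented values; rewind_if_overflowed() is a hypothetical
 * stand-in for the check added to print_context_stack().
 */
#include <stdio.h>

#define PAGE_SIZE	4096UL

/*
 * Pretend this is what task_stack_page(task) returns: the lowest usable
 * address of the task's stack.
 */
static const unsigned long stack_base = 0x10000000UL;

static unsigned long rewind_if_overflowed(unsigned long stack)
{
	/*
	 * If the walker's pointer sits within one page below the stack base
	 * (i.e. in the guard page hit by the overflow), jump back to the
	 * bottom of the usable stack. The subtraction is unsigned, so a
	 * pointer at or above the base wraps to a huge value and the
	 * condition is false.
	 */
	if (stack_base - stack < PAGE_SIZE)
		return stack_base;
	return stack;
}

int main(void)
{
	unsigned long overflowed = stack_base - 64;	/* in the guard page */
	unsigned long in_bounds  = stack_base + 512;	/* inside the stack */

	printf("%#lx -> %#lx\n", overflowed, rewind_if_overflowed(overflowed));
	printf("%#lx -> %#lx\n", in_bounds,  rewind_if_overflowed(in_bounds));
	return 0;
}

The unsigned subtraction is what lets a single comparison do the job: a
pointer that has spilled into the guard page just below the base yields a
small positive difference, while a pointer at or above the base wraps around
and fails the check. The first hunk's change from "p > t" to "p >= t" then
allows the walk to start at exactly task_stack_page(task) after the rewind.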