From patchwork Thu Nov  5 09:21:33 2020
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 11883735
Date: Thu, 5 Nov 2020 10:21:33 +0100
Message-Id: <20201105092133.2075331-1-elver@google.com>
Subject: [PATCH] kfence: Use pt_regs to generate stack trace on faults
From: Marco Elver <elver@google.com>
To: elver@google.com, akpm@linux-foundation.org
Cc: mark.rutland@arm.com, jannh@google.com, x86@kernel.org,
    linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
    linux-mm@kvack.org, glider@google.com,
    linux-arm-kernel@lists.infradead.org, dvyukov@google.com

Instead of removing the fault handling portion of the stack trace based
on the fault handler's name, just use struct pt_regs directly.

Change kfence_handle_page_fault() to take a struct pt_regs, and plumb it
through to kfence_report_error() for out-of-bounds, use-after-free, or
invalid access errors, where pt_regs is used to generate the stack trace.

If the kernel is built with CONFIG_DEBUG_KERNEL, also show registers for
more information.
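For illustration, the new calling convention as seen from an architecture's
fault handler looks roughly as follows; this is a minimal sketch with a
hypothetical caller name, modeled on the arm64 and x86 hunks below, not code
taken from this patch:

	#include <linux/kfence.h>

	/*
	 * Sketch of a kernel-fault path: the fault address is passed together
	 * with the pt_regs of the faulting context, so the KFENCE report is
	 * built from the exact register state instead of truncating the stack
	 * trace at a fault-handler entry name.
	 */
	static void example_do_kernel_fault(unsigned long addr, struct pt_regs *regs)
	{
		/* Returns true if the address belongs to KFENCE and was handled. */
		if (kfence_handle_page_fault(addr, regs))
			return;

		/* ... otherwise continue with the normal kernel-fault path ... */
	}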
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm64/include/asm/kfence.h |  2 --
 arch/arm64/mm/fault.c           |  2 +-
 arch/x86/include/asm/kfence.h   |  6 ----
 arch/x86/mm/fault.c             |  2 +-
 include/linux/kfence.h          |  5 +--
 mm/kfence/core.c                | 10 +++---
 mm/kfence/kfence.h              |  4 +--
 mm/kfence/report.c              | 63 +++++++++++++++++++--------------
 8 files changed, 48 insertions(+), 46 deletions(-)

diff --git a/arch/arm64/include/asm/kfence.h b/arch/arm64/include/asm/kfence.h
index 5ac0f599cc9a..6c0afeeab635 100644
--- a/arch/arm64/include/asm/kfence.h
+++ b/arch/arm64/include/asm/kfence.h
@@ -5,8 +5,6 @@
 
 #include
 
-#define KFENCE_SKIP_ARCH_FAULT_HANDLER "el1_sync"
-
 static inline bool arch_kfence_init_pool(void) { return true; }
 
 static inline bool kfence_protect_page(unsigned long addr, bool protect)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 2d60204b4ed2..183d1e6dd9e0 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -323,7 +323,7 @@ static void __do_kernel_fault(unsigned long addr, unsigned int esr,
 	} else if (addr < PAGE_SIZE) {
 		msg = "NULL pointer dereference";
 	} else {
-		if (kfence_handle_page_fault(addr))
+		if (kfence_handle_page_fault(addr, regs))
 			return;
 
 		msg = "paging request";
diff --git a/arch/x86/include/asm/kfence.h b/arch/x86/include/asm/kfence.h
index beeac105dae7..2f3f877a7a5c 100644
--- a/arch/x86/include/asm/kfence.h
+++ b/arch/x86/include/asm/kfence.h
@@ -11,12 +11,6 @@
 #include
 #include
 
-/*
- * The page fault handler entry function, up to which the stack trace is
- * truncated in reports.
- */
-#define KFENCE_SKIP_ARCH_FAULT_HANDLER "asm_exc_page_fault"
-
 /* Force 4K pages for __kfence_pool. */
 static inline bool arch_kfence_init_pool(void)
 {
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index e42db2836438..53d732161b4f 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -727,7 +727,7 @@ no_context(struct pt_regs *regs, unsigned long error_code,
 		efi_recover_from_page_fault(address);
 
 	/* Only not-present faults should be handled by KFENCE. */
-	if (!(error_code & X86_PF_PROT) && kfence_handle_page_fault(address))
+	if (!(error_code & X86_PF_PROT) && kfence_handle_page_fault(address, regs))
 		return;
 
 oops:
diff --git a/include/linux/kfence.h b/include/linux/kfence.h
index ed2d48acdafe..98a97f9d43cd 100644
--- a/include/linux/kfence.h
+++ b/include/linux/kfence.h
@@ -171,6 +171,7 @@ static __always_inline __must_check bool kfence_free(void *addr)
 /**
  * kfence_handle_page_fault() - perform page fault handling for KFENCE pages
  * @addr: faulting address
+ * @regs: current struct pt_regs (can be NULL, but shows full stack trace)
  *
  * Return:
  *  * false - address outside KFENCE pool,
@@ -181,7 +182,7 @@ static __always_inline __must_check bool kfence_free(void *addr)
  * cases KFENCE prints an error message and marks the offending page as
 * present, so that the kernel can proceed.
 */
-bool __must_check kfence_handle_page_fault(unsigned long addr);
+bool __must_check kfence_handle_page_fault(unsigned long addr, struct pt_regs *regs);
 
 #else /* CONFIG_KFENCE */
 
@@ -194,7 +195,7 @@ static inline size_t kfence_ksize(const void *addr) { return 0; }
 static inline void *kfence_object_start(const void *addr) { return NULL; }
 static inline void __kfence_free(void *addr) { }
 static inline bool __must_check kfence_free(void *addr) { return false; }
-static inline bool __must_check kfence_handle_page_fault(unsigned long addr) { return false; }
+static inline bool __must_check kfence_handle_page_fault(unsigned long addr, struct pt_regs *regs) { return false; }
 
 #endif
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 9d597013cd5d..9358f42a9a9e 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -212,7 +212,7 @@ static inline bool check_canary_byte(u8 *addr)
 		return true;
 
 	atomic_long_inc(&counters[KFENCE_COUNTER_BUGS]);
-	kfence_report_error((unsigned long)addr, addr_to_metadata((unsigned long)addr),
+	kfence_report_error((unsigned long)addr, NULL, addr_to_metadata((unsigned long)addr),
 			    KFENCE_ERROR_CORRUPTION);
 	return false;
 }
@@ -351,7 +351,7 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
 	if (meta->state != KFENCE_OBJECT_ALLOCATED || meta->addr != (unsigned long)addr) {
 		/* Invalid or double-free, bail out. */
 		atomic_long_inc(&counters[KFENCE_COUNTER_BUGS]);
-		kfence_report_error((unsigned long)addr, meta, KFENCE_ERROR_INVALID_FREE);
+		kfence_report_error((unsigned long)addr, NULL, meta, KFENCE_ERROR_INVALID_FREE);
 		raw_spin_unlock_irqrestore(&meta->lock, flags);
 		return;
 	}
@@ -752,7 +752,7 @@ void __kfence_free(void *addr)
 		kfence_guarded_free(addr, meta, false);
 }
 
-bool kfence_handle_page_fault(unsigned long addr)
+bool kfence_handle_page_fault(unsigned long addr, struct pt_regs *regs)
 {
 	const int page_index = (addr - (unsigned long)__kfence_pool) / PAGE_SIZE;
 	struct kfence_metadata *to_report = NULL;
@@ -815,11 +815,11 @@ bool kfence_handle_page_fault(unsigned long addr)
 
 out:
 	if (to_report) {
-		kfence_report_error(addr, to_report, error_type);
+		kfence_report_error(addr, regs, to_report, error_type);
 		raw_spin_unlock_irqrestore(&to_report->lock, flags);
 	} else {
 		/* This may be a UAF or OOB access, but we can't be sure. */
-		kfence_report_error(addr, NULL, KFENCE_ERROR_INVALID);
+		kfence_report_error(addr, regs, NULL, KFENCE_ERROR_INVALID);
 	}
 
 	return kfence_unprotect(addr); /* Unprotect and let access proceed. */
diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index f115aabc2052..fa3579d03089 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -99,8 +99,8 @@ enum kfence_error_type {
 	KFENCE_ERROR_INVALID_FREE,	/* Invalid free. */
 };
 
-void kfence_report_error(unsigned long address, const struct kfence_metadata *meta,
-			 enum kfence_error_type type);
+void kfence_report_error(unsigned long address, struct pt_regs *regs,
+			 const struct kfence_metadata *meta, enum kfence_error_type type);
 
 void kfence_print_object(struct seq_file *seq, const struct kfence_metadata *meta);
 
diff --git a/mm/kfence/report.c b/mm/kfence/report.c
index 0fdaa3ddf1b4..4dedc2ff8f28 100644
--- a/mm/kfence/report.c
+++ b/mm/kfence/report.c
@@ -5,6 +5,7 @@
 #include
 #include
 #include
+#include <linux/sched/debug.h>
 #include
 #include
 #include
@@ -36,7 +37,6 @@ static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries
 {
	char buf[64];
	int skipnr, fallback = 0;
-	bool is_access_fault = false;
 
	if (type) {
		/* Depending on error type, find different stack entries. */
@@ -44,8 +44,12 @@
		case KFENCE_ERROR_UAF:
		case KFENCE_ERROR_OOB:
		case KFENCE_ERROR_INVALID:
-			is_access_fault = true;
-			break;
+			/*
+			 * kfence_handle_page_fault() may be called with pt_regs
+			 * set to NULL; in that case we'll simply show the full
+			 * stack trace.
+			 */
+			return 0;
		case KFENCE_ERROR_CORRUPTION:
		case KFENCE_ERROR_INVALID_FREE:
			break;
@@ -55,26 +59,21 @@ static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries
	for (skipnr = 0; skipnr < num_entries; skipnr++) {
		int len = scnprintf(buf, sizeof(buf), "%ps", (void *)stack_entries[skipnr]);
 
-		if (is_access_fault) {
-			if (!strncmp(buf, KFENCE_SKIP_ARCH_FAULT_HANDLER, len))
-				goto found;
-		} else {
-			if (str_has_prefix(buf, "kfence_") || str_has_prefix(buf, "__kfence_") ||
-			    !strncmp(buf, "__slab_free", len)) {
-				/*
-				 * In case of tail calls from any of the below
-				 * to any of the above.
-				 */
-				fallback = skipnr + 1;
-			}
-
-			/* Also the *_bulk() variants by only checking prefixes. */
-			if (str_has_prefix(buf, "kfree") ||
-			    str_has_prefix(buf, "kmem_cache_free") ||
-			    str_has_prefix(buf, "__kmalloc") ||
-			    str_has_prefix(buf, "kmem_cache_alloc"))
-				goto found;
+		if (str_has_prefix(buf, "kfence_") || str_has_prefix(buf, "__kfence_") ||
+		    !strncmp(buf, "__slab_free", len)) {
+			/*
+			 * In case of tail calls from any of the below
+			 * to any of the above.
+			 */
+			fallback = skipnr + 1;
		}
+
+		/* Also the *_bulk() variants by only checking prefixes. */
+		if (str_has_prefix(buf, "kfree") ||
+		    str_has_prefix(buf, "kmem_cache_free") ||
+		    str_has_prefix(buf, "__kmalloc") ||
+		    str_has_prefix(buf, "kmem_cache_alloc"))
+			goto found;
	}
	if (fallback < num_entries)
		return fallback;
@@ -152,13 +151,20 @@ static void print_diff_canary(unsigned long address, size_t bytes_to_show,
	pr_cont(" ]");
 }
 
-void kfence_report_error(unsigned long address, const struct kfence_metadata *meta,
-			 enum kfence_error_type type)
+void kfence_report_error(unsigned long address, struct pt_regs *regs,
+			 const struct kfence_metadata *meta, enum kfence_error_type type)
 {
	unsigned long stack_entries[KFENCE_STACK_DEPTH] = { 0 };
-	int num_stack_entries = stack_trace_save(stack_entries, KFENCE_STACK_DEPTH, 1);
-	int skipnr = get_stack_skipnr(stack_entries, num_stack_entries, &type);
	const ptrdiff_t object_index = meta ? meta - kfence_metadata : -1;
+	int num_stack_entries;
+	int skipnr = 0;
+
+	if (regs) {
+		num_stack_entries = stack_trace_save_regs(regs, stack_entries, KFENCE_STACK_DEPTH, 0);
+	} else {
+		num_stack_entries = stack_trace_save(stack_entries, KFENCE_STACK_DEPTH, 1);
+		skipnr = get_stack_skipnr(stack_entries, num_stack_entries, &type);
+	}
 
	/* Require non-NULL meta, except if KFENCE_ERROR_INVALID. */
	if (WARN_ON(type != KFENCE_ERROR_INVALID && !meta))
@@ -222,7 +228,10 @@ void kfence_report_error(unsigned long address, const struct kfence_metadata *me
 
	/* Print report footer. */
	pr_err("\n");
-	dump_stack_print_info(KERN_ERR);
+	if (IS_ENABLED(CONFIG_DEBUG_KERNEL) && regs)
+		show_regs(regs);
+	else
+		dump_stack_print_info(KERN_ERR);
	pr_err("==================================================================\n");
 
	lockdep_on();