From patchwork Tue Aug 8 10:17:27 2023
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 13345926
Date: Tue, 8 Aug 2023 12:17:27 +0200
In-Reply-To: <20230808102049.465864-1-elver@google.com>
References: <20230808102049.465864-1-elver@google.com>
Message-ID: <20230808102049.465864-3-elver@google.com>
Subject: [PATCH v3 3/3] list_debug: Introduce CONFIG_DEBUG_LIST_MINIMAL
From: Marco Elver
To: elver@google.com, Andrew Morton, Kees Cook
Cc: Guenter Roeck, Peter Zijlstra, Mark Rutland, Steven Rostedt,
    Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose,
    Zenghui Yu, Catalin Marinas, Will Deacon, Nathan Chancellor,
    Nick Desaulniers, Tom Rix, Miguel Ojeda, Sami Tolvanen,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, llvm@lists.linux.dev,
    Dmitry Vyukov, Alexander Potapenko, kasan-dev@googlegroups.com,
    linux-toolchains@vger.kernel.org

Numerous production kernel configs (see [1, 2]) choose to enable
CONFIG_DEBUG_LIST, which is also recommended by KSPP for hardened
configs [3]. The feature was never designed with performance in mind,
yet common list manipulation happens in hot paths all over the kernel.

Introduce CONFIG_DEBUG_LIST_MINIMAL, which performs list pointer
checking inline, and delegates to the reporting slow path only upon
list corruption.

To generate optimal machine code with CONFIG_DEBUG_LIST_MINIMAL:

1. Elide checking for pointer values which upon dereference would
   result in an immediate access fault -- therefore "minimal" checks.
   The trade-off is lower-quality error reports.

2. Use the newly introduced __preserve_most function attribute
   (available with Clang, but not yet with GCC) to minimize the code
   footprint for calling the reporting slow path. As a result, the
   function size of callers is reduced by avoiding saving registers
   before calling the rarely called reporting slow path.

   Note that all TUs in lib/Makefile already disable function tracing,
   including list_debug.c, so __preserve_most's implied notrace has no
   effect in this case.

3. Because the inline checks are a subset of the full set of checks in
   __list_*_valid_or_report(), always return false if the inline
   checks failed. This avoids a redundant compare and conditional
   branch right after returning from the slow path.

As a side-effect of the checks being inline, if the compiler can prove
some condition to always be true, it can completely elide some checks.

Running netperf with CONFIG_DEBUG_LIST_MINIMAL (using a Clang compiler
with "preserve_most") shows throughput improvements, in my case of ~7%
on average (up to 20-30% on some test cases). A stand-alone sketch of
the fast-path/slow-path pattern follows below.
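As an illustration of the pattern (a minimal user-space sketch built
with Clang, not kernel code; report_corruption() and add_valid() are
hypothetical stand-ins for __list_add_valid_or_report() and
__list_add_valid()):

#include <stdbool.h>

struct list_head {
	struct list_head *next, *prev;
};

/*
 * Rarely taken reporting slow path. With Clang's preserve_most, the
 * callee saves nearly all registers, so the inlined fast path below
 * needs no caller-saved spills around this call; cold moves it out of
 * the hot text.
 */
__attribute__((preserve_most, cold, noinline))
static bool report_corruption(struct list_head *new,
			      struct list_head *prev,
			      struct list_head *next)
{
	/* The full checks and WARN-style reporting would live here. */
	return false;
}

/* Inlined fast path, mirroring the shape of __list_add_valid(). */
static inline bool add_valid(struct list_head *new,
			     struct list_head *prev,
			     struct list_head *next)
{
	/* Only checks whose operands get dereferenced anyway. */
	if (__builtin_expect(next->prev == prev && prev->next == next &&
			     new != prev && new != next, 1))
		return true;
	/*
	 * The inline checks are a subset of the full checks, so the
	 * result is already known to be false; there is no need to
	 * re-test the slow path's return value.
	 */
	report_corruption(new, prev, next);
	return false;
}

Because report_corruption() preserves (almost) all registers, callers
of add_valid() stay lean: the compiler does not have to spill live
values around the rarely taken call.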
Link: https://r.android.com/1266735 [1]
Link: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/main/config [2]
Link: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings [3]
Signed-off-by: Marco Elver
---
v3:
* Rename ___list_*_valid() to __list_*_valid_or_report().
* More comments.

v2:
* Note that lib/Makefile disables function tracing for everything and
  __preserve_most's implied notrace is a noop here.
---
 arch/arm64/kvm/hyp/nvhe/list_debug.c |  2 +
 include/linux/list.h                 | 64 +++++++++++++++++++++++++---
 lib/Kconfig.debug                    | 15 +++++++
 lib/list_debug.c                     |  2 +
 4 files changed, 77 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/list_debug.c b/arch/arm64/kvm/hyp/nvhe/list_debug.c
index 16266a939a4c..46a2d4f2b3c6 100644
--- a/arch/arm64/kvm/hyp/nvhe/list_debug.c
+++ b/arch/arm64/kvm/hyp/nvhe/list_debug.c
@@ -26,6 +26,7 @@ static inline __must_check bool nvhe_check_data_corruption(bool v)
 
 /* The predicates checked here are taken from lib/list_debug.c. */
 
+__list_valid_slowpath
 bool __list_add_valid_or_report(struct list_head *new, struct list_head *prev,
 				struct list_head *next)
 {
@@ -37,6 +38,7 @@ bool __list_add_valid_or_report(struct list_head *new, struct list_head *prev,
 	return true;
 }
 
+__list_valid_slowpath
 bool __list_del_entry_valid_or_report(struct list_head *entry)
 {
 	struct list_head *prev, *next;
diff --git a/include/linux/list.h b/include/linux/list.h
index 130c6a1bb45c..066fe33e99bf 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -39,38 +39,90 @@ static inline void INIT_LIST_HEAD(struct list_head *list)
 }
 
 #ifdef CONFIG_DEBUG_LIST
+
+#ifdef CONFIG_DEBUG_LIST_MINIMAL
+# define __list_valid_slowpath __cold __preserve_most
+#else
+# define __list_valid_slowpath
+#endif
+
 /*
  * Performs the full set of list corruption checks before __list_add().
  * On list corruption reports a warning, and returns false.
  */
-extern bool __list_add_valid_or_report(struct list_head *new,
-				       struct list_head *prev,
-				       struct list_head *next);
+extern bool __list_valid_slowpath __list_add_valid_or_report(struct list_head *new,
+							      struct list_head *prev,
+							      struct list_head *next);
 
 /*
  * Performs list corruption checks before __list_add(). Returns false if a
  * corruption is detected, true otherwise.
+ *
+ * With CONFIG_DEBUG_LIST_MINIMAL set, performs minimal list integrity checking
+ * (that do not result in a fault) inline, and only if a corruption is detected
+ * calls the reporting function __list_add_valid_or_report().
  */
 static __always_inline bool __list_add_valid(struct list_head *new,
 					     struct list_head *prev,
 					     struct list_head *next)
 {
-	return __list_add_valid_or_report(new, prev, next);
+	bool ret = true;
+
+	if (IS_ENABLED(CONFIG_DEBUG_LIST_MINIMAL)) {
+		/*
+		 * In the minimal config, elide checking if next and prev are
+		 * NULL, since the immediate dereference of them below would
+		 * result in a fault if NULL.
+		 *
+		 * With the minimal config we can afford to inline the checks,
+		 * which also gives the compiler a chance to elide some of them
+		 * completely if they can be proven at compile-time. If one of
+		 * the pre-conditions does not hold, the slow-path will show a
+		 * report which pre-condition failed.
+		 */
+		if (likely(next->prev == prev && prev->next == next &&
+			   new != prev && new != next))
+			return true;
+		ret = false;
+	}
+
+	ret &= __list_add_valid_or_report(new, prev, next);
+	return ret;
 }
 
 /*
  * Performs the full set of list corruption checks before __list_del_entry().
  * On list corruption reports a warning, and returns false.
  */
-extern bool __list_del_entry_valid_or_report(struct list_head *entry);
+extern bool __list_valid_slowpath __list_del_entry_valid_or_report(struct list_head *entry);
 
 /*
  * Performs list corruption checks before __list_del_entry(). Returns false if a
  * corruption is detected, true otherwise.
+ *
+ * With CONFIG_DEBUG_LIST_MINIMAL set, performs minimal list integrity checking
+ * (that do not result in a fault) inline, and only if a corruption is detected
+ * calls the reporting function __list_del_entry_valid_or_report().
  */
 static __always_inline bool __list_del_entry_valid(struct list_head *entry)
 {
-	return __list_del_entry_valid_or_report(entry);
+	bool ret = true;
+
+	if (IS_ENABLED(CONFIG_DEBUG_LIST_MINIMAL)) {
+		struct list_head *prev = entry->prev;
+		struct list_head *next = entry->next;
+
+		/*
+		 * In the minimal config, elide checking if next and prev are
+		 * NULL, LIST_POISON1 or LIST_POISON2, since the immediate
+		 * dereference of them below would result in a fault.
+		 */
+		if (likely(prev->next == entry && next->prev == entry))
+			return true;
+		ret = false;
+	}
+
+	ret &= __list_del_entry_valid_or_report(entry);
+	return ret;
 }
 
 #else
 static inline bool __list_add_valid(struct list_head *new,
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index fbc89baf7de6..e72cf08af0fa 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1680,6 +1680,21 @@ config DEBUG_LIST
 
 	  If unsure, say N.
 
+config DEBUG_LIST_MINIMAL
+	bool "Minimal linked list debug checks"
+	default !DEBUG_KERNEL
+	depends on DEBUG_LIST
+	help
+	  Only perform the minimal set of checks in the linked-list walking
+	  routines to catch corruptions that are not guaranteed to result in
+	  an immediate access fault.
+
+	  This trades lower quality error reports for improved performance:
+	  the generated code should be more optimal and provide trade-offs
+	  that may better serve safety- and performance-critical environments.
+
+	  If unsure, say Y.
+
 config DEBUG_PLIST
 	bool "Debug priority linked list manipulation"
 	depends on DEBUG_KERNEL
diff --git a/lib/list_debug.c b/lib/list_debug.c
index 2def33b1491f..0ff547910dd0 100644
--- a/lib/list_debug.c
+++ b/lib/list_debug.c
@@ -17,6 +17,7 @@
  * attempt).
  */
 
+__list_valid_slowpath
 bool __list_add_valid_or_report(struct list_head *new, struct list_head *prev,
 				struct list_head *next)
 {
@@ -39,6 +40,7 @@ bool __list_add_valid_or_report(struct list_head *new, struct list_head *prev,
 }
 EXPORT_SYMBOL(__list_add_valid_or_report);
 
+__list_valid_slowpath
 bool __list_del_entry_valid_or_report(struct list_head *entry)
 {
 	struct list_head *prev, *next;
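To see what the minimal del-entry predicate catches, and what it
deliberately leaves to the fault handler, here is a small hypothetical
user-space sketch of the same checks (not part of the patch). A NULL
or LIST_POISON neighbour pointer would fault on dereference here
rather than be reported, which is exactly the trade-off the commit
message describes:

#include <stdbool.h>
#include <stdio.h>

struct list_head {
	struct list_head *next, *prev;
};

/* Mirrors the inline fast path of __list_del_entry_valid(). */
static bool del_entry_valid(struct list_head *entry)
{
	struct list_head *prev = entry->prev;
	struct list_head *next = entry->next;

	return prev->next == entry && next->prev == entry;
}

int main(void)
{
	struct list_head a, b, c;

	/* Build a valid circular list: a <-> b <-> c <-> a. */
	a.next = &b; b.prev = &a;
	b.next = &c; c.prev = &b;
	c.next = &a; a.prev = &c;

	printf("intact: %d\n", del_entry_valid(&b));    /* prints 1 */

	/* Simulate corruption by a stray write to a neighbour. */
	a.next = &c;
	printf("corrupted: %d\n", del_entry_valid(&b)); /* prints 0 */

	return 0;
}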