From patchwork Tue Aug 8 10:17:25 2023
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 13345924
Date: Tue, 8 Aug 2023 12:17:25 +0200
Message-ID: <20230808102049.465864-1-elver@google.com>
Subject: [PATCH v3 1/3] compiler_types: Introduce the Clang __preserve_most function attribute
From: Marco Elver
To: elver@google.com, Andrew Morton, Kees Cook
Cc: Guenter Roeck, Peter Zijlstra, Mark Rutland, Steven Rostedt, Marc Zyngier,
    Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
    Will Deacon, Nathan Chancellor, Nick Desaulniers, Tom Rix, Miguel Ojeda,
    Sami Tolvanen, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, llvm@lists.linux.dev, Dmitry Vyukov,
    Alexander Potapenko, kasan-dev@googlegroups.com, linux-toolchains@vger.kernel.org

From the Clang documentation [1]:

  "On X86-64 and AArch64 targets, this attribute changes the calling
  convention of a function. The preserve_most calling convention attempts to
  make the code in the caller as unintrusive as possible. This convention
  behaves identically to the C calling convention on how arguments and return
  values are passed, but it uses a different set of caller/callee-saved
  registers. This alleviates the burden of saving and recovering a large
  register set before and after the call in the caller. If the arguments are
  passed in callee-saved registers, then they will be preserved by the callee
  across the call. This doesn't apply for values returned in callee-saved
  registers.

  * On X86-64 the callee preserves all general purpose registers, except for
    R11. R11 can be used as a scratch register. Floating-point registers
    (XMMs/YMMs) are not preserved and need to be saved by the caller.

  * On AArch64 the callee preserves all general purpose registers, except
    X0-X8 and X16-X18."

[1] https://clang.llvm.org/docs/AttributeReference.html#preserve-most

Introduce the attribute to compiler_types.h as __preserve_most.

Use of this attribute results in better code generation for calls to very
rarely called functions, such as error-reporting functions or rarely
executed slow paths.

Beware that the attribute conflicts with instrumentation calls inserted on
function entry which do not use __preserve_most themselves -- notably
function tracing, which assumes the normal C calling convention for the
given architecture. Where the attribute is supported, __preserve_most will
imply notrace. It is recommended to restrict use of the attribute to
functions that should or already disable tracing.

The attribute may be supported by a future GCC version (see
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110899).

Signed-off-by: Marco Elver
Reviewed-by: Miguel Ojeda
Acked-by: Steven Rostedt (Google)
Acked-by: Mark Rutland
---
v3:
* Quote more from the LLVM documentation about which registers are
  callee/caller-saved with preserve_most.
* Add a code comment restricting use to functions where tracing is meant to
  be disabled.
v2:
* Imply notrace, to avoid any conflicts with tracing which is inserted on
  function entry. See added comments.
---
 include/linux/compiler_types.h | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 547ea1ff806e..c88488715a39 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -106,6 +106,34 @@ static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
 #define __cold
 #endif
 
+/*
+ * On x86-64 and arm64 targets, __preserve_most changes the calling convention
+ * of a function to make the code in the caller as unintrusive as possible. This
+ * convention behaves identically to the C calling convention on how arguments
+ * and return values are passed, but uses a different set of caller- and callee-
+ * saved registers.
+ *
+ * The purpose is to alleviate the burden of saving and recovering a large
+ * register set before and after the call in the caller. This is beneficial for
+ * rarely taken slow paths, such as error-reporting functions that may be called
+ * from hot paths.
+ *
+ * Note: This may conflict with instrumentation inserted on function entry which
+ * does not use __preserve_most or equivalent convention (if in assembly). Since
+ * function tracing assumes the normal C calling convention, where the attribute
+ * is supported, __preserve_most implies notrace. It is recommended to restrict
+ * use of the attribute to functions that should or already disable tracing.
+ *
+ * Optional: not supported by gcc.
+ *
+ * clang: https://clang.llvm.org/docs/AttributeReference.html#preserve-most
+ */
+#if __has_attribute(__preserve_most__)
+# define __preserve_most notrace __attribute__((__preserve_most__))
+#else
+# define __preserve_most
+#endif
+
 /* Builtins */
 
 /*
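To make the intended use concrete, here is a minimal sketch of a caller/callee
pair (illustrative only, not part of the patch; the function names and the
pr_warn()-based body are made up): the rarely called reporting helper is
marked __preserve_most, so the hot-path caller no longer has to spill and
reload its caller-saved registers around the call.

#include <linux/compiler.h>
#include <linux/printk.h>

/*
 * Hypothetical, rarely called slow path. With __preserve_most the callee
 * saves and restores almost all general purpose registers itself; where the
 * attribute is unsupported this falls back to the normal C convention.
 * Note that __preserve_most also implies notrace.
 */
static void __cold __preserve_most report_bad_state(int code)
{
	pr_warn("unexpected state: %d\n", code);
}

/*
 * Hot path: the unlikely() call to the reporting helper no longer forces
 * the compiler to treat the caller-saved registers as clobbered here.
 */
static inline int fast_op(int state)
{
	if (unlikely(state < 0))
		report_bad_state(state);
	return state + 1;
}

As the commit message notes, this only pays off when the callee really is
cold; on a frequently taken path the extra register saves inside the callee
would outweigh the savings in the caller.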
From patchwork Tue Aug 8 10:17:26 2023
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 13345925
Date: Tue, 8 Aug 2023 12:17:26 +0200
In-Reply-To: <20230808102049.465864-1-elver@google.com>
References: <20230808102049.465864-1-elver@google.com>
Message-ID: <20230808102049.465864-2-elver@google.com>
Subject: [PATCH v3 2/3] list_debug: Introduce inline wrappers for debug checks
From: Marco Elver
To: elver@google.com, Andrew Morton, Kees Cook
Cc: Guenter Roeck, Peter Zijlstra, Mark Rutland, Steven Rostedt, Marc Zyngier,
    Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
    Will Deacon, Nathan Chancellor, Nick Desaulniers, Tom Rix, Miguel Ojeda,
    Sami Tolvanen, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, llvm@lists.linux.dev, Dmitry Vyukov,
    Alexander Potapenko, kasan-dev@googlegroups.com, linux-toolchains@vger.kernel.org

Turn the list debug checking functions __list_*_valid() into inline functions
that wrap the out-of-line functions.
Care is taken to ensure the inline wrappers are always inlined, so that
additional compiler instrumentation (such as sanitizers) does not result in
redundant outlining.

This change is preparation for performing checks in the inline wrappers.

No functional change intended.

Signed-off-by: Marco Elver
---
v3:
* Rename ___list_*_valid() to __list_*_valid_or_report().
* Some documentation.
---
 arch/arm64/kvm/hyp/nvhe/list_debug.c |  6 ++---
 include/linux/list.h                 | 37 +++++++++++++++++++++++++---
 lib/list_debug.c                     | 11 ++++-----
 3 files changed, 41 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/list_debug.c b/arch/arm64/kvm/hyp/nvhe/list_debug.c
index d68abd7ea124..16266a939a4c 100644
--- a/arch/arm64/kvm/hyp/nvhe/list_debug.c
+++ b/arch/arm64/kvm/hyp/nvhe/list_debug.c
@@ -26,8 +26,8 @@ static inline __must_check bool nvhe_check_data_corruption(bool v)
 
 /* The predicates checked here are taken from lib/list_debug.c. */
 
-bool __list_add_valid(struct list_head *new, struct list_head *prev,
-		      struct list_head *next)
+bool __list_add_valid_or_report(struct list_head *new, struct list_head *prev,
+				struct list_head *next)
 {
 	if (NVHE_CHECK_DATA_CORRUPTION(next->prev != prev) ||
 	    NVHE_CHECK_DATA_CORRUPTION(prev->next != next) ||
@@ -37,7 +37,7 @@ bool __list_add_valid(struct list_head *new, struct list_head *prev,
 	return true;
 }
 
-bool __list_del_entry_valid(struct list_head *entry)
+bool __list_del_entry_valid_or_report(struct list_head *entry)
 {
 	struct list_head *prev, *next;
 
diff --git a/include/linux/list.h b/include/linux/list.h
index f10344dbad4d..130c6a1bb45c 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -39,10 +39,39 @@ static inline void INIT_LIST_HEAD(struct list_head *list)
 }
 
 #ifdef CONFIG_DEBUG_LIST
-extern bool __list_add_valid(struct list_head *new,
-			     struct list_head *prev,
-			     struct list_head *next);
-extern bool __list_del_entry_valid(struct list_head *entry);
+/*
+ * Performs the full set of list corruption checks before __list_add().
+ * On list corruption reports a warning, and returns false.
+ */
+extern bool __list_add_valid_or_report(struct list_head *new,
+				       struct list_head *prev,
+				       struct list_head *next);
+
+/*
+ * Performs list corruption checks before __list_add(). Returns false if a
+ * corruption is detected, true otherwise.
+ */
+static __always_inline bool __list_add_valid(struct list_head *new,
+					     struct list_head *prev,
+					     struct list_head *next)
+{
+	return __list_add_valid_or_report(new, prev, next);
+}
+
+/*
+ * Performs the full set of list corruption checks before __list_del_entry().
+ * On list corruption reports a warning, and returns false.
+ */
+extern bool __list_del_entry_valid_or_report(struct list_head *entry);
+
+/*
+ * Performs list corruption checks before __list_del_entry(). Returns false
+ * if a corruption is detected, true otherwise.
+ */
+static __always_inline bool __list_del_entry_valid(struct list_head *entry)
+{
+	return __list_del_entry_valid_or_report(entry);
+}
 #else
 static inline bool __list_add_valid(struct list_head *new,
 				    struct list_head *prev,
diff --git a/lib/list_debug.c b/lib/list_debug.c
index d98d43f80958..2def33b1491f 100644
--- a/lib/list_debug.c
+++ b/lib/list_debug.c
@@ -17,8 +17,8 @@
  * attempt).
  */
 
-bool __list_add_valid(struct list_head *new, struct list_head *prev,
-		      struct list_head *next)
+bool __list_add_valid_or_report(struct list_head *new, struct list_head *prev,
+				struct list_head *next)
 {
 	if (CHECK_DATA_CORRUPTION(prev == NULL,
 			"list_add corruption. prev is NULL.\n") ||
@@ -37,9 +37,9 @@ bool __list_add_valid(struct list_head *new, struct list_head *prev,
 
 	return true;
 }
-EXPORT_SYMBOL(__list_add_valid);
+EXPORT_SYMBOL(__list_add_valid_or_report);
 
-bool __list_del_entry_valid(struct list_head *entry)
+bool __list_del_entry_valid_or_report(struct list_head *entry)
 {
 	struct list_head *prev, *next;
 
@@ -65,6 +65,5 @@ bool __list_del_entry_valid(struct list_head *entry)
 		return false;
 
 	return true;
-
 }
-EXPORT_SYMBOL(__list_del_entry_valid);
+EXPORT_SYMBOL(__list_del_entry_valid_or_report);
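For context, the wrappers introduced above are consumed by the list
primitives in include/linux/list.h roughly as follows (abridged sketch of the
existing caller, which this patch does not change): the check runs at the top
of __list_add(), which is why keeping the wrapper __always_inline avoids a
redundant extra call in instrumented builds and prepares for the inline
checks added in the next patch.

static inline void __list_add(struct list_head *new,
			      struct list_head *prev,
			      struct list_head *next)
{
	/* Bail out before linking the new entry if the neighbours look corrupted. */
	if (!__list_add_valid(new, prev, next))
		return;

	next->prev = new;
	new->next = next;
	new->prev = prev;
	prev->next = new;
}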
From patchwork Tue Aug 8 10:17:27 2023
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 13345926
Date: Tue, 8 Aug 2023 12:17:27 +0200
In-Reply-To: <20230808102049.465864-1-elver@google.com>
References: <20230808102049.465864-1-elver@google.com>
Message-ID: <20230808102049.465864-3-elver@google.com>
Subject: [PATCH v3 3/3] list_debug: Introduce CONFIG_DEBUG_LIST_MINIMAL
From: Marco Elver
To: elver@google.com, Andrew Morton, Kees Cook
Cc: Guenter Roeck, Peter Zijlstra, Mark Rutland, Steven Rostedt, Marc Zyngier,
    Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
    Will Deacon, Nathan Chancellor, Nick Desaulniers, Tom Rix, Miguel Ojeda,
    Sami Tolvanen, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, llvm@lists.linux.dev, Dmitry Vyukov,
    Alexander Potapenko, kasan-dev@googlegroups.com, linux-toolchains@vger.kernel.org

Numerous production kernel configs (see [1, 2]) choose to enable
CONFIG_DEBUG_LIST, which is also recommended by KSPP for hardened configs [3].
The feature was never designed with performance in mind, yet common list
manipulations happen in hot paths all over the kernel.

Introduce CONFIG_DEBUG_LIST_MINIMAL, which performs the list pointer checks
inline and only delegates to the reporting slow path upon list corruption.

To generate optimal machine code with CONFIG_DEBUG_LIST_MINIMAL:

1. Elide checking for pointer values which upon dereference would result in
   an immediate access fault -- therefore "minimal" checks. The trade-off is
   lower-quality error reports.

2. Use the newly introduced __preserve_most function attribute (available
   with Clang, but not yet with GCC) to minimize the code footprint for
   calling the reporting slow path. As a result, function size of callers is
   reduced by avoiding saving registers before calling the rarely called
   reporting slow path.

   Note that all TUs in lib/Makefile already disable function tracing,
   including list_debug.c, so __preserve_most's implied notrace has no effect
   here.

3. Because the inline checks are a subset of the full set of checks in
   __list_*_valid_or_report(), always return false if the inline checks
   failed. This avoids a redundant compare and conditional branch right after
   the return from the slow path.
As a side-effect of the checks being inline, if the compiler can prove some
condition to always be true, it can completely elide some checks.

Running netperf with CONFIG_DEBUG_LIST_MINIMAL (using a Clang compiler with
"preserve_most") shows throughput improvements, in my case of ~7% on average
(up to 20-30% on some test cases).

Link: https://r.android.com/1266735 [1]
Link: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/main/config [2]
Link: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings [3]
Signed-off-by: Marco Elver
---
v3:
* Rename ___list_*_valid() to __list_*_valid_or_report().
* More comments.

v2:
* Note that lib/Makefile disables function tracing for everything, so
  __preserve_most's implied notrace is a noop here.
---
 arch/arm64/kvm/hyp/nvhe/list_debug.c |  2 +
 include/linux/list.h                 | 64 +++++++++++++++++++++++++---
 lib/Kconfig.debug                    | 15 +++++++
 lib/list_debug.c                     |  2 +
 4 files changed, 77 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/list_debug.c b/arch/arm64/kvm/hyp/nvhe/list_debug.c
index 16266a939a4c..46a2d4f2b3c6 100644
--- a/arch/arm64/kvm/hyp/nvhe/list_debug.c
+++ b/arch/arm64/kvm/hyp/nvhe/list_debug.c
@@ -26,6 +26,7 @@ static inline __must_check bool nvhe_check_data_corruption(bool v)
 
 /* The predicates checked here are taken from lib/list_debug.c. */
 
+__list_valid_slowpath
 bool __list_add_valid_or_report(struct list_head *new, struct list_head *prev,
 				struct list_head *next)
 {
@@ -37,6 +38,7 @@ bool __list_add_valid_or_report(struct list_head *new, struct list_head *prev,
 	return true;
 }
 
+__list_valid_slowpath
 bool __list_del_entry_valid_or_report(struct list_head *entry)
 {
 	struct list_head *prev, *next;
diff --git a/include/linux/list.h b/include/linux/list.h
index 130c6a1bb45c..066fe33e99bf 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -39,38 +39,90 @@ static inline void INIT_LIST_HEAD(struct list_head *list)
 }
 
 #ifdef CONFIG_DEBUG_LIST
+
+#ifdef CONFIG_DEBUG_LIST_MINIMAL
+# define __list_valid_slowpath __cold __preserve_most
+#else
+# define __list_valid_slowpath
+#endif
+
 /*
  * Performs the full set of list corruption checks before __list_add().
  * On list corruption reports a warning, and returns false.
  */
-extern bool __list_add_valid_or_report(struct list_head *new,
-				       struct list_head *prev,
-				       struct list_head *next);
+extern bool __list_valid_slowpath __list_add_valid_or_report(struct list_head *new,
+							      struct list_head *prev,
+							      struct list_head *next);
 
 /*
  * Performs list corruption checks before __list_add(). Returns false if a
  * corruption is detected, true otherwise.
+ *
+ * With CONFIG_DEBUG_LIST_MINIMAL set, performs minimal list integrity checks
+ * (that do not result in a fault) inline, and only if a corruption is
+ * detected calls the reporting function __list_add_valid_or_report().
  */
 static __always_inline bool __list_add_valid(struct list_head *new,
 					     struct list_head *prev,
 					     struct list_head *next)
 {
-	return __list_add_valid_or_report(new, prev, next);
+	bool ret = true;
+
+	if (IS_ENABLED(CONFIG_DEBUG_LIST_MINIMAL)) {
+		/*
+		 * In the minimal config, elide checking if next and prev are
+		 * NULL, since the immediate dereference of them below would
+		 * result in a fault if NULL.
+		 *
+		 * With the minimal config we can afford to inline the checks,
+		 * which also gives the compiler a chance to elide some of them
+		 * completely if they can be proven at compile-time. If one of
+		 * the pre-conditions does not hold, the slow-path will show a
+		 * report which pre-condition failed.
+		 */
+		if (likely(next->prev == prev && prev->next == next && new != prev && new != next))
+			return true;
+		ret = false;
+	}
+
+	ret &= __list_add_valid_or_report(new, prev, next);
+	return ret;
 }
 
 /*
  * Performs the full set of list corruption checks before __list_del_entry().
  * On list corruption reports a warning, and returns false.
  */
-extern bool __list_del_entry_valid_or_report(struct list_head *entry);
+extern bool __list_valid_slowpath __list_del_entry_valid_or_report(struct list_head *entry);
 
 /*
  * Performs list corruption checks before __list_del_entry(). Returns false
  * if a corruption is detected, true otherwise.
+ *
+ * With CONFIG_DEBUG_LIST_MINIMAL set, performs minimal list integrity checks
+ * (that do not result in a fault) inline, and only if a corruption is
+ * detected calls the reporting function __list_del_entry_valid_or_report().
  */
 static __always_inline bool __list_del_entry_valid(struct list_head *entry)
 {
-	return __list_del_entry_valid_or_report(entry);
+	bool ret = true;
+
+	if (IS_ENABLED(CONFIG_DEBUG_LIST_MINIMAL)) {
+		struct list_head *prev = entry->prev;
+		struct list_head *next = entry->next;
+
+		/*
+		 * In the minimal config, elide checking if next and prev are
+		 * NULL, LIST_POISON1 or LIST_POISON2, since the immediate
+		 * dereference of them below would result in a fault.
+		 */
+		if (likely(prev->next == entry && next->prev == entry))
+			return true;
+		ret = false;
+	}
+
+	ret &= __list_del_entry_valid_or_report(entry);
+	return ret;
 }
 #else
 static inline bool __list_add_valid(struct list_head *new,
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index fbc89baf7de6..e72cf08af0fa 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1680,6 +1680,21 @@ config DEBUG_LIST
 
 	  If unsure, say N.
 
+config DEBUG_LIST_MINIMAL
+	bool "Minimal linked list debug checks"
+	default !DEBUG_KERNEL
+	depends on DEBUG_LIST
+	help
+	  Only perform the minimal set of checks in the linked-list walking
+	  routines to catch corruptions that are not guaranteed to result in
+	  an immediate access fault.
+
+	  This trades lower-quality error reports for improved performance:
+	  the generated code should be more optimal and provide trade-offs
+	  that may better serve safety- and performance-critical environments.
+
+	  If unsure, say Y.
+
 config DEBUG_PLIST
 	bool "Debug priority linked list manipulation"
 	depends on DEBUG_KERNEL
diff --git a/lib/list_debug.c b/lib/list_debug.c
index 2def33b1491f..0ff547910dd0 100644
--- a/lib/list_debug.c
+++ b/lib/list_debug.c
@@ -17,6 +17,7 @@
  * attempt).
  */
 
+__list_valid_slowpath
 bool __list_add_valid_or_report(struct list_head *new, struct list_head *prev,
 				struct list_head *next)
 {
@@ -39,6 +40,7 @@ bool __list_add_valid_or_report(struct list_head *new, struct list_head *prev,
 }
 EXPORT_SYMBOL(__list_add_valid_or_report);
 
+__list_valid_slowpath
 bool __list_del_entry_valid_or_report(struct list_head *entry)
 {
 	struct list_head *prev, *next;
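As a usage note (not part of the patch): with this series applied, a config
that wants the cheaper checks enables both options, e.g. in a config
fragment:

  CONFIG_DEBUG_LIST=y
  CONFIG_DEBUG_LIST_MINIMAL=y

Because of "default !DEBUG_KERNEL" above, the minimal variant is also what a
production (non-DEBUG_KERNEL) build gets by default once DEBUG_LIST is
enabled, while debug kernels keep the full out-of-line checks unless the
option is selected explicitly.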