From patchwork Wed Aug 2 15:06:37 2023
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 13338321
Date: Wed, 2 Aug 2023 17:06:37 +0200
Message-ID: <20230802150712.3583252-1-elver@google.com>
Subject: [PATCH 1/3] Compiler attributes: Introduce the __preserve_most function attribute
From: Marco Elver
To: elver@google.com, Andrew Morton, Kees Cook
Cc: Guenter Roeck, Marc Zyngier, Oliver Upton, James Morse,
    Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon,
    Miguel Ojeda, Nick Desaulniers, Nathan Chancellor, Tom Rix,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, llvm@lists.linux.dev,
    Dmitry Vyukov, Alexander Potapenko, kasan-dev@googlegroups.com,
    linux-toolchains@vger.kernel.org

[1]: "On X86-64 and AArch64 targets, this attribute changes the calling
convention of a function. The preserve_most calling convention attempts
to make the code in the caller as unintrusive as possible. This
convention behaves identically to the C calling convention on how
arguments and return values are passed, but it uses a different set of
caller/callee-saved registers. This alleviates the burden of saving and
recovering a large register set before and after the call in the
caller."

[1] https://clang.llvm.org/docs/AttributeReference.html#preserve-most

Use of this attribute results in better code generation for calls to
very rarely called functions, such as error-reporting functions, or
rarely executed slow paths.

Introduce the attribute to compiler_attributes.h.
Signed-off-by: Marco Elver
---
 include/linux/compiler_attributes.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
index 00efa35c350f..615a63ecfcf6 100644
--- a/include/linux/compiler_attributes.h
+++ b/include/linux/compiler_attributes.h
@@ -321,6 +321,17 @@
 # define __pass_object_size(type)
 #endif
 
+/*
+ * Optional: not supported by gcc.
+ *
+ * clang: https://clang.llvm.org/docs/AttributeReference.html#preserve-most
+ */
+#if __has_attribute(__preserve_most__)
+# define __preserve_most __attribute__((__preserve_most__))
+#else
+# define __preserve_most
+#endif
+
 /*
  * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-pure-function-attribute
  */

From patchwork Wed Aug 2 15:06:38 2023
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 13338322
Date: Wed, 2 Aug 2023 17:06:38 +0200
In-Reply-To: <20230802150712.3583252-1-elver@google.com>
References: <20230802150712.3583252-1-elver@google.com>
Message-ID: <20230802150712.3583252-2-elver@google.com>
Subject: [PATCH 2/3] list_debug: Introduce inline wrappers for debug checks
From: Marco Elver
To: elver@google.com, Andrew Morton, Kees Cook

Turn the list debug checking functions __list_*_valid() into inline
functions that wrap the
out-of-line functions. Care is taken to ensure the inline wrappers are
always inlined, so that additional compiler instrumentation (such as
sanitizers) does not result in redundant outlining.

This change is preparation for performing checks in the inline wrappers.

No functional change intended.

Signed-off-by: Marco Elver
---
 arch/arm64/kvm/hyp/nvhe/list_debug.c |  6 +++---
 include/linux/list.h                 | 15 +++++++++++++--
 lib/list_debug.c                     | 11 +++++------
 3 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/list_debug.c b/arch/arm64/kvm/hyp/nvhe/list_debug.c
index d68abd7ea124..589284496ac5 100644
--- a/arch/arm64/kvm/hyp/nvhe/list_debug.c
+++ b/arch/arm64/kvm/hyp/nvhe/list_debug.c
@@ -26,8 +26,8 @@ static inline __must_check bool nvhe_check_data_corruption(bool v)
 
 /* The predicates checked here are taken from lib/list_debug.c. */
 
-bool __list_add_valid(struct list_head *new, struct list_head *prev,
-		      struct list_head *next)
+bool ___list_add_valid(struct list_head *new, struct list_head *prev,
+		       struct list_head *next)
 {
 	if (NVHE_CHECK_DATA_CORRUPTION(next->prev != prev) ||
 	    NVHE_CHECK_DATA_CORRUPTION(prev->next != next) ||
@@ -37,7 +37,7 @@ bool __list_add_valid(struct list_head *new, struct list_head *prev,
 	return true;
 }
 
-bool __list_del_entry_valid(struct list_head *entry)
+bool ___list_del_entry_valid(struct list_head *entry)
 {
 	struct list_head *prev, *next;
 
diff --git a/include/linux/list.h b/include/linux/list.h
index f10344dbad4d..e0b2cf904409 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -39,10 +39,21 @@ static inline void INIT_LIST_HEAD(struct list_head *list)
 }
 
 #ifdef CONFIG_DEBUG_LIST
-extern bool __list_add_valid(struct list_head *new,
+extern bool ___list_add_valid(struct list_head *new,
 			      struct list_head *prev,
 			      struct list_head *next);
-extern bool __list_del_entry_valid(struct list_head *entry);
+static __always_inline bool __list_add_valid(struct list_head *new,
+					     struct list_head *prev,
+					     struct list_head *next)
+{
+	return ___list_add_valid(new, prev, next);
+}
+
+extern bool ___list_del_entry_valid(struct list_head *entry);
+static __always_inline bool __list_del_entry_valid(struct list_head *entry)
+{
+	return ___list_del_entry_valid(entry);
+}
 #else
 static inline bool __list_add_valid(struct list_head *new,
 				    struct list_head *prev,
diff --git a/lib/list_debug.c b/lib/list_debug.c
index d98d43f80958..fd69009cc696 100644
--- a/lib/list_debug.c
+++ b/lib/list_debug.c
@@ -17,8 +17,8 @@
  * attempt).
  */
 
-bool __list_add_valid(struct list_head *new, struct list_head *prev,
-		      struct list_head *next)
+bool ___list_add_valid(struct list_head *new, struct list_head *prev,
+		       struct list_head *next)
 {
 	if (CHECK_DATA_CORRUPTION(prev == NULL,
 			"list_add corruption. prev is NULL.\n") ||
@@ -37,9 +37,9 @@ bool __list_add_valid(struct list_head *new, struct list_head *prev,
 	return true;
 }
-EXPORT_SYMBOL(__list_add_valid);
+EXPORT_SYMBOL(___list_add_valid);
 
-bool __list_del_entry_valid(struct list_head *entry)
+bool ___list_del_entry_valid(struct list_head *entry)
 {
 	struct list_head *prev, *next;
 
@@ -65,6 +65,5 @@ bool __list_del_entry_valid(struct list_head *entry)
 		return false;
 
 	return true;
-
 }
-EXPORT_SYMBOL(__list_del_entry_valid);
+EXPORT_SYMBOL(___list_del_entry_valid);

From patchwork Wed Aug 2 15:06:39 2023
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 13338323
Date: Wed, 2 Aug 2023 17:06:39 +0200
In-Reply-To: <20230802150712.3583252-1-elver@google.com>
References: <20230802150712.3583252-1-elver@google.com>
Message-ID: <20230802150712.3583252-3-elver@google.com>
Subject: [PATCH 3/3] list_debug: Introduce CONFIG_DEBUG_LIST_MINIMAL
From: Marco Elver
To: elver@google.com, Andrew Morton, Kees Cook
Numerous production kernel configs (see [1, 2]) are choosing to enable
CONFIG_DEBUG_LIST, which is also being recommended by KSPP for hardened
configs [3]. The feature has never been designed with performance in
mind, yet common list manipulation is happening across hot paths all
over the kernel.

Introduce CONFIG_DEBUG_LIST_MINIMAL, which performs list pointer
checking inline, and only upon list corruption delegates to the
reporting slow path.

To generate optimal machine code with CONFIG_DEBUG_LIST_MINIMAL:

  1. Elide checking for pointer values which upon dereference would
     result in an immediate access fault -- therefore "minimal" checks.
     The trade-off is lower-quality error reports.

  2. Use the newly introduced __preserve_most function attribute
     (available with Clang, but not yet with GCC) to minimize the code
     footprint for calling the reporting slow path. As a result,
     function size of callers is reduced by avoiding saving registers
     before calling the rarely called reporting slow path.

  3. Because the inline checks are a subset of the full set of checks in
     ___list_*_valid(), always return false if the inline checks failed.
     This avoids a redundant compare and conditional branch right after
     returning from the slow path.

As a side-effect of the checks being inline, if the compiler can prove
some condition to always be true, it can completely elide some checks.

Running netperf with CONFIG_DEBUG_LIST_MINIMAL (using a Clang compiler
with "preserve_most") shows throughput improvements, in my case of ~7%
on average (up to 20-30% on some test cases).
Link: https://r.android.com/1266735 [1]
Link: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/main/config [2]
Link: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings [3]
Signed-off-by: Marco Elver
---
 arch/arm64/kvm/hyp/nvhe/list_debug.c |  2 +
 include/linux/list.h                 | 56 +++++++++++++++++++++++++---
 lib/Kconfig.debug                    | 15 ++++++++
 lib/list_debug.c                     |  2 +
 4 files changed, 69 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/list_debug.c b/arch/arm64/kvm/hyp/nvhe/list_debug.c
index 589284496ac5..df718e29f6d4 100644
--- a/arch/arm64/kvm/hyp/nvhe/list_debug.c
+++ b/arch/arm64/kvm/hyp/nvhe/list_debug.c
@@ -26,6 +26,7 @@ static inline __must_check bool nvhe_check_data_corruption(bool v)
 
 /* The predicates checked here are taken from lib/list_debug.c. */
 
+__list_valid_slowpath
 bool ___list_add_valid(struct list_head *new, struct list_head *prev,
 		       struct list_head *next)
 {
@@ -37,6 +38,7 @@ bool ___list_add_valid(struct list_head *new, struct list_head *prev,
 	return true;
 }
 
+__list_valid_slowpath
 bool ___list_del_entry_valid(struct list_head *entry)
 {
 	struct list_head *prev, *next;
diff --git a/include/linux/list.h b/include/linux/list.h
index e0b2cf904409..a28a215a3eb1 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -39,20 +39,64 @@ static inline void INIT_LIST_HEAD(struct list_head *list)
 }
 
 #ifdef CONFIG_DEBUG_LIST
-extern bool ___list_add_valid(struct list_head *new,
+
+#ifdef CONFIG_DEBUG_LIST_MINIMAL
+# define __list_valid_slowpath __cold __preserve_most
+#else
+# define __list_valid_slowpath
+#endif
+
+extern bool __list_valid_slowpath ___list_add_valid(struct list_head *new,
 			      struct list_head *prev,
 			      struct list_head *next);
 static __always_inline bool __list_add_valid(struct list_head *new,
 					     struct list_head *prev,
 					     struct list_head *next)
 {
-	return ___list_add_valid(new, prev, next);
+	bool ret = true;
+
+	if (IS_ENABLED(CONFIG_DEBUG_LIST_MINIMAL)) {
+		/*
+		 * In the minimal config, elide checking if next and prev are
+		 * NULL, since the immediate dereference of them below would
+		 * result in a fault if NULL.
+		 *
+		 * With the minimal config we can afford to inline the checks,
+		 * which also gives the compiler a chance to elide some of them
+		 * completely if they can be proven at compile-time. If one of
+		 * the pre-conditions does not hold, the slow-path will show a
+		 * report which pre-condition failed.
+		 */
+		if (likely(next->prev == prev && prev->next == next && new != prev && new != next))
+			return true;
+		ret = false;
+	}
+
+	ret &= ___list_add_valid(new, prev, next);
+	return ret;
 }
 
-extern bool ___list_del_entry_valid(struct list_head *entry);
+extern bool __list_valid_slowpath ___list_del_entry_valid(struct list_head *entry);
 static __always_inline bool __list_del_entry_valid(struct list_head *entry)
 {
-	return ___list_del_entry_valid(entry);
+	bool ret = true;
+
+	if (IS_ENABLED(CONFIG_DEBUG_LIST_MINIMAL)) {
+		struct list_head *prev = entry->prev;
+		struct list_head *next = entry->next;
+
+		/*
+		 * In the minimal config, elide checking if next and prev are
+		 * NULL, LIST_POISON1 or LIST_POISON2, since the immediate
+		 * dereference of them below would result in a fault.
+		 */
+		if (likely(prev->next == entry && next->prev == entry))
+			return true;
+		ret = false;
+	}
+
+	ret &= ___list_del_entry_valid(entry);
+	return ret;
 }
 #else
 static inline bool __list_add_valid(struct list_head *new,
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index fbc89baf7de6..e72cf08af0fa 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1680,6 +1680,21 @@ config DEBUG_LIST
 
 	  If unsure, say N.
 
+config DEBUG_LIST_MINIMAL
+	bool "Minimal linked list debug checks"
+	default !DEBUG_KERNEL
+	depends on DEBUG_LIST
+	help
+	  Only perform the minimal set of checks in the linked-list walking
+	  routines to catch corruptions that are not guaranteed to result in an
+	  immediate access fault.
+
+	  This trades lower quality error reports for improved performance: the
+	  generated code should be more optimal and provide trade-offs that may
+	  better serve safety- and performance-critical environments.
+
+	  If unsure, say Y.
+
 config DEBUG_PLIST
 	bool "Debug priority linked list manipulation"
 	depends on DEBUG_KERNEL
diff --git a/lib/list_debug.c b/lib/list_debug.c
index fd69009cc696..daad32855f0d 100644
--- a/lib/list_debug.c
+++ b/lib/list_debug.c
@@ -17,6 +17,7 @@
  * attempt).
  */
 
+__list_valid_slowpath
 bool ___list_add_valid(struct list_head *new, struct list_head *prev,
 		       struct list_head *next)
 {
@@ -39,6 +40,7 @@ bool ___list_add_valid(struct list_head *new, struct list_head *prev,
 }
 EXPORT_SYMBOL(___list_add_valid);
 
+__list_valid_slowpath
 bool ___list_del_entry_valid(struct list_head *entry)
 {
 	struct list_head *prev, *next;