From patchwork Fri Aug 11 15:18:38 2023
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 13351009
Date: Fri, 11 Aug 2023 17:18:38 +0200
Message-ID: <20230811151847.1594958-1-elver@google.com>
Subject: [PATCH v4 1/4] compiler_types: Introduce the Clang __preserve_most function attribute
From: Marco Elver
To: elver@google.com, Andrew Morton, Kees Cook
Cc: Guenter Roeck, Peter Zijlstra, Mark Rutland, Steven Rostedt,
 Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon, Arnd Bergmann, Greg Kroah-Hartman,
 Paul Moore, James Morris, "Serge E. Hallyn", Nathan Chancellor,
 Nick Desaulniers, Tom Rix, Miguel Ojeda, Sami Tolvanen,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org,
 llvm@lists.linux.dev, Dmitry Vyukov, Alexander Potapenko,
 kasan-dev@googlegroups.com, linux-toolchains@vger.kernel.org

[1]: "On X86-64 and AArch64 targets, this attribute changes the calling
convention of a function. The preserve_most calling convention attempts
to make the code in the caller as unintrusive as possible. This
convention behaves identically to the C calling convention on how
arguments and return values are passed, but it uses a different set of
caller/callee-saved registers. This alleviates the burden of saving and
recovering a large register set before and after the call in the caller.
If the arguments are passed in callee-saved registers, then they will be
preserved by the callee across the call. This doesn't apply for values
returned in callee-saved registers.

 * On X86-64 the callee preserves all general purpose registers, except
   for R11. R11 can be used as a scratch register. Floating-point
   registers (XMMs/YMMs) are not preserved and need to be saved by the
   caller.

 * On AArch64 the callee preserve all general purpose registers, except
   X0-X8 and X16-X18."

[1] https://clang.llvm.org/docs/AttributeReference.html#preserve-most

Introduce the attribute to compiler_types.h as __preserve_most. Use of
this attribute results in better code generation for calls to very
rarely called functions, such as error-reporting functions, or rarely
executed slow paths.

Beware that the attribute conflicts with instrumentation calls inserted
on function entry which do not use __preserve_most themselves. Notably,
this includes function tracing, which assumes the normal C calling
convention for the given architecture. Where the attribute is supported,
__preserve_most will imply notrace. It is recommended to restrict use of
the attribute to functions that should or already disable tracing.
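For illustration only (this sketch is not part of the patch, and the
helper below is hypothetical), the intended usage pattern is a cold,
rarely called function invoked from hot code:

	#include <linux/printk.h>	/* pr_err() */

	/*
	 * Hypothetical example: report_check_failure() lives on a rarely
	 * taken slow path. Declaring it __preserve_most lets hot-path
	 * callers avoid spilling and reloading most caller-saved registers
	 * around the call; where the attribute is unavailable,
	 * __preserve_most expands to nothing and the normal C calling
	 * convention applies.
	 */
	void __preserve_most __cold report_check_failure(const void *addr)
	{
		pr_err("consistency check failed at %px\n", addr);
	}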
Note: The additional preprocessor check against architecture should not
be necessary if __has_attribute() only returns true where supported;
also see https://github.com/ClangBuiltLinux/linux/issues/1908. But until
__has_attribute() does the right thing, we also guard by known-supported
architectures to avoid build warnings on other architectures.

The attribute may be supported by a future GCC version (see
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110899).

Signed-off-by: Marco Elver
Reviewed-by: Miguel Ojeda
Reviewed-by: Nick Desaulniers
Acked-by: Steven Rostedt (Google)
Acked-by: Mark Rutland
---
v4:
* Guard attribute based on known-supported architectures to avoid
  compiler warnings about the attribute being ignored.

v3:
* Quote more from LLVM documentation about which registers are
  callee/caller with preserve_most.
* Code comment to restrict use where tracing is meant to be disabled.

v2:
* Imply notrace, to avoid any conflicts with tracing which is inserted
  on function entry. See added comments.
---
 include/linux/compiler_types.h | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 547ea1ff806e..c523c6683789 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -106,6 +106,34 @@ static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
 #define __cold
 #endif
 
+/*
+ * On x86-64 and arm64 targets, __preserve_most changes the calling convention
+ * of a function to make the code in the caller as unintrusive as possible. This
+ * convention behaves identically to the C calling convention on how arguments
+ * and return values are passed, but uses a different set of caller- and callee-
+ * saved registers.
+ *
+ * The purpose is to alleviate the burden of saving and recovering a large
+ * register set before and after the call in the caller. This is beneficial for
+ * rarely taken slow paths, such as error-reporting functions that may be called
+ * from hot paths.
+ *
+ * Note: This may conflict with instrumentation inserted on function entry which
+ * does not use __preserve_most or equivalent convention (if in assembly). Since
+ * function tracing assumes the normal C calling convention, where the attribute
+ * is supported, __preserve_most implies notrace. It is recommended to restrict
+ * use of the attribute to functions that should or already disable tracing.
+ *
+ * Optional: not supported by gcc.
+ *
+ * clang: https://clang.llvm.org/docs/AttributeReference.html#preserve-most
+ */
+#if __has_attribute(__preserve_most__) && (defined(CONFIG_X86_64) || defined(CONFIG_ARM64))
+# define __preserve_most notrace __attribute__((__preserve_most__))
+#else
+# define __preserve_most
+#endif
+
 /* Builtins */
 
 /*
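
To show the caller-side effect (again an illustrative sketch, not part
of the patch; check_canary() and CANARY_MAGIC are made up here), a
hot-path check that only calls the annotated helper on failure could
look like this:

	#include <linux/compiler.h>	/* likely() */

	#define CANARY_MAGIC 0xdeadbeefUL	/* hypothetical marker value */

	/* Hypothetical cold helper, as in the earlier sketch. */
	void __preserve_most __cold report_check_failure(const void *addr);

	static inline void check_canary(unsigned long canary, const void *addr)
	{
		if (likely(canary == CANARY_MAGIC))
			return;
		/*
		 * Rare slow path: because report_check_failure() is declared
		 * __preserve_most, the compiler does not have to treat this
		 * call as clobbering the usual caller-saved register set, so
		 * the fast path above stays lean.
		 */
		report_check_failure(addr);
	}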