From patchwork Thu Aug 29 16:56:12 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13783474
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, x86@kernel.org,
 linux-fsdevel@vger.kernel.org, Andrew Morton,
 "Matthew Wilcox (Oracle)", Tejun Heo, Zefan Li, Johannes Weiner,
 Michal Koutný, Jonathan Corbet, Andy Lutomirski, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen
Subject: [PATCH v1 09/17] bit_spinlock: __always_inline (un)lock functions
Date: Thu, 29 Aug 2024 18:56:12 +0200
Message-ID: <20240829165627.2256514-10-david@redhat.com>
In-Reply-To: <20240829165627.2256514-1-david@redhat.com>
References: <20240829165627.2256514-1-david@redhat.com>
MIME-Version: 1.0
The compiler might decide that it is a smart idea to not inline
bit_spin_lock(), primarily when a couple of functions in the same file end
up calling it. Especially when used in RMAP context, this can negatively
affect fork() performance, where each additional function call is
noticeable.
Let's simply flag all lock/unlock functions as __always_inline;
arch_test_and_set_bit_lock() and friends are already tagged like that
(but not test_and_set_bit_lock() for some reason). If ever a problem, we
could split it into a fast and a slow path, and only force the fast path
to be inlined. But there is nothing particularly "big" here.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/bit_spinlock.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h
index bbc4730a6505c..c0989b5b0407f 100644
--- a/include/linux/bit_spinlock.h
+++ b/include/linux/bit_spinlock.h
@@ -13,7 +13,7 @@
  * Don't use this unless you really need to: spin_lock() and spin_unlock()
  * are significantly faster.
  */
-static inline void bit_spin_lock(int bitnum, unsigned long *addr)
+static __always_inline void bit_spin_lock(int bitnum, unsigned long *addr)
 {
 	/*
 	 * Assuming the lock is uncontended, this never enters
@@ -38,7 +38,7 @@ static inline void bit_spin_lock(int bitnum, unsigned long *addr)
 /*
  * Return true if it was acquired
  */
-static inline int bit_spin_trylock(int bitnum, unsigned long *addr)
+static __always_inline int bit_spin_trylock(int bitnum, unsigned long *addr)
 {
 	preempt_disable();
 #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
@@ -54,7 +54,7 @@ static inline int bit_spin_trylock(int bitnum, unsigned long *addr)
 /*
  * bit-based spin_unlock()
  */
-static inline void bit_spin_unlock(int bitnum, unsigned long *addr)
+static __always_inline void bit_spin_unlock(int bitnum, unsigned long *addr)
 {
 #ifdef CONFIG_DEBUG_SPINLOCK
 	BUG_ON(!test_bit(bitnum, addr));
@@ -71,7 +71,7 @@ static inline void bit_spin_unlock(int bitnum, unsigned long *addr)
  * non-atomic version, which can be used eg. if the bit lock itself is
  * protecting the rest of the flags in the word.
  */
-static inline void __bit_spin_unlock(int bitnum, unsigned long *addr)
+static __always_inline void __bit_spin_unlock(int bitnum, unsigned long *addr)
 {
 #ifdef CONFIG_DEBUG_SPINLOCK
 	BUG_ON(!test_bit(bitnum, addr));