From patchwork Fri Nov 1 06:02:24 2024
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13858760
From: Boqun Feng <boqun.feng@gmail.com>
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	llvm@lists.linux.dev, lkmm@lists.linux.dev
Subject: [RFC v2 01/13] rust: Introduce atomic API helpers
Date: Thu, 31 Oct 2024 23:02:24 -0700
Message-ID: <20241101060237.1185533-2-boqun.feng@gmail.com>
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>
References: <20241101060237.1185533-1-boqun.feng@gmail.com>

In order to support LKMM atomics in Rust, add rust_helper_* wrappers for
the atomic APIs. These helpers ensure that the implementation of LKMM
atomics in Rust is exactly the same as in C: each helper simply forwards
to the corresponding C primitive. This avoids the maintenance burden of
having two similar atomic implementations in asm.
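As a rough sketch of how these helpers surface to Rust (not part of this
patch: in the kernel the extern declarations are produced by bindgen, and
the `AtomicT` stand-in below is an assumption for illustration only):

    // Hypothetical stand-in for the C `atomic_t`.
    #[repr(transparent)]
    pub struct AtomicT(core::ffi::c_int);

    extern "C" {
        // Mirrors the helper signatures in rust/helpers/atomic.c below.
        fn rust_helper_atomic_set(v: *mut AtomicT, i: core::ffi::c_int);
        fn rust_helper_atomic_read(v: *const AtomicT) -> core::ffi::c_int;
    }

    fn sketch(v: *mut AtomicT) -> i32 {
        // SAFETY: assumes `v` points to a live atomic_t; the helpers
        // forward directly to the C atomic_set()/atomic_read() primitives.
        unsafe {
            rust_helper_atomic_set(v, 42);
            rust_helper_atomic_read(v)
        }
    }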
Originally-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/helpers/atomic.c                     | 1038 +++++++++++++++++++++
 rust/helpers/helpers.c                    |    1 +
 scripts/atomic/gen-atomics.sh             |    1 +
 scripts/atomic/gen-rust-atomic-helpers.sh |   65 ++
 4 files changed, 1105 insertions(+)
 create mode 100644 rust/helpers/atomic.c
 create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh

diff --git a/rust/helpers/atomic.c b/rust/helpers/atomic.c
new file mode 100644
index 000000000000..00bf10887928
--- /dev/null
+++ b/rust/helpers/atomic.c
@@ -0,0 +1,1038 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by scripts/atomic/gen-rust-atomic-helpers.sh
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after LTO helper support is added.
+#define __rust_helper
+
+__rust_helper int
+rust_helper_atomic_read(const atomic_t *v)
+{
+	return atomic_read(v);
+}
+
+__rust_helper int
+rust_helper_atomic_read_acquire(const atomic_t *v)
+{
+	return atomic_read_acquire(v);
+}
+
+__rust_helper void
+rust_helper_atomic_set(atomic_t *v, int i)
+{
+	atomic_set(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic_set_release(atomic_t *v, int i)
+{
+	atomic_set_release(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic_add(int i, atomic_t *v)
+{
+	atomic_add(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return(int i, atomic_t *v)
+{
+	return atomic_add_return(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_acquire(int i, atomic_t *v)
+{
+	return atomic_add_return_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_release(int i, atomic_t *v)
+{
+	return atomic_add_return_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_relaxed(int i, atomic_t *v)
+{
+	return atomic_add_return_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add(int i, atomic_t *v)
+{
+	return atomic_fetch_add(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_add_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_release(int i, atomic_t *v)
+{
+	return atomic_fetch_add_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_add_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_sub(int i, atomic_t *v)
+{
+	atomic_sub(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return(int i, atomic_t *v)
+{
+	return atomic_sub_return(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_acquire(int i, atomic_t *v)
+{
+	return atomic_sub_return_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_release(int i, atomic_t *v)
+{
+	return atomic_sub_return_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_relaxed(int i, atomic_t *v)
+{
+	return atomic_sub_return_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub(int i, atomic_t *v)
+{
+	return atomic_fetch_sub(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_sub_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_release(int i, atomic_t *v)
+{
+	return atomic_fetch_sub_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_sub_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_inc(atomic_t *v)
+{
+	atomic_inc(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return(atomic_t *v)
+{
+	return atomic_inc_return(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_acquire(atomic_t *v)
+{
+	return atomic_inc_return_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_release(atomic_t *v)
+{
+	return atomic_inc_return_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_relaxed(atomic_t *v)
+{
+	return atomic_inc_return_relaxed(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc(atomic_t *v)
+{
+	return atomic_fetch_inc(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_acquire(atomic_t *v)
+{
+	return atomic_fetch_inc_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_release(atomic_t *v)
+{
+	return atomic_fetch_inc_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_relaxed(atomic_t *v)
+{
+	return atomic_fetch_inc_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic_dec(atomic_t *v)
+{
+	atomic_dec(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return(atomic_t *v)
+{
+	return atomic_dec_return(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_acquire(atomic_t *v)
+{
+	return atomic_dec_return_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_release(atomic_t *v)
+{
+	return atomic_dec_return_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_relaxed(atomic_t *v)
+{
+	return atomic_dec_return_relaxed(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec(atomic_t *v)
+{
+	return atomic_fetch_dec(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_acquire(atomic_t *v)
+{
+	return atomic_fetch_dec_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_release(atomic_t *v)
+{
+	return atomic_fetch_dec_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_relaxed(atomic_t *v)
+{
+	return atomic_fetch_dec_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic_and(int i, atomic_t *v)
+{
+	atomic_and(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and(int i, atomic_t *v)
+{
+	return atomic_fetch_and(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_and_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_release(int i, atomic_t *v)
+{
+	return atomic_fetch_and_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_and_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_andnot(int i, atomic_t *v)
+{
+	atomic_andnot(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot(int i, atomic_t *v)
+{
+	return atomic_fetch_andnot(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_andnot_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_release(int i, atomic_t *v)
+{
+	return atomic_fetch_andnot_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_andnot_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_or(int i, atomic_t *v)
+{
+	atomic_or(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or(int i, atomic_t *v)
+{
+	return atomic_fetch_or(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_or_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_release(int i, atomic_t *v)
+{
+	return atomic_fetch_or_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_or_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_xor(int i, atomic_t *v)
+{
+	atomic_xor(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor(int i, atomic_t *v)
+{
+	return atomic_fetch_xor(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_xor_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_release(int i, atomic_t *v)
+{
+	return atomic_fetch_xor_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_xor_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg(atomic_t *v, int new)
+{
+	return atomic_xchg(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_acquire(atomic_t *v, int new)
+{
+	return atomic_xchg_acquire(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_release(atomic_t *v, int new)
+{
+	return atomic_xchg_release(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_relaxed(atomic_t *v, int new)
+{
+	return atomic_xchg_relaxed(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+	return atomic_cmpxchg(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+{
+	return atomic_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+{
+	return atomic_cmpxchg_release(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
+{
+	return atomic_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
+{
+	return atomic_try_cmpxchg(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
+{
+	return atomic_try_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
+{
+	return atomic_try_cmpxchg_release(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
+{
+	return atomic_try_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_sub_and_test(int i, atomic_t *v)
+{
+	return atomic_sub_and_test(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_dec_and_test(atomic_t *v)
+{
+	return atomic_dec_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_and_test(atomic_t *v)
+{
+	return atomic_inc_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative(int i, atomic_t *v)
+{
+	return atomic_add_negative(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_acquire(int i, atomic_t *v)
+{
+	return atomic_add_negative_acquire(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_release(int i, atomic_t *v)
+{
+	return atomic_add_negative_release(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_relaxed(int i, atomic_t *v)
+{
+	return atomic_add_negative_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_unless(atomic_t *v, int a, int u)
+{
+	return atomic_fetch_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_unless(atomic_t *v, int a, int u)
+{
+	return atomic_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_not_zero(atomic_t *v)
+{
+	return atomic_inc_not_zero(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_unless_negative(atomic_t *v)
+{
+	return atomic_inc_unless_negative(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_dec_unless_positive(atomic_t *v)
+{
+	return atomic_dec_unless_positive(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_if_positive(atomic_t *v)
+{
+	return atomic_dec_if_positive(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_read(const atomic64_t *v)
+{
+	return atomic64_read(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_read_acquire(const atomic64_t *v)
+{
+	return atomic64_read_acquire(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_set(atomic64_t *v, s64 i)
+{
+	atomic64_set(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic64_set_release(atomic64_t *v, s64 i)
+{
+	atomic64_set_release(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic64_add(s64 i, atomic64_t *v)
+{
+	atomic64_add(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return(s64 i, atomic64_t *v)
+{
+	return atomic64_add_return(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_add_return_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_release(s64 i, atomic64_t *v)
+{
+	return atomic64_add_return_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_add_return_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_add(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_add_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_add_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_add_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_sub(s64 i, atomic64_t *v)
+{
+	atomic64_sub(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_return(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_return_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_release(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_return_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_return_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_sub(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_sub_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_sub_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_sub_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_inc(atomic64_t *v)
+{
+	atomic64_inc(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return(atomic64_t *v)
+{
+	return atomic64_inc_return(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_acquire(atomic64_t *v)
+{
+	return atomic64_inc_return_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_release(atomic64_t *v)
+{
+	return atomic64_inc_return_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_relaxed(atomic64_t *v)
+{
+	return atomic64_inc_return_relaxed(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc(atomic64_t *v)
+{
+	return atomic64_fetch_inc(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_acquire(atomic64_t *v)
+{
+	return atomic64_fetch_inc_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_release(atomic64_t *v)
+{
+	return atomic64_fetch_inc_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_relaxed(atomic64_t *v)
+{
+	return atomic64_fetch_inc_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_dec(atomic64_t *v)
+{
+	atomic64_dec(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return(atomic64_t *v)
+{
+	return atomic64_dec_return(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_acquire(atomic64_t *v)
+{
+	return atomic64_dec_return_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_release(atomic64_t *v)
+{
+	return atomic64_dec_return_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_relaxed(atomic64_t *v)
+{
+	return atomic64_dec_return_relaxed(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec(atomic64_t *v)
+{
+	return atomic64_fetch_dec(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_acquire(atomic64_t *v)
+{
+	return atomic64_fetch_dec_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_release(atomic64_t *v)
+{
+	return atomic64_fetch_dec_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_relaxed(atomic64_t *v)
+{
+	return atomic64_fetch_dec_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_and(s64 i, atomic64_t *v)
+{
+	atomic64_and(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_and(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_and_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_and_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_and_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_andnot(s64 i, atomic64_t *v)
+{
+	atomic64_andnot(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_andnot(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_andnot_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_andnot_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_andnot_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_or(s64 i, atomic64_t *v)
+{
+	atomic64_or(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_or(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_or_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_or_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_or_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_xor(s64 i, atomic64_t *v)
+{
+	atomic64_xor(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_xor(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_xor_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_xor_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_xor_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg(atomic64_t *v, s64 new)
+{
+	return atomic64_xchg(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_acquire(atomic64_t *v, s64 new)
+{
+	return atomic64_xchg_acquire(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_release(atomic64_t *v, s64 new)
+{
+	return atomic64_xchg_release(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
+{
+	return atomic64_xchg_relaxed(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+{
+	return atomic64_cmpxchg(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+{
+	return atomic64_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+{
+	return atomic64_cmpxchg_release(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
+{
+	return atomic64_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
+{
+	return atomic64_try_cmpxchg(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
+{
+	return atomic64_try_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
+{
+	return atomic64_try_cmpxchg_release(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
+{
+	return atomic64_try_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_sub_and_test(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_and_test(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_dec_and_test(atomic64_t *v)
+{
+	return atomic64_dec_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_and_test(atomic64_t *v)
+{
+	return atomic64_inc_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative(s64 i, atomic64_t *v)
+{
+	return atomic64_add_negative(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_add_negative_acquire(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_release(s64 i, atomic64_t *v)
+{
+	return atomic64_add_negative_release(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_add_negative_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+	return atomic64_fetch_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+	return atomic64_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_not_zero(atomic64_t *v)
+{
+	return atomic64_inc_not_zero(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_unless_negative(atomic64_t *v)
+{
+	return atomic64_inc_unless_negative(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_dec_unless_positive(atomic64_t *v)
+{
+	return atomic64_dec_unless_positive(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_if_positive(atomic64_t *v)
+{
+	return atomic64_dec_if_positive(v);
+}
+
+#endif /* _RUST_ATOMIC_API_H */
+// b032d261814b3e119b72dbf7d21447f6731325ee

diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 20a0c69d5cc7..ab5a3f1be241 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -7,6 +7,7 @@
  * Sorted alphabetically.
  */
 
+#include "atomic.c"
 #include "blk.c"
 #include "bug.c"
 #include "build_assert.c"

diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index 5b98a8307693..02508d0d6fe4 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -11,6 +11,7 @@ cat <<EOF |
 gen-atomic-instrumented.sh      linux/atomic/atomic-instrumented.h
 gen-atomic-long.sh              linux/atomic/atomic-long.h
 gen-atomic-fallback.sh          linux/atomic/atomic-arch-fallback.h
+gen-rust-atomic-helpers.sh      ../rust/helpers/atomic.c
 EOF
 while read script header args; do
 	/bin/sh ${ATOMICDIR}/${script} ${ATOMICTBL} ${args} > ${LINUXDIR}/include/${header}

diff --git a/scripts/atomic/gen-rust-atomic-helpers.sh b/scripts/atomic/gen-rust-atomic-helpers.sh
new file mode 100755
index 000000000000..72f2e5bde0c6
--- /dev/null
+++ b/scripts/atomic/gen-rust-atomic-helpers.sh
@@ -0,0 +1,65 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+ATOMICDIR=$(dirname $0)
+
+. ${ATOMICDIR}/atomic-tbl.sh
+
+#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
+gen_proto_order_variant()
+{
+	local meta="$1"; shift
+	local pfx="$1"; shift
+	local name="$1"; shift
+	local sfx="$1"; shift
+	local order="$1"; shift
+	local atomic="$1"; shift
+	local int="$1"; shift
+
+	local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+
+	local ret="$(gen_ret_type "${meta}" "${int}")"
+	local params="$(gen_params "${int}" "${atomic}" "$@")"
+	local args="$(gen_args "$@")"
+	local retstmt="$(gen_ret_stmt "${meta}")"
+
+cat <<EOF
+__rust_helper ${ret}
+rust_helper_${atomicname}(${params})
+{
+	${retstmt}${atomicname}(${args});
+}
+
+EOF
+}
+
+cat << EOF
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by $0
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after LTO helper support is added.
+#define __rust_helper
+
+EOF
+
+grep '^[a-z]' "$1" | while read name meta args; do
+	gen_proto "${meta}" "${name}" "atomic" "int" ${args}
+done
+
+grep '^[a-z]' "$1" | while read name meta args; do
+	gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
+done
+
+cat <<EOF
+#endif /* _RUST_ATOMIC_API_H */
+EOF

From patchwork Fri Nov 1 06:02:25 2024
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13858762
From: Boqun Feng <boqun.feng@gmail.com>
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	llvm@lists.linux.dev, lkmm@lists.linux.dev
Subject: [RFC v2 02/13] rust: sync: Add basic atomic operation mapping framework
Date: Thu, 31 Oct 2024 23:02:25 -0700
Message-ID: <20241101060237.1185533-3-boqun.feng@gmail.com>
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>
References: <20241101060237.1185533-1-boqun.feng@gmail.com>

Preparation for the generic atomic implementation. To unify the
implementation of a generic method over `i32` and `i64`, the C side
atomic methods need to be grouped so that in a generic method they can
be referred to as `T::<method>`; otherwise their parameters and return
values differ between `i32` and `i64`, which would require using
`transmute()` to unify the type into a `T` (a sketch of the resulting
generic usage follows below).

Introduce `AtomicImpl` to represent a basic type in Rust that has a
direct mapping to an atomic implementation from C. This trait is
sealed, and currently only `i32` and `i64` impl it.

Further, different methods are put into different `*Ops` trait groups;
this is for the future when smaller types like `i8`/`i16` are supported
but only with a limited set of APIs (e.g. only set(), load(), xchg()
and cmpxchg(); no add() or sub() etc.).

While the atomic mod is introduced, documentation is also added for
memory models and data races.

Also bump my role to the maintainer of ATOMIC INFRASTRUCTURE to reflect
my responsibility for the Rust atomic mod.
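As a rough illustration (not part of the patch; it only relies on the
`AtomicHasBasicOps` trait added below, and the function name is made up),
this grouping lets one generic body serve both widths:

    use kernel::sync::atomic::ops::AtomicHasBasicOps;

    /// # Safety
    ///
    /// `ptr` must be a valid pointer, and accesses must follow the LKMM
    /// data-race rules documented on [`AtomicHasBasicOps`].
    unsafe fn read_any<T: AtomicHasBasicOps>(ptr: *mut T) -> T {
        // One body for both `i32` (atomic_t) and `i64` (atomic64_t): the
        // operation is reachable as `T::atomic_read`.
        // SAFETY: forwarded from this function's safety requirements.
        unsafe { T::atomic_read(ptr) }
    }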
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 MAINTAINERS                    |   4 +-
 rust/kernel/sync.rs            |   1 +
 rust/kernel/sync/atomic.rs     |  19 ++++
 rust/kernel/sync/atomic/ops.rs | 199 +++++++++++++++++++++++++++++++++
 4 files changed, 222 insertions(+), 1 deletion(-)
 create mode 100644 rust/kernel/sync/atomic.rs
 create mode 100644 rust/kernel/sync/atomic/ops.rs

diff --git a/MAINTAINERS b/MAINTAINERS
index b77f4495dcf4..e09471027a63 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3635,7 +3635,7 @@ F:	drivers/input/touchscreen/atmel_mxt_ts.c
 ATOMIC INFRASTRUCTURE
 M:	Will Deacon <will@kernel.org>
 M:	Peter Zijlstra <peterz@infradead.org>
-R:	Boqun Feng <boqun.feng@gmail.com>
+M:	Boqun Feng <boqun.feng@gmail.com>
 R:	Mark Rutland <mark.rutland@arm.com>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
@@ -3644,6 +3644,8 @@ F:	arch/*/include/asm/atomic*.h
 F:	include/*/atomic*.h
 F:	include/linux/refcount.h
 F:	scripts/atomic/
+F:	rust/kernel/sync/atomic.rs
+F:	rust/kernel/sync/atomic/
 
 ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER
 M:	Bradley Grove <linuxdrivers@attotech.com>

diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 0ab20975a3b5..66ac3752ca71 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -8,6 +8,7 @@
 use crate::types::Opaque;
 
 mod arc;
+pub mod atomic;
 mod condvar;
 pub mod lock;
 mod locked_by;

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
new file mode 100644
index 000000000000..21b87563667e
--- /dev/null
+++ b/rust/kernel/sync/atomic.rs
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic primitives.
+//!
+//! These primitives have the same semantics as their C counterparts; the precise definitions of
+//! the semantics can be found at [`LKMM`]. Note that the Linux Kernel Memory (Consistency) Model
+//! is the only memory model for Rust code in the kernel, and Rust's own atomics should be avoided.
+//!
+//! # Data races
+//!
+//! [`LKMM`] atomics have different rules regarding data races:
+//!
+//! - A normal read does not data-race with an atomic read.
+//! - A normal write from the C side is treated as an atomic write if
+//!   CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+pub mod ops;

diff --git a/rust/kernel/sync/atomic/ops.rs b/rust/kernel/sync/atomic/ops.rs
new file mode 100644
index 000000000000..59101a0d0273
--- /dev/null
+++ b/rust/kernel/sync/atomic/ops.rs
@@ -0,0 +1,199 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic implementations.
+//!
+//! Provides a 1:1 mapping of atomic implementations.
+
+use crate::bindings::*;
+use crate::macros::paste;
+
+mod private {
+    /// Sealed trait marker to disable customized impls on atomic implementation traits.
+    pub trait Sealed {}
+}
+
+// `i32` and `i64` are the only supported atomic implementations.
+impl private::Sealed for i32 {}
+impl private::Sealed for i64 {}
+
+/// A marker trait for types that implement atomic operations with C side primitives.
+///
+/// This trait is sealed, and only types that have a direct mapping to the C side atomics should
+/// impl this:
+///
+/// - `i32` maps to `atomic_t`.
+/// - `i64` maps to `atomic64_t`.
+pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {}
+
+// `atomic_t` implements atomic operations on `i32`.
+impl AtomicImpl for i32 {}
+
+// `atomic64_t` implements atomic operations on `i64`.
+impl AtomicImpl for i64 {}
+
+// This macro generates the function signature with the given argument list and return type.
+macro_rules! declare_atomic_method {
+    (
+        $func:ident($($arg:ident : $arg_type:ty),*) $(-> $ret:ty)?
+    ) => {
+        paste!(
+            #[doc = concat!("Atomic ", stringify!($func))]
+            #[doc = "# Safety"]
+            #[doc = "- Any pointer passed to the function has to be a valid pointer."]
+            #[doc = "- Accesses must not cause data races per LKMM:"]
+            #[doc = "  - An atomic read racing with a normal read, a normal write or an atomic write is not a data race."]
+            #[doc = "  - An atomic write racing with a normal read or a normal write is a data race, unless the"]
+            #[doc = "    normal accesses are done from the C side and considered immune to data"]
+            #[doc = "    races, e.g. CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC."]
+            unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)?;
+        );
+    };
+    (
+        $func:ident [$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)?
+    ) => {
+        paste!(
+            declare_atomic_method!(
+                [< $func _ $variant >]($($arg_sig)*) $(-> $ret)?
+            );
+        );
+
+        declare_atomic_method!(
+            $func [$($rest)*]($($arg_sig)*) $(-> $ret)?
+        );
+    };
+    (
+        $func:ident []($($arg_sig:tt)*) $(-> $ret:ty)?
+    ) => {
+        declare_atomic_method!(
+            $func($($arg_sig)*) $(-> $ret)?
+        );
+    }
+}
+
+// This macro generates the function implementation with the given argument list and return type,
+// and it replaces the "call(...)" expression with "$ctype _ $func" to call the real C function.
+macro_rules! impl_atomic_method {
+    (
+        ($ctype:ident) $func:ident($($arg:ident: $arg_type:ty),*) $(-> $ret:ty)? {
+            call($($c_arg:expr),*)
+        }
+    ) => {
+        paste!(
+            #[inline(always)]
+            unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)? {
+                // SAFETY: Per function safety requirement, all pointers are valid, and accesses
+                // won't cause data races per LKMM.
+                unsafe { [< $ctype _ $func >]($($c_arg,)*) }
+            }
+        );
+    };
+    (
+        ($ctype:ident) $func:ident[$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)? {
+            call($($arg:tt)*)
+        }
+    ) => {
+        paste!(
+            impl_atomic_method!(
+                ($ctype) [< $func _ $variant >]($($arg_sig)*) $(-> $ret)? {
+                    call($($arg)*)
+                }
+            );
+        );
+        impl_atomic_method!(
+            ($ctype) $func [$($rest)*]($($arg_sig)*) $(-> $ret)? {
+                call($($arg)*)
+            }
+        );
+    };
+    (
+        ($ctype:ident) $func:ident[]($($arg_sig:tt)*) $(-> $ret:ty)? {
+            call($($arg:tt)*)
+        }
+    ) => {
+        impl_atomic_method!(
+            ($ctype) $func($($arg_sig)*) $(-> $ret)? {
+                call($($arg)*)
+            }
+        );
+    }
+}
+
+// Declares the $ops trait with methods and implements the trait for `i32` and `i64`.
+macro_rules! declare_and_impl_atomic_methods {
+    ($ops:ident ($doc:literal) {
+        $(
+            $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $(-> $ret:ty)? {
+                call($($arg:tt)*)
+            }
+        )*
+    }) => {
+        #[doc = $doc]
+        pub trait $ops: AtomicImpl {
+            $(
+                declare_atomic_method!(
+                    $func[$($variant)*]($($arg_sig)*) $(-> $ret)?
+                );
+            )*
+        }
+
+        impl $ops for i32 {
+            $(
+                impl_atomic_method!(
+                    (atomic) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+                        call($($arg)*)
+                    }
+                );
+            )*
+        }
+
+        impl $ops for i64 {
+            $(
+                impl_atomic_method!(
+                    (atomic64) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+                        call($($arg)*)
+                    }
+                );
+            )*
+        }
+    }
+}
+
+declare_and_impl_atomic_methods!(
+    AtomicHasBasicOps ("Basic atomic operations") {
+        read[acquire](ptr: *mut Self) -> Self {
+            call(ptr as *mut _)
+        }
+
+        set[release](ptr: *mut Self, v: Self) {
+            call(ptr as *mut _, v)
+        }
+    }
+);
+
+declare_and_impl_atomic_methods!(
+    AtomicHasXchgOps ("Exchange and compare-and-exchange atomic operations") {
+        xchg[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self {
+            call(ptr as *mut _, v)
+        }
+
+        cmpxchg[acquire, release, relaxed](ptr: *mut Self, old: Self, new: Self) -> Self {
+            call(ptr as *mut _, old, new)
+        }
+
+        try_cmpxchg[acquire, release, relaxed](ptr: *mut Self, old: *mut Self, new: Self) -> bool {
+            call(ptr as *mut _, old, new)
+        }
+    }
+);
+
+declare_and_impl_atomic_methods!(
+    AtomicHasArithmeticOps ("Atomic arithmetic operations") {
+        add[](ptr: *mut Self, v: Self) {
+            call(v, ptr as *mut _)
+        }
+
+        fetch_add[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self {
+            call(v, ptr as *mut _)
+        }
+    }
+);
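To make the macro output concrete, here is a rough sketch (not part of
the patch) of the surface the expansion provides for `i32` via
`AtomicHasBasicOps`, and of a caller; the `UnsafeCell` setup is an
assumption for illustration:

    // For `i32`, the expansion provides roughly these unsafe methods,
    // forwarding to rust_helper_atomic_read()/rust_helper_atomic_set()
    // and their _acquire/_release variants:
    //
    //     unsafe fn atomic_read(ptr: *mut i32) -> i32;
    //     unsafe fn atomic_read_acquire(ptr: *mut i32) -> i32;
    //     unsafe fn atomic_set(ptr: *mut i32, v: i32);
    //     unsafe fn atomic_set_release(ptr: *mut i32, v: i32);

    use core::cell::UnsafeCell;
    use kernel::sync::atomic::ops::AtomicHasBasicOps;

    fn sketch(x: &UnsafeCell<i32>) -> i32 {
        // SAFETY: `x` is a valid, live allocation, and every access to it
        // here goes through the atomic ops, so there is no data race per
        // LKMM.
        unsafe {
            i32::atomic_set(x.get(), 1);
            i32::atomic_read(x.get())
        }
    }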
From patchwork Fri Nov 1 06:02:26 2024
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13858761
From: Boqun Feng <boqun.feng@gmail.com>
Boqun Feng To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev Cc: Miguel Ojeda , Alex Gaynor , Wedson Almeida Filho , Boqun Feng , Gary Guo , =?utf-8?q?Bj=C3=B6rn_Roy_Baron?= , Benno Lossin , Andreas Hindborg , Alice Ryhl , Alan Stern , Andrea Parri , Will Deacon , Peter Zijlstra , Nicholas Piggin , David Howells , Jade Alglave , Luc Maranget , "Paul E. McKenney" , Akira Yokosawa , Daniel Lustig , Joel Fernandes , Nathan Chancellor , Nick Desaulniers , kent.overstreet@gmail.com, Greg Kroah-Hartman , elver@google.com, Mark Rutland , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" , Catalin Marinas , torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross , dakr@redhat.com, Frederic Weisbecker , Neeraj Upadhyay , Josh Triplett , Uladzislau Rezki , Steven Rostedt , Mathieu Desnoyers , Lai Jiangshan , Zqiang , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org Subject: [RFC v2 03/13] rust: sync: atomic: Add ordering annotation types Date: Thu, 31 Oct 2024 23:02:26 -0700 Message-ID: <20241101060237.1185533-4-boqun.feng@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com> References: <20241101060237.1185533-1-boqun.feng@gmail.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20241031_230359_347949_61D1A574 X-CRM114-Status: GOOD ( 20.44 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Preparation for atomic primitives. Instead of a suffix like _acquire, a method parameter along with the corresponding generic parameter will be used to specify the ordering of an atomic operations. For example, atomic load() can be defined as: impl Atomic { pub fn load(&self, _o: O) -> T { ... } } and acquire users would do: let r = x.load(Acquire); relaxed users: let r = x.load(Relaxed); doing the following: let r = x.load(Release); will cause a compiler error. Compared to suffixes, it's easier to tell what ordering variants an operation has, and it also make it easier to unify the implementation of all ordering variants in one method via generic. The `IS_RELAXED` and `ORDER` associate consts are for generic function to pick up the particular implementation specified by an ordering annotation. Signed-off-by: Boqun Feng --- rust/kernel/sync/atomic.rs | 3 + rust/kernel/sync/atomic/ordering.rs | 94 +++++++++++++++++++++++++++++ 2 files changed, 97 insertions(+) create mode 100644 rust/kernel/sync/atomic/ordering.rs diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs index 21b87563667e..be2e8583595f 100644 --- a/rust/kernel/sync/atomic.rs +++ b/rust/kernel/sync/atomic.rs @@ -17,3 +17,6 @@ //! [`LKMM`]: srctree/tools/memory-mode/ pub mod ops; +pub mod ordering; + +pub use ordering::{Acquire, Full, Relaxed, Release}; diff --git a/rust/kernel/sync/atomic/ordering.rs b/rust/kernel/sync/atomic/ordering.rs new file mode 100644 index 000000000000..6cf01cd276c6 --- /dev/null +++ b/rust/kernel/sync/atomic/ordering.rs @@ -0,0 +1,94 @@ +// SPDX-License-Identifier: GPL-2.0 + +//! Memory orderings. +//! +//! 
+//! The semantics of these orderings follow the [`LKMM`] definitions and rules.
+//!
+//! - [`Acquire`] and [`Release`] are similar to their counterparts in the Rust memory model.
+//! - [`Full`] means "fully-ordered", that is:
+//!   - It provides ordering between all the preceding memory accesses and the annotated operation.
+//!   - It provides ordering between the annotated operation and all the following memory accesses.
+//!   - It provides ordering between all the preceding memory accesses and all the following memory
+//!     accesses.
+//!   - All the orderings are as strong as a full memory barrier (i.e. `smp_mb()`).
+//! - [`Relaxed`] is similar to its counterpart in the Rust memory model, except that dependency
+//!   orderings are also honored in [`LKMM`]. Dependency orderings are described in "DEPENDENCY
+//!   RELATIONS" in [`LKMM`]'s [`explanation`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+//! [`explanation`]: srctree/tools/memory-model/Documentation/explanation.txt
+
+/// The annotation type for relaxed memory ordering.
+pub struct Relaxed;
+
+/// The annotation type for acquire memory ordering.
+pub struct Acquire;
+
+/// The annotation type for release memory ordering.
+pub struct Release;
+
+/// The annotation type for fully-ordered memory ordering.
+pub struct Full;
+
+/// The trait bound for operations that only support relaxed ordering.
+pub trait RelaxedOnly {}
+
+impl RelaxedOnly for Relaxed {}
+
+/// The trait bound for operations that only support acquire or relaxed ordering.
+pub trait AcquireOrRelaxed {
+    /// Describes whether an ordering is relaxed or not.
+    const IS_RELAXED: bool = false;
+}
+
+impl AcquireOrRelaxed for Acquire {}
+
+impl AcquireOrRelaxed for Relaxed {
+    const IS_RELAXED: bool = true;
+}
+
+/// The trait bound for operations that only support release or relaxed ordering.
+pub trait ReleaseOrRelaxed {
+    /// Describes whether an ordering is relaxed or not.
+    const IS_RELAXED: bool = false;
+}
+
+impl ReleaseOrRelaxed for Release {}
+
+impl ReleaseOrRelaxed for Relaxed {
+    const IS_RELAXED: bool = true;
+}
+
+/// Describes the exact memory ordering of an `impl` of [`All`].
+pub enum OrderingDesc {
+    /// Relaxed ordering.
+    Relaxed,
+    /// Acquire ordering.
+    Acquire,
+    /// Release ordering.
+    Release,
+    /// Fully-ordered.
+    Full,
+}
+
+/// The trait bound for annotating operations that should support all orderings.
+pub trait All {
+    /// Describes the exact memory ordering.
+    const ORDER: OrderingDesc;
+}
+
+impl All for Relaxed {
+    const ORDER: OrderingDesc = OrderingDesc::Relaxed;
+}
+
+impl All for Acquire {
+    const ORDER: OrderingDesc = OrderingDesc::Acquire;
+}
+
+impl All for Release {
+    const ORDER: OrderingDesc = OrderingDesc::Release;
+}
+
+impl All for Full {
+    const ORDER: OrderingDesc = OrderingDesc::Full;
+}
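As an illustration of how these annotation types are consumed, here is a
minimal sketch (not part of the patch, and assuming this series is
applied) of an ordering-generic function in the style of the load()
example above; `backend_read()` and `backend_read_acquire()` are
hypothetical stand-ins for the real `atomic_read*()` implementations:

    use kernel::sync::atomic::ordering::AcquireOrRelaxed;
    use kernel::sync::atomic::{Acquire, Relaxed};

    // Hypothetical backends, standing in for atomic_read() and
    // atomic_read_acquire() from the ops module.
    fn backend_read() -> i32 { 42 }
    fn backend_read_acquire() -> i32 { 42 }

    // `O` is zero-sized, so passing `Acquire` or `Relaxed` only selects
    // a monomorphization; `IS_RELAXED` is an associated const, so the
    // branch below is resolved at compile time and the dead arm is
    // dropped.
    fn load_with<O: AcquireOrRelaxed>(_o: O) -> i32 {
        if O::IS_RELAXED {
            backend_read()
        } else {
            backend_read_acquire()
        }
    }

    fn demo() {
        let _a = load_with(Acquire);
        let _r = load_with(Relaxed);
        // load_with(Release) would fail to compile: `Release` does not
        // implement `AcquireOrRelaxed`.
    }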
From patchwork Fri Nov 1 06:02:27 2024
From: Boqun Feng
Subject: [RFC v2 04/13] rust: sync: atomic: Add generic atomics
Date: Thu, 31 Oct 2024 23:02:27 -0700
Message-ID: <20241101060237.1185533-5-boqun.feng@gmail.com>
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>

To support using LKMM atomics in Rust code, a generic `Atomic<T>` is
added. Currently `T` needs to be Send + Copy because these are the
straightforward usages and all basic types support this.

The trait `AllowAtomic` should only be implemented inside the atomic
mod until the generic atomic framework is mature enough (unless the
implementer is a `#[repr(transparent)]` new type).

`AtomicImpl` types are automatically `AllowAtomic`, and so far only the
basic operations load() and store() are introduced.

Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic.rs         |   2 +
 rust/kernel/sync/atomic/generic.rs | 253 +++++++++++++++++++++++++++++
 2 files changed, 255 insertions(+)
 create mode 100644 rust/kernel/sync/atomic/generic.rs

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index be2e8583595f..b791abc59b61 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -16,7 +16,9 @@
 //!
 //! [`LKMM`]: srctree/tools/memory-model/
 
+pub mod generic;
 pub mod ops;
 pub mod ordering;
 
+pub use generic::Atomic;
 pub use ordering::{Acquire, Full, Relaxed, Release};
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
new file mode 100644
index 000000000000..204da38e2691
--- /dev/null
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -0,0 +1,253 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Generic atomic primitives.
+
+use super::ops::*;
+use super::ordering::*;
+use crate::types::Opaque;
+
+/// A generic atomic variable.
+///
+/// `T` must impl [`AllowAtomic`], that is, an [`AtomicImpl`] has to be chosen.
+///
+/// # Invariants
+///
+/// Doing an atomic operation while holding a reference of [`Self`] won't cause a data race; this
+/// is guaranteed by the safety requirement of [`Self::from_ptr`] and the extra safety requirement
+/// of the usage on pointers returned by [`Self::as_ptr`].
+#[repr(transparent)]
+pub struct Atomic<T: AllowAtomic>(Opaque<T>);
+
+// SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
+unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
+
+/// Atomics that support basic atomic operations.
+///
+/// TODO: Unless the `impl` is a `#[repr(transparent)]` new type of an existing [`AllowAtomic`]
+/// type, the impl block should only be done in the atomic mod. And currently only basic integer
+/// types can implement this trait in the atomic mod.
+///
+/// # Safety
+///
+/// [`Self`] must have the same size and alignment as [`Self::Repr`].
+pub unsafe trait AllowAtomic: Sized + Send + Copy {
+    /// The backing atomic implementation type.
+    type Repr: AtomicImpl;
+
+    /// Converts into a [`Self::Repr`].
+    fn into_repr(self) -> Self::Repr;
+
+    /// Converts from a [`Self::Repr`].
+    fn from_repr(repr: Self::Repr) -> Self;
+}
+
+// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment.
+unsafe impl<T: AtomicImpl> AllowAtomic for T {
+    type Repr = Self;
+
+    fn into_repr(self) -> Self::Repr {
+        self
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr
+    }
+}
+
+impl<T: AllowAtomic> Atomic<T> {
+    /// Creates a new atomic.
+    pub const fn new(v: T) -> Self {
+        Self(Opaque::new(v))
+    }
+
+    /// Creates a reference to [`Self`] from a pointer.
+    ///
+    /// # Safety
+    ///
+    /// - `ptr` has to be a valid pointer.
+    /// - `ptr` has to be valid for both reads and writes for the whole lifetime `'a`.
+    /// - For the whole lifetime of `'a`, other accesses to the object cannot cause data races
+    ///   (defined by [`LKMM`]) against atomic operations on the returned reference.
+    ///
+    /// [`LKMM`]: srctree/tools/memory-model
+    ///
+    /// # Examples
+    ///
+    /// Using [`Atomic::from_ptr()`] combined with [`Atomic::load()`] or [`Atomic::store()`] can
+    /// achieve the same functionality as `READ_ONCE()`/`smp_load_acquire()` or
+    /// `WRITE_ONCE()`/`smp_store_release()` on the C side:
+    ///
+    /// ```rust
+    /// # use kernel::types::Opaque;
+    /// use kernel::sync::atomic::{Atomic, Relaxed, Release};
+    ///
+    /// // Assume there is a C struct `foo`.
+    /// mod cbindings {
+    ///     #[repr(C)]
+    ///     pub(crate) struct foo { pub(crate) a: i32, pub(crate) b: i32 }
+    /// }
+    ///
+    /// let tmp = Opaque::new(cbindings::foo { a: 1, b: 2 });
+    ///
+    /// // struct foo *foo_ptr = ..;
+    /// let foo_ptr = tmp.get();
+    ///
+    /// // SAFETY: `foo_ptr` is a valid pointer, and `.a` is in-bounds.
+    /// let foo_a_ptr = unsafe { core::ptr::addr_of_mut!((*foo_ptr).a) };
+    ///
+    /// // a = READ_ONCE(foo_ptr->a);
+    /// //
+    /// // SAFETY: `foo_a_ptr` is a valid pointer for read, and all accesses on it are atomic, so
+    /// // no data race.
+    /// let a = unsafe { Atomic::from_ptr(foo_a_ptr) }.load(Relaxed);
+    /// # assert_eq!(a, 1);
+    ///
+    /// // smp_store_release(&foo_ptr->a, 2);
+    /// //
+    /// // SAFETY: `foo_a_ptr` is a valid pointer for write, and all accesses on it are atomic, so
+    /// // no data race.
+    /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release);
+    /// ```
+    ///
+    /// However, this should only be used when communicating with the C side or manipulating a C
+    /// struct.
+    pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self
+    where
+        T: Sync,
+    {
+        // CAST: `T` is transparent to `Atomic<T>`.
+        // SAFETY: Per the function safety requirement, `ptr` is a valid pointer and the object
+        // will live long enough. It's safe to return a `&Atomic<T>` because the function safety
+        // requirement guarantees other accesses won't cause data races.
+        unsafe { &*ptr.cast::<Self>() }
+    }
+
+    /// Returns a pointer to the underlying atomic variable.
+    ///
+    /// Extra safety requirement on using the returned pointer: the operations done via the
+    /// pointer cannot cause data races defined by [`LKMM`].
+    ///
+    /// [`LKMM`]: srctree/tools/memory-model
+    pub const fn as_ptr(&self) -> *mut T {
+        self.0.get()
+    }
+
+    /// Returns a mutable reference to the underlying atomic variable.
+    ///
+    /// This is safe because the mutable reference of the atomic variable guarantees exclusive
+    /// access.
+    pub fn get_mut(&mut self) -> &mut T {
+        // SAFETY: `self.as_ptr()` is a valid pointer to `T`, and the object has already been
+        // initialized. `&mut self` guarantees exclusive access, so it's safe to reborrow
+        // mutably.
+        unsafe { &mut *self.as_ptr() }
+    }
+}
+
+impl<T: AllowAtomic> Atomic<T>
+where
+    T::Repr: AtomicHasBasicOps,
+{
+    /// Loads the value from the atomic variable.
+    ///
+    /// # Examples
+    ///
+    /// Simple usages:
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42i32);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// let x = Atomic::new(42i64);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    /// ```
+    ///
+    /// Customized new types in [`Atomic`]:
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{generic::AllowAtomic, Atomic, Relaxed};
+    ///
+    /// #[derive(Clone, Copy)]
+    /// #[repr(transparent)]
+    /// struct NewType(u32);
+    ///
+    /// // SAFETY: `NewType` is transparent to `u32`, which has the same size and alignment as
+    /// // `i32`.
+    /// unsafe impl AllowAtomic for NewType {
+    ///     type Repr = i32;
+    ///
+    ///     fn into_repr(self) -> Self::Repr {
+    ///         self.0 as i32
+    ///     }
+    ///
+    ///     fn from_repr(repr: Self::Repr) -> Self {
+    ///         NewType(repr as u32)
+    ///     }
+    /// }
+    ///
+    /// let n = Atomic::new(NewType(0));
+    ///
+    /// assert_eq!(0, n.load(Relaxed).0);
+    /// ```
+    #[inline(always)]
+    pub fn load<Ordering: AcquireOrRelaxed>(&self, _: Ordering) -> T {
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_read*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid
+        //     pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        // - For the extra safety requirement of the usage on pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        let v = unsafe {
+            if Ordering::IS_RELAXED {
+                T::Repr::atomic_read(a)
+            } else {
+                T::Repr::atomic_read_acquire(a)
+            }
+        };
+
+        T::from_repr(v)
+    }
+
+    /// Stores a value to the atomic variable.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42i32);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// x.store(43, Relaxed);
+    ///
+    /// assert_eq!(43, x.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
+        let v = T::into_repr(v);
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_set*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid
+        //     pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        // - For the extra safety requirement of the usage on pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        unsafe {
+            if Ordering::IS_RELAXED {
+                T::Repr::atomic_set(a, v)
+            } else {
+                T::Repr::atomic_set_release(a, v)
+            }
+        };
+    }
+}
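As a usage sketch of the new generic type (not part of the patch, and
assuming this series is applied), a shared flag can be built directly
from load()/store() with acquire/release ordering; `SHUTDOWN`,
`request_shutdown()` and `should_stop()` are made-up names for
illustration:

    use kernel::sync::atomic::{Acquire, Atomic, Release};

    // `Atomic<i32>` is `Sync` and `new()` is `const`, so a shared flag
    // can live in a `static`.
    static SHUTDOWN: Atomic<i32> = Atomic::new(0);

    fn request_shutdown() {
        // Release: everything written before this store is visible to a
        // reader that observes the flag with an acquire load.
        SHUTDOWN.store(1, Release);
    }

    fn should_stop() -> bool {
        SHUTDOWN.load(Acquire) == 1
    }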
From patchwork Fri Nov 1 06:02:28 2024
From: Boqun Feng
Subject: [RFC v2 05/13] rust: sync: atomic: Add atomic {cmp,}xchg operations
Date: Thu, 31 Oct 2024 23:02:28 -0700
Message-ID: <20241101060237.1185533-6-boqun.feng@gmail.com>
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>

xchg() and cmpxchg() are basic operations on atomics. Provide these
based on the C APIs.

Note that cmpxchg() uses a function signature similar to
compare_exchange() in the Rust std: it returns a `Result`, where
`Ok(old)` means the operation succeeded and `Err(old)` means the
operation failed. With compiler optimization and inline helpers, it
should provide the same efficient code generation as using
atomic_try_cmpxchg() or atomic_cmpxchg() correctly.

Except that it does not! Because of commit 44fe84459faf
("locking/atomic: Fix atomic_try_cmpxchg() semantics"),
atomic_try_cmpxchg() on x86 has a branch even if the caller doesn't
care about the success of the cmpxchg and only wants to use the old
value. For example, for code like:

    // Uses the latest value regardless, same as atomic_cmpxchg() in C.
    let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);

it will still generate code:

    movl     $0x40, %ecx
    movl     $0x34, %eax
    lock cmpxchgl %ecx, 0x4(%rsp)
    jne      1f
    2:
    ...
    1:
    movl     %eax, %ecx
    jmp      2b

One could attempt to write an x86 try_cmpxchg_exclusive() for Rust use
only: the Rust function takes a `&mut` for the old pointer, which must
be exclusive to the function, so it would be unsafe to pass some shared
pointer to it. But maybe I'm missing something?

Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic/generic.rs | 151 +++++++++++++++++++++++++++++
 1 file changed, 151 insertions(+)

diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index 204da38e2691..bfccc4336c75 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -251,3 +251,154 @@ pub fn store(&self, v: T, _: Ordering) {
         };
     }
 }
+
+impl<T: AllowAtomic> Atomic<T>
+where
+    T::Repr: AtomicHasXchgOps,
+{
+    /// Atomic exchange.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.xchg(52, Acquire));
+    /// assert_eq!(52, x.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn xchg<Ordering: All>(&self, v: T, _: Ordering) -> T {
+        let v = T::into_repr(v);
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_xchg*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a `*mut T` is a valid `*mut T::Repr`.
+        //     Therefore `a` is a valid pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        // - For the extra safety requirement of the usage on pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        let ret = unsafe {
+            match Ordering::ORDER {
+                OrderingDesc::Full => T::Repr::atomic_xchg(a, v),
+                OrderingDesc::Acquire => T::Repr::atomic_xchg_acquire(a, v),
+                OrderingDesc::Release => T::Repr::atomic_xchg_release(a, v),
+                OrderingDesc::Relaxed => T::Repr::atomic_xchg_relaxed(a, v),
+            }
+        };
+
+        T::from_repr(ret)
+    }
+
+    /// Atomic compare and exchange.
+    ///
+    /// Compare: The comparison is done via a byte-level comparison between the atomic variable
+    /// and the `old` value.
+    ///
+    /// Ordering: A failed compare and exchange does not provide any ordering; the read part of a
+    /// failed cmpxchg should be treated as a relaxed read.
+    ///
+    /// Returns `Ok(value)` if the cmpxchg succeeds, and `value` is guaranteed to be equal to
+    /// `old`; otherwise returns `Err(value)`, and `value` is the value of the atomic variable
+    /// when the cmpxchg was happening.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Full, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// // Checks whether the cmpxchg succeeded.
+    /// let success = x.cmpxchg(52, 64, Relaxed).is_ok();
+    /// # assert!(!success);
+    ///
+    /// // Checks whether the cmpxchg failed.
+    /// let failure = x.cmpxchg(52, 64, Relaxed).is_err();
+    /// # assert!(failure);
+    ///
+    /// // Uses the old value if it failed, probably to retry the cmpxchg.
+    /// match x.cmpxchg(52, 64, Relaxed) {
+    ///     Ok(_) => { },
+    ///     Err(old) => {
+    ///         // do something with `old`.
+    ///         # assert_eq!(old, 42);
+    ///     }
+    /// }
+    ///
+    /// // Uses the latest value regardless, same as atomic_cmpxchg() in C.
+    /// let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
+    /// # assert_eq!(42, latest);
+    /// assert_eq!(64, x.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn cmpxchg<Ordering: All>(&self, mut old: T, new: T, o: Ordering) -> Result<T, T> {
+        if self.try_cmpxchg(&mut old, new, o) {
+            Ok(old)
+        } else {
+            Err(old)
+        }
+    }
+
+    /// Atomic compare and exchange, returning whether the operation succeeds.
+    ///
+    /// The "Compare" and "Ordering" parts are the same as in [`Atomic::cmpxchg`].
+    ///
+    /// Returns `true` if the cmpxchg succeeds; otherwise returns `false`, with `old` updated to
+    /// the value of the atomic variable when the cmpxchg was happening.
+    #[inline(always)]
+    fn try_cmpxchg<Ordering: All>(&self, old: &mut T, new: T, _: Ordering) -> bool {
+        let old = (old as *mut T).cast::<T::Repr>();
+        let new = T::into_repr(new);
+        let a = self.0.get().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_try_cmpxchg*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid
+        //     pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        //   - `old` is a valid pointer to write to because it comes from a mutable reference.
+        // - For the extra safety requirement of the usage on pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        unsafe {
+            match Ordering::ORDER {
+                OrderingDesc::Full => T::Repr::atomic_try_cmpxchg(a, old, new),
+                OrderingDesc::Acquire => T::Repr::atomic_try_cmpxchg_acquire(a, old, new),
+                OrderingDesc::Release => T::Repr::atomic_try_cmpxchg_release(a, old, new),
+                OrderingDesc::Relaxed => T::Repr::atomic_try_cmpxchg_relaxed(a, old, new),
+            }
+        }
+    }
+
+    /// Atomic compare and exchange, returning a [`Result`].
+    ///
+    /// The "Compare" and "Ordering" parts are the same as in [`Atomic::cmpxchg`].
+    ///
+    /// Returns `Ok(value)` if the cmpxchg succeeds, and `value` is guaranteed to be equal to
+    /// `old`; otherwise returns `Err(value)`, and `value` is the value of the atomic variable
+    /// when the cmpxchg was happening.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
+    ///
+    /// let x = Atomic::new(42i32);
+    ///
+    /// assert!(x.compare_exchange(52, 64, Acquire).is_err());
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// assert!(x.compare_exchange(42, 64, Acquire).is_ok());
+    /// assert_eq!(64, x.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn compare_exchange<Ordering: All>(&self, mut old: T, new: T, o: Ordering) -> Result<T, T> {
+        if self.try_cmpxchg(&mut old, new, o) {
+            Ok(old)
+        } else {
+            Err(old)
+        }
+    }
+}
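The `Err(old)` return value is designed for the classic
compare-and-swap retry loop; here is a minimal sketch (not part of the
patch, and assuming this series is applied), with `fetch_max()` as a
hypothetical helper:

    use kernel::sync::atomic::{Atomic, Full, Relaxed};

    /// Atomically raises `v` to at least `new`, returning the previous value.
    fn fetch_max(v: &Atomic<i32>, new: i32) -> i32 {
        let mut old = v.load(Relaxed);
        loop {
            if old >= new {
                // Already large enough; nothing to publish.
                return old;
            }
            match v.cmpxchg(old, new, Full) {
                // Succeeded: `prev` is the value we replaced (equal to `old`).
                Ok(prev) => return prev,
                // Lost a race: retry with the freshly observed value.
                Err(cur) => old = cur,
            }
        }
    }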
From patchwork Fri Nov 1 06:02:29 2024
From: Boqun Feng
Subject: [RFC v2 06/13] rust: sync: atomic: Add the framework of arithmetic operations
Date: Thu, 31 Oct 2024 23:02:29 -0700
Message-ID: <20241101060237.1185533-7-boqun.feng@gmail.com>
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>

One important set of atomic operations is the arithmetic operations,
i.e. add(), sub(), fetch_add(), add_return(), etc. However, it may not
make sense for all the types that implement `AllowAtomic` to have
arithmetic operations: for example, a `Foo(u32)` may not have a
reasonable add() or sub(). Besides, subword types (`u8` and `u16`)
currently don't have atomic arithmetic operations even on the C side,
and might not have them in the future in Rust (because they are usually
suboptimal on a few architectures). Therefore add a subtrait of
`AllowAtomic` describing which types have and can do atomic arithmetic
operations.

A few things about this `AllowAtomicArithmetic` trait:

* It has an associated type `Delta` instead of using
  `AllowAtomic::Repr` because a `Bar(u32)` (whose `Repr` is `i32`) may
  not want an `add(&self, i32)`, but an `add(&self, u32)`.

* `AtomicImpl` types already implement an `AtomicHasArithmeticOps`
  trait, so add a blanket implementation for them. In the future, `i8`
  and `i16` may implement `AtomicImpl` but not `AtomicHasArithmeticOps`
  if arithmetic operations are not available.

Only add() and fetch_add() are added. The rest will be added in the
future.
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic/generic.rs | 102 +++++++++++++++++++++++++++++
 1 file changed, 102 insertions(+)

diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index bfccc4336c75..a75c3e9f4c89 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -3,6 +3,7 @@
 //! Generic atomic primitives.
 
 use super::ops::*;
+use super::ordering;
 use super::ordering::*;
 use crate::types::Opaque;
 
@@ -54,6 +55,23 @@ fn from_repr(repr: Self::Repr) -> Self {
     }
 }
 
+/// Atomics that allow arithmetic operations with an integer type.
+pub trait AllowAtomicArithmetic: AllowAtomic {
+    /// The delta type for arithmetic operations.
+    type Delta;
+
+    /// Converts [`Self::Delta`] into the representation of the atomic type.
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr;
+}
+
+impl<T: AtomicImpl> AllowAtomicArithmetic for T {
+    type Delta = Self;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d
+    }
+}
+
 impl<T: AllowAtomic> Atomic<T> {
     /// Creates a new atomic.
     pub const fn new(v: T) -> Self {
@@ -402,3 +420,87 @@ pub fn compare_exchange(&self, mut old: T, new: T, o: Ordering) -> Result<T, T>
     }
 }
+
+impl<T: AllowAtomicArithmetic> Atomic<T>
+where
+    T::Repr: AtomicHasArithmeticOps,
+{
+    /// Atomic add.
+    ///
+    /// The addition is a wrapping addition.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// x.add(12, Relaxed);
+    ///
+    /// assert_eq!(54, x.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn add<Ordering: RelaxedOnly>(&self, v: T::Delta, _: Ordering) {
+        let v = T::delta_into_repr(v);
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_add() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid
+        //     pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        // - For the extra safety requirement of the usage on pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        unsafe {
+            T::Repr::atomic_add(a, v);
+        }
+    }
+
+    /// Atomic fetch and add.
+    ///
+    /// The addition is a wrapping addition.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Acquire, Full, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// assert_eq!(54, { x.fetch_add(12, Acquire); x.load(Relaxed) });
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// assert_eq!(54, { x.fetch_add(12, Full); x.load(Relaxed) });
+    /// ```
+    #[inline(always)]
+    pub fn fetch_add<Ordering: All>(&self, v: T::Delta, _: Ordering) -> T {
+        let v = T::delta_into_repr(v);
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_fetch_add*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid
+        //     pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        // - For the extra safety requirement of the usage on pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        let ret = unsafe {
+            match Ordering::ORDER {
+                ordering::OrderingDesc::Full => T::Repr::atomic_fetch_add(a, v),
+                ordering::OrderingDesc::Acquire => T::Repr::atomic_fetch_add_acquire(a, v),
+                ordering::OrderingDesc::Release => T::Repr::atomic_fetch_add_release(a, v),
+                ordering::OrderingDesc::Relaxed => T::Repr::atomic_fetch_add_relaxed(a, v),
+            }
+        };
+
+        T::from_repr(ret)
+    }
+}
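As a usage sketch of the two entry points (not part of the patch, and
assuming this series is applied): add() is for callers that need
neither the old value nor any ordering, while fetch_add() returns the
old value and accepts any ordering, matching C's atomic_fetch_add*()
family; `hits` is a made-up counter for illustration:

    use kernel::sync::atomic::{Atomic, Full, Relaxed};

    fn demo() {
        // `i64` gets `AllowAtomicArithmetic` via the blanket
        // implementation for `AtomicImpl` types, with `Delta = i64`.
        let hits = Atomic::new(0i64);

        // No return value and no ordering needed: the relaxed-only add().
        hits.add(1, Relaxed);

        // fetch_add() returns the old value; `Full` makes it
        // fully-ordered, like C's atomic_fetch_add().
        let before = hits.fetch_add(1, Full);
        assert_eq!(before, 1);
        assert_eq!(hits.load(Relaxed), 2);
    }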
From patchwork Fri Nov 1 06:02:30 2024
From: Boqun Feng
Subject: [RFC v2 07/13] rust: sync: atomic: Add Atomic<u32> and Atomic<u64>
Date: Thu, 31 Oct 2024 23:02:30 -0700
Message-ID: <20241101060237.1185533-8-boqun.feng@gmail.com>
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>

Add generic atomic support for basic unsigned types that have an
`AtomicImpl` with the same size and alignment.

Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic.rs | 80 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index b791abc59b61..b2e81e22c105 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -22,3 +22,83 @@
 
 pub use generic::Atomic;
 pub use ordering::{Acquire, Full, Relaxed, Release};
+
+/// ```rust
+/// use kernel::sync::atomic::{Atomic, Relaxed};
+///
+/// let x = Atomic::new(42u64);
+///
+/// assert_eq!(42, x.load(Relaxed));
+/// ```
+// SAFETY: `u64` and `i64` have the same size and alignment.
+unsafe impl generic::AllowAtomic for u64 {
+    type Repr = i64;
+
+    fn into_repr(self) -> Self::Repr {
+        self as _
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr as _
+    }
+}
+
+/// ```rust
+/// use kernel::sync::atomic::{Atomic, Full, Relaxed};
+///
+/// let x = Atomic::new(42u64);
+///
+/// assert_eq!(42, x.fetch_add(12, Full));
+/// assert_eq!(54, x.load(Relaxed));
+///
+/// x.add(13, Relaxed);
+///
+/// assert_eq!(67, x.load(Relaxed));
+/// ```
+impl generic::AllowAtomicArithmetic for u64 {
+    type Delta = u64;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d as _
+    }
+}
+
+/// ```rust
+/// use kernel::sync::atomic::{Atomic, Relaxed};
+///
+/// let x = Atomic::new(42u32);
+///
+/// assert_eq!(42, x.load(Relaxed));
+/// ```
+// SAFETY: `u32` and `i32` have the same size and alignment.
+unsafe impl generic::AllowAtomic for u32 {
+    type Repr = i32;
+
+    fn into_repr(self) -> Self::Repr {
+        self as _
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr as _
+    }
+}
+
+/// ```rust
+/// use kernel::sync::atomic::{Atomic, Full, Relaxed};
+///
+/// let x = Atomic::new(42u32);
+///
+/// assert_eq!(42, x.fetch_add(12, Full));
+/// assert_eq!(54, x.load(Relaxed));
+///
+/// x.add(13, Relaxed);
+///
+/// assert_eq!(67, x.load(Relaxed));
+/// ```
+impl generic::AllowAtomicArithmetic for u32 {
+    type Delta = u32;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d as _
+    }
+}
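One consequence of the signed `Repr` worth noting (a sketch, not part
of the patch, and assuming this series is applied): the conversions are
plain `as` casts and the addition is wrapping, so the unsigned types
inherit the wraparound behavior of the C atomics instead of panicking
on overflow:

    use kernel::sync::atomic::{Atomic, Relaxed};

    fn demo() {
        let x = Atomic::new(u32::MAX);

        // u32::MAX round-trips through `i32` as -1; adding 1 wraps to
        // 0, even in debug builds, mirroring atomic_add() in C.
        x.add(1, Relaxed);
        assert_eq!(0, x.load(Relaxed));
    }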
, Luc Maranget , "Paul E. McKenney" , Akira Yokosawa , Daniel Lustig , Joel Fernandes , Nathan Chancellor , Nick Desaulniers , kent.overstreet@gmail.com, Greg Kroah-Hartman , elver@google.com, Mark Rutland , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" , Catalin Marinas , torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross , dakr@redhat.com, Frederic Weisbecker , Neeraj Upadhyay , Josh Triplett , Uladzislau Rezki , Steven Rostedt , Mathieu Desnoyers , Lai Jiangshan , Zqiang , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org Subject: [RFC v2 08/13] rust: sync: atomic: Add Atomic<{usize,isize}> Date: Thu, 31 Oct 2024 23:02:31 -0700 Message-ID: <20241101060237.1185533-9-boqun.feng@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com> References: <20241101060237.1185533-1-boqun.feng@gmail.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20241031_230406_997987_C0ED6091 X-CRM114-Status: GOOD ( 13.08 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Add generic atomic support for `usize` and `isize`. Note that instead of mapping directly to `atomic_long_t`, the represention type (`AllowAtomic::Repr`) is selected based on CONFIG_64BIT. This reduces the necessarity of creating `atomic_long_*` helpers, which could save the binary size of kernel if inline helpers are not available. Signed-off-by: Boqun Feng --- rust/kernel/sync/atomic.rs | 71 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 71 insertions(+) diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs index b2e81e22c105..4166ad48604f 100644 --- a/rust/kernel/sync/atomic.rs +++ b/rust/kernel/sync/atomic.rs @@ -102,3 +102,74 @@ fn delta_into_repr(d: Self::Delta) -> Self::Repr { d as _ } } + +// SAFETY: `usize` has the same size and the alignment as `i64` for 64bit and the same as `i32` for +// 32bit. +unsafe impl generic::AllowAtomic for usize { + #[cfg(CONFIG_64BIT)] + type Repr = i64; + #[cfg(not(CONFIG_64BIT))] + type Repr = i32; + + fn into_repr(self) -> Self::Repr { + self as _ + } + + fn from_repr(repr: Self::Repr) -> Self { + repr as _ + } +} + +/// ```rust +/// use kernel::sync::atomic::{Atomic, Full, Relaxed}; +/// +/// let x = Atomic::new(42usize); +/// +/// assert_eq!(42, x.fetch_add(12, Full)); +/// assert_eq!(54, x.load(Relaxed)); +/// +/// x.add(13, Relaxed); +/// +/// assert_eq!(67, x.load(Relaxed)); +/// ``` +impl generic::AllowAtomicArithmetic for usize { + type Delta = usize; + + fn delta_into_repr(d: Self::Delta) -> Self::Repr { + d as _ + } +} + +// SAFETY: `isize` has the same size and the alignment as `i64` for 64bit and the same as `i32` for +// 32bit. 
+unsafe impl generic::AllowAtomic for isize {
+    #[cfg(CONFIG_64BIT)]
+    type Repr = i64;
+    #[cfg(not(CONFIG_64BIT))]
+    type Repr = i32;
+
+    fn into_repr(self) -> Self::Repr {
+        self as _
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr as _
+    }
+}
+
+/// ```rust
+/// use kernel::sync::atomic::{Atomic, Full, Relaxed};
+///
+/// let x = Atomic::new(42isize);
+///
+/// assert_eq!(42, x.fetch_add(12, Full));
+/// assert_eq!(54, x.load(Relaxed));
+///
+/// x.add(13, Relaxed);
+///
+/// assert_eq!(67, x.load(Relaxed));
+/// ```
+impl generic::AllowAtomicArithmetic for isize {
+    type Delta = isize;
+
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        d as _
+    }
+}
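[Editor's illustration, not part of the patch: a usage sketch for the new `Atomic<usize>`. Because the `Repr` selection above happens at compile time, the same code works unchanged on 32-bit and 64-bit kernels; `Tickets` is a hypothetical type.]

```rust
use kernel::sync::atomic::{Atomic, Relaxed};

/// A hypothetical wrapping ticket dispenser; `usize` keeps the arithmetic
/// in the platform's native word width.
struct Tickets(Atomic<usize>);

impl Tickets {
    fn new() -> Self {
        Self(Atomic::new(0usize))
    }

    /// Hands out the next ticket. `fetch_add()` returns the old value, and
    /// the underlying kernel atomics wrap on overflow.
    fn next(&self) -> usize {
        self.0.fetch_add(1, Relaxed)
    }
}
```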
, Luc Maranget , "Paul E. McKenney" , Akira Yokosawa , Daniel Lustig , Joel Fernandes , Nathan Chancellor , Nick Desaulniers , kent.overstreet@gmail.com, Greg Kroah-Hartman , elver@google.com, Mark Rutland , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" , Catalin Marinas , torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross , dakr@redhat.com, Frederic Weisbecker , Neeraj Upadhyay , Josh Triplett , Uladzislau Rezki , Steven Rostedt , Mathieu Desnoyers , Lai Jiangshan , Zqiang , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org Subject: [RFC v2 09/13] rust: sync: atomic: Add Atomic<*mut T> Date: Thu, 31 Oct 2024 23:02:32 -0700 Message-ID: <20241101060237.1185533-10-boqun.feng@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com> References: <20241101060237.1185533-1-boqun.feng@gmail.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20241031_230408_509402_6AC84E42 X-CRM114-Status: GOOD ( 16.56 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Add atomic support for raw pointer values, similar to `isize` and `usize`, the representation type is selected based on CONFIG_64BIT. `*mut T` is not `Send`, however `Atomic<*mut T>` definitely needs to be a `Sync`, and that's the whole point of atomics: being able to have multiple shared references in different threads so that they can sync with each other. As a result, a pointer value will be transferred from one thread to another via `Atomic<*mut T>`: x.store(p1, Relaxed); let p = x.load(p1, Relaxed); This means a raw pointer value (`*mut T`) needs to be able to transfer across thread boundaries, which is essentially `Send`. To reflect this in the type system, and based on the fact that pointer values can be transferred safely (only using them to dereference is unsafe), as suggested by Alice, extend the `AllowAtomic` trait to include a customized `Send` semantics, that is: `impl AllowAtomic` has to be safe to be transferred across thread boundaries. Suggested-by: Alice Ryhl Signed-off-by: Boqun Feng --- rust/kernel/sync/atomic.rs | 24 ++++++++++++++++++++++++ rust/kernel/sync/atomic/generic.rs | 16 +++++++++++++--- 2 files changed, 37 insertions(+), 3 deletions(-) diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs index 4166ad48604f..e62c3cd1d3ca 100644 --- a/rust/kernel/sync/atomic.rs +++ b/rust/kernel/sync/atomic.rs @@ -173,3 +173,27 @@ fn delta_into_repr(d: Self::Delta) -> Self::Repr { d as _ } } + +/// ```rust +/// use kernel::sync::atomic::{Atomic, Relaxed}; +/// +/// let x = Atomic::new(core::ptr::null_mut::()); +/// +/// assert!(x.load(Relaxed).is_null()); +/// ``` +// SAFETY: A `*mut T` has the same size and the alignment as `i64` for 64bit and the same as `i32` +// for 32bit. And it's safe to transfer the ownership of a pointer value to another thread. 
+unsafe impl<T> generic::AllowAtomic for *mut T {
+    #[cfg(CONFIG_64BIT)]
+    type Repr = i64;
+    #[cfg(not(CONFIG_64BIT))]
+    type Repr = i32;
+
+    fn into_repr(self) -> Self::Repr {
+        self as _
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr as _
+    }
+}

diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index a75c3e9f4c89..cff98469ed35 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -19,6 +19,10 @@
 #[repr(transparent)]
 pub struct Atomic<T: AllowAtomic>(Opaque<T>);

+// SAFETY: `Atomic<T>` is safe to send between execution contexts, because `T` is
+// `AllowAtomic` and `AllowAtomic`'s safety requirement guarantees that.
+unsafe impl<T: AllowAtomic> Send for Atomic<T> {}
+
 // SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
 unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}

@@ -30,8 +34,13 @@ unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
 ///
 /// # Safety
 ///
-/// [`Self`] must have the same size and alignment as [`Self::Repr`].
-pub unsafe trait AllowAtomic: Sized + Send + Copy {
+/// - [`Self`] must have the same size and alignment as [`Self::Repr`].
+/// - It must be safe to transfer ownership of a value of type [`Self`] from one execution
+///   context to another. This is a [`Send`]-like requirement; it lives on [`AllowAtomic`]
+///   rather than as a `Send` supertrait because `*mut T` is not [`Send`], yet it is a basic
+///   type that needs to support atomic operations. The requirement is automatically
+///   satisfied if the type is [`Send`].
+pub unsafe trait AllowAtomic: Sized + Copy {
     /// The backing atomic implementation type.
     type Repr: AtomicImpl;

@@ -42,7 +51,8 @@ pub unsafe trait AllowAtomic: Sized + Send + Copy {
     fn from_repr(repr: Self::Repr) -> Self;
 }

-// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment.
+// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment. And all
+// `AtomicImpl` types are `Send`.
 unsafe impl<T: AtomicImpl> AllowAtomic for T {
     type Repr = Self;
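[Editor's illustration, not part of the patch: a sketch of the `Send`-like reasoning above, using hypothetical functions. Two contexts share an `Atomic<*mut u64>`; only the pointer value crosses the boundary, and dereferencing it still needs its own safety argument.]

```rust
use kernel::sync::atomic::{Atomic, Relaxed};

/// Publishes a pointer value. `Atomic<*mut T>` is `Sync`, so `slot` may be
/// shared across execution contexts even though `*mut T` is not `Send`.
fn publish(slot: &Atomic<*mut u64>, p: *mut u64) {
    slot.store(p, Relaxed);
}

/// Picks up a previously published pointer value. Only the value is
/// transferred; turning it back into a reference is a separate unsafe step
/// that needs, e.g., Release/Acquire publication to be sound.
fn consume(slot: &Atomic<*mut u64>) -> *mut u64 {
    slot.load(Relaxed)
}
```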
Peter Anvin" , Catalin Marinas , torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross , dakr@redhat.com, Frederic Weisbecker , Neeraj Upadhyay , Josh Triplett , Uladzislau Rezki , Steven Rostedt , Mathieu Desnoyers , Lai Jiangshan , Zqiang , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org Subject: [RFC v2 10/13] rust: sync: atomic: Add arithmetic ops for Atomic<*mut T> Date: Thu, 31 Oct 2024 23:02:33 -0700 Message-ID: <20241101060237.1185533-11-boqun.feng@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com> References: <20241101060237.1185533-1-boqun.feng@gmail.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20241031_230409_823952_A56B0151 X-CRM114-Status: GOOD ( 14.63 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org (This is more an RFC) Add arithmetic operations support for `Atomic<*mut T>`. Currently the semantics of arithmetic atomic operation is the same as pointer arithmetic, that is, e.g. `Atomic<*mut u64>::add(1)` is adding 8 (`size_of::`) to the pointer value. In Rust std library, there are two sets of pointer arithmetic for `AtomicPtr`: * ptr_add() and ptr_sub(), which is the same as Atomic<*mut T>::add(), pointer arithmetic. * byte_add() and byte_sub(), which use the input as byte offset to change the pointer value, e.g. byte_add(1) means adding 1 to the pointer value. We can either take the approach in the current patch and add byte_add() later on if needed, or start with ptr_add() and byte_add() naming. Signed-off-by: Boqun Feng --- rust/kernel/sync/atomic.rs | 29 +++++++++++++++++++++++++++++ 1 file changed, 29 insertions(+) diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs index e62c3cd1d3ca..cbe5d40d9e36 100644 --- a/rust/kernel/sync/atomic.rs +++ b/rust/kernel/sync/atomic.rs @@ -197,3 +197,32 @@ fn from_repr(repr: Self::Repr) -> Self { repr as _ } } + +/// ```rust +/// use kernel::sync::atomic::{Atomic, Relaxed}; +/// +/// let s: &mut [i32] = &mut [1, 3, 2, 4]; +/// +/// let x = Atomic::new(s.as_mut_ptr()); +/// +/// x.add(1, Relaxed); +/// +/// let ptr = x.fetch_add(1, Relaxed); // points to the 2nd element. +/// let ptr2 = x.load(Relaxed); // points to the 3rd element. +/// +/// // SAFETY: `ptr` and `ptr2` are valid pointers to the 2nd and 3rd elements of `s` with writing +/// // provenance, and no other thread is accessing these elements. +/// unsafe { core::ptr::swap(ptr, ptr2); } +/// +/// assert_eq!(s, &mut [1, 2, 3, 4]); +/// ``` +impl generic::AllowAtomicArithmetic for *mut T { + type Delta = isize; + + /// The behavior of arithmetic operations + fn delta_into_repr(d: Self::Delta) -> Self::Repr { + // Since atomic arithmetic operations are wrapping, so a wrapping_mul() here suffices even + // if overflow may happen. 
+        d.wrapping_mul(core::mem::size_of::<T>() as _) as _
+    }
+}
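[Editor's illustration, not part of the patch: the scaling in `delta_into_repr()` means `add(1)` advances the pointer by one element, not one byte. A small sketch, using a hypothetical function, of the behavior under the current pointer-arithmetic semantics:]

```rust
use kernel::sync::atomic::{Atomic, Relaxed};

/// Advances the stored pointer by one `u64` element, i.e. by
/// `size_of::<u64>()` == 8 bytes, because the delta is multiplied by the
/// pointee size before it reaches the underlying atomic add.
fn skip_one(x: &Atomic<*mut u64>) {
    x.add(1, Relaxed);
    // A future byte-offset operation, as discussed above, would instead
    // advance the value by a single byte per unit of delta.
}
```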
Peter Anvin" , Catalin Marinas , torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross , dakr@redhat.com, Frederic Weisbecker , Neeraj Upadhyay , Josh Triplett , Uladzislau Rezki , Steven Rostedt , Mathieu Desnoyers , Lai Jiangshan , Zqiang , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org Subject: [RFC v2 11/13] rust: sync: Add memory barriers Date: Thu, 31 Oct 2024 23:02:34 -0700 Message-ID: <20241101060237.1185533-12-boqun.feng@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com> References: <20241101060237.1185533-1-boqun.feng@gmail.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20241031_230411_166775_7710BF85 X-CRM114-Status: GOOD ( 20.50 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Memory barriers are building blocks for concurrent code, hence provide a minimal set of them. The compiler barrier, barrier(), is implemented in inline asm instead of using core::sync::atomic::compiler_fence() because memory models are different: kernel's atomics are implemented in inline asm therefore the compiler barrier should be implemented in inline asm as well. Signed-off-by: Boqun Feng --- rust/helpers/helpers.c | 1 + rust/kernel/sync.rs | 1 + rust/kernel/sync/barrier.rs | 67 +++++++++++++++++++++++++++++++++++++ 3 files changed, 69 insertions(+) create mode 100644 rust/kernel/sync/barrier.rs diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c index ab5a3f1be241..f4a94833b29d 100644 --- a/rust/helpers/helpers.c +++ b/rust/helpers/helpers.c @@ -8,6 +8,7 @@ */ #include "atomic.c" +#include "barrier.c" #include "blk.c" #include "bug.c" #include "build_assert.c" diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs index 66ac3752ca71..0d0b19441ae8 100644 --- a/rust/kernel/sync.rs +++ b/rust/kernel/sync.rs @@ -9,6 +9,7 @@ mod arc; pub mod atomic; +pub mod barrier; mod condvar; pub mod lock; mod locked_by; diff --git a/rust/kernel/sync/barrier.rs b/rust/kernel/sync/barrier.rs new file mode 100644 index 000000000000..277aa09747bf --- /dev/null +++ b/rust/kernel/sync/barrier.rs @@ -0,0 +1,67 @@ +// SPDX-License-Identifier: GPL-2.0 + +//! Memory barriers. +//! +//! These primitives have the same semantics as their C counterparts: and the precise definitions of +//! semantics can be found at [`LKMM`]. +//! +//! [`LKMM`]: srctree/tools/memory-mode/ + +/// A compiler barrier. +/// +/// An explicic compiler barrier function that prevents the compiler from moving the memory +/// accesses either side of it to the other side. +pub fn barrier() { + // By default, Rust inline asms are treated as being able to access any memory or flags, hence + // it suffices as a compiler barrier. + // + // SAFETY: An empty asm block should be safe. + unsafe { + core::arch::asm!(""); + } +} + +/// A full memory barrier. +/// +/// A barrier function that prevents both the compiler and the CPU from moving the memory accesses +/// either side of it to the other side. +pub fn smp_mb() { + if cfg!(CONFIG_SMP) { + // SAFETY: `smp_mb()` is safe to call. + unsafe { + bindings::smp_mb(); + } + } else { + barrier(); + } +} + +/// A write-write memory barrier. 
+///
+/// A barrier function that prevents both the compiler and the CPU from moving the memory
+/// write accesses on either side of it to the other side.
+pub fn smp_wmb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_wmb()` is safe to call.
+        unsafe {
+            bindings::smp_wmb();
+        }
+    } else {
+        barrier();
+    }
+}
+
+/// A read-read memory barrier.
+///
+/// A barrier function that prevents both the compiler and the CPU from moving the memory
+/// read accesses on either side of it to the other side.
+pub fn smp_rmb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_rmb()` is safe to call.
+        unsafe {
+            bindings::smp_rmb();
+        }
+    } else {
+        barrier();
+    }
+}
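[Editor's illustration, not part of the patch: a sketch of the classic message-passing pattern these barriers support, using illustrative functions. `smp_wmb()` on the writer side pairs with `smp_rmb()` on the reader side.]

```rust
use kernel::sync::atomic::{Atomic, Relaxed};
use kernel::sync::barrier::{smp_rmb, smp_wmb};

fn writer(data: &Atomic<i32>, flag: &Atomic<i32>) {
    data.store(42, Relaxed);
    // Order the data store before the flag store.
    smp_wmb();
    flag.store(1, Relaxed);
}

fn reader(data: &Atomic<i32>, flag: &Atomic<i32>) -> Option<i32> {
    if flag.load(Relaxed) == 1 {
        // Pairs with the smp_wmb() above: order the flag load before the
        // data load, so observing flag == 1 guarantees observing 42.
        smp_rmb();
        Some(data.load(Relaxed))
    } else {
        None
    }
}
```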
From patchwork Fri Nov 1 06:02:35 2024
From: Boqun Feng
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev
Subject: [RFC v2 12/13] rust: add rcu abstraction
Date: Thu, 31 Oct 2024 23:02:35 -0700
Message-ID: <20241101060237.1185533-13-boqun.feng@gmail.com>
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>
References: <20241101060237.1185533-1-boqun.feng@gmail.com>

From: Wedson Almeida Filho

Add a simple abstraction to guard critical code sections with an RCU
read lock.

Signed-off-by: Wedson Almeida Filho
Signed-off-by: Danilo Krummrich
---
 rust/helpers/helpers.c  |  1 +
 rust/helpers/rcu.c      | 13 +++++++++++
 rust/kernel/sync.rs     |  1 +
 rust/kernel/sync/rcu.rs | 52 +++++++++++++++++++++++++++++++++++++++++
 4 files changed, 67 insertions(+)
 create mode 100644 rust/helpers/rcu.c
 create mode 100644 rust/kernel/sync/rcu.rs

diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index f4a94833b29d..65951245879f 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -18,6 +18,7 @@
 #include "mutex.c"
 #include "page.c"
 #include "rbtree.c"
+#include "rcu.c"
 #include "refcount.c"
 #include "signal.c"
 #include "slab.c"

diff --git a/rust/helpers/rcu.c b/rust/helpers/rcu.c
new file mode 100644
index 000000000000..f1cec6583513
--- /dev/null
+++ b/rust/helpers/rcu.c
@@ -0,0 +1,13 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/rcupdate.h>
+
+void rust_helper_rcu_read_lock(void)
+{
+	rcu_read_lock();
+}
+
+void rust_helper_rcu_read_unlock(void)
+{
+	rcu_read_unlock();
+}

diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 0d0b19441ae8..f5a413e1ce30 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -13,6 +13,7 @@
 mod condvar;
 pub mod lock;
 mod locked_by;
+pub mod rcu;

 pub use arc::{Arc, ArcBorrow, UniqueArc};
 pub use condvar::{new_condvar, CondVar, CondVarTimeoutResult};

diff --git a/rust/kernel/sync/rcu.rs b/rust/kernel/sync/rcu.rs
new file mode 100644
index 000000000000..5a35495f69a4
--- /dev/null
+++ b/rust/kernel/sync/rcu.rs
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! RCU support.
+//!
+//! C header: [`include/linux/rcupdate.h`](srctree/include/linux/rcupdate.h)
+
+use crate::bindings;
+use core::marker::PhantomData;
+
+/// Evidence that the RCU read side lock is held on the current thread/CPU.
+///
+/// The type is explicitly not `Send` because this property is per-thread/CPU.
+///
+/// # Invariants
+///
+/// The RCU read side lock is actually held while instances of this guard exist.
+pub struct Guard {
+    _not_send: PhantomData<*mut ()>,
+}
+
+impl Guard {
+    /// Acquires the RCU read side lock and returns a guard.
+    pub fn new() -> Self {
+        // SAFETY: An FFI call with no additional requirements.
+        unsafe { bindings::rcu_read_lock() };
+        // INVARIANT: The RCU read side lock was just acquired above.
+        Self {
+            _not_send: PhantomData,
+        }
+    }
+
+    /// Explicitly releases the RCU read side lock.
+    pub fn unlock(self) {}
+}
+
+impl Default for Guard {
+    fn default() -> Self {
+        Self::new()
+    }
+}
+
+impl Drop for Guard {
+    fn drop(&mut self) {
+        // SAFETY: By the type invariants, the RCU read side lock is held, so it is ok to
+        // unlock it.
+        unsafe { bindings::rcu_read_unlock() };
+    }
+}
+
+/// Acquires the RCU read side lock.
+pub fn read_lock() -> Guard {
+    Guard::new()
+}
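[Editor's illustration, not part of the patch: a minimal usage sketch for the guard. The read-side critical section lasts exactly as long as the `Guard` value lives.]

```rust
use kernel::sync::rcu;

fn read_side() {
    let guard = rcu::read_lock();
    // ... access RCU-protected data here; the guard is evidence that the
    // read side lock is held on this thread/CPU ...
    guard.unlock(); // equivalent to dropping the guard
}
```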
Subject: [RFC v2 13/13] rust: sync: rcu: Add RCU protected pointer
Date: Thu, 31 Oct 2024 23:02:36 -0700
Message-ID: <20241101060237.1185533-14-boqun.feng@gmail.com>
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>
References: <20241101060237.1185533-1-boqun.feng@gmail.com>

An RCU protected pointer is an atomic pointer that can be loaded and
dereferenced by multiple RCU readers, while only one updater/writer at a
time can change the value (usually following a read-copy-update pattern).
This is useful where data is read-mostly. The rationale of this patch is
to provide a proof of concept on how RCU should be exposed to the Rust
world, and it also serves as an example of atomic usage. Similar
mechanisms such as ArcSwap [1] are already widely used.

Provide a `Rcu<P>` type with an atomic pointer implementation. `P` has to
be a `ForeignOwnable`, which means the ownership of an object can be
represented by a pointer-sized value. `Rcu::dereference()` requires an
RCU Guard, which means dereferencing is only valid under RCU read lock
protection. `Rcu::read_copy_update()` is the operation for updaters; it
requires a `Pin<&mut Self>` for exclusive access, since RCU updaters are
normally exclusive with each other.

A lot of RCU functionality, including asynchronous freeing (call_rcu()
and kfree_rcu()), is still missing; that is future work. We also still
need language changes like field projection [2] to provide better
ergonomics.
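To illustrate, here is a minimal sketch of the intended usage, based on
the doc tests included in the diff below (written in the same doc-test
style, so error handling and the hidden `Ok` line follow those examples):

```rust
use core::pin::pin;
use kernel::alloc::{flags, KBox};
use kernel::sync::rcu::{self, Rcu};

let mut x = pin!(Rcu::new(KBox::new(1i32, flags::GFP_KERNEL)?));

// Reader side: dereferencing is only valid while an RCU Guard is held.
let g = rcu::read_lock();
assert_eq!(x.dereference(&g), Some(&1i32));
drop(g);

// Updater side: `replace()` needs `Pin<&mut Self>` for exclusive access.
let old = x.as_mut().replace(KBox::new(2i32, flags::GFP_KERNEL)?);

// Dropping the `RcuOld` waits for a grace period, then frees the old object.
drop(old);
# Ok::<(), Error>(())
```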
Acknowledgment: this work is based on a lot of productive discussions and
hard work from others; these are the ones I can remember (sorry if I
forgot your contribution):

* Wedson started the work on RCU field projection, and Benno followed it
  up and has been working on it as a more general language feature. Also,
  Gary's field-projection repo [3] has been used as an example for
  related discussions.

* During Kangrejos 2023 [4], Gary, Benno and Alice provided a lot of
  feedback on the talk from Paul and me: "If you want to use RCU in Rust
  for Linux kernel..."

* During a recent discussion among Benno, Paul and me, Benno suggested
  using `Pin<&mut>` to guarantee the exclusive access on updater
  operations.

Link: https://crates.io/crates/arc-swap [1]
Link: https://rust-lang.zulipchat.com/#narrow/channel/213817-t-lang/topic/Field.20Projections/near/474648059 [2]
Link: https://github.com/nbdd0121/field-projection [3]
Link: https://kangrejos.com/2023 [4]
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/kernel/sync/rcu.rs | 269 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 268 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/sync/rcu.rs b/rust/kernel/sync/rcu.rs
index 5a35495f69a4..8326b2e0986a 100644
--- a/rust/kernel/sync/rcu.rs
+++ b/rust/kernel/sync/rcu.rs
@@ -5,7 +5,11 @@
 //! C header: [`include/linux/rcupdate.h`](srctree/include/linux/rcupdate.h)
 
 use crate::bindings;
-use core::marker::PhantomData;
+use crate::{
+    sync::atomic::{Atomic, Relaxed, Release},
+    types::ForeignOwnable,
+};
+use core::{marker::PhantomData, pin::Pin, ptr::NonNull};
 
 /// Evidence that the RCU read side lock is held on the current thread/CPU.
 ///
@@ -50,3 +54,266 @@ fn drop(&mut self) {
 pub fn read_lock() -> Guard {
     Guard::new()
 }
+
+/// An RCU protected pointer, the pointed object is protected by RCU.
+///
+/// # Invariants
+///
+/// Either the pointer is null, or it points to a return value of [`P::into_foreign`] and the
+/// atomic variable exclusively owns the pointer.
+pub struct Rcu<P: ForeignOwnable>(Atomic<*mut core::ffi::c_void>, PhantomData<P>);
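+
+// The pointee is stored type-erased as `*mut core::ffi::c_void`: per the
+// `ForeignOwnable` bound, ownership of a `P` can be converted into a single
+// foreign pointer via `into_foreign()` and recovered via `from_foreign()`,
+// while `borrow()` gives temporary reference-like access. This is what the
+// invariants above rely on.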

+
+/// A pointer that has been unpublished, but hasn't waited for a grace period yet.
+///
+/// The pointed object may still have an existing RCU reader. Therefore a grace period is needed
+/// to free the object.
+///
+/// # Invariants
+///
+/// The pointer has to be a return value of [`P::into_foreign`] and [`Self`] exclusively owns the
+/// pointer.
+pub struct RcuOld<P: ForeignOwnable>(NonNull<core::ffi::c_void>, PhantomData<P>);
+
+impl<P: ForeignOwnable> Drop for RcuOld<P> {
+    fn drop(&mut self) {
+        // SAFETY: As long as this is called in a sleepable context, which should be checked by
+        // klint, `synchronize_rcu()` is safe to call.
+        unsafe {
+            bindings::synchronize_rcu();
+        }
+
+        // SAFETY: `self.0` is a return value of `P::into_foreign()`, so it's safe to call
+        // `from_foreign()` on it. Plus, the above `synchronize_rcu()` guarantees there is no
+        // existing `ForeignOwnable::borrow()` anymore.
+        let p: P = unsafe { P::from_foreign(self.0.as_ptr()) };
+        drop(p);
+    }
+}
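+
+// Illustrative updater flow: `let old = rcu.replace(new)` unpublishes the old
+// object, and `drop(old)` then blocks in `synchronize_rcu()` before freeing
+// it, so a reader that obtained a `Borrowed` under an earlier
+// `rcu::read_lock()` keeps a valid reference until it drops its Guard.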

+impl<P: ForeignOwnable> Rcu<P> {
+    /// Creates a new RCU pointer.
+    pub fn new(p: P) -> Self {
+        // INVARIANTS: The return value of `p.into_foreign()` is directly stored in the atomic
+        // variable.
+        Self(Atomic::new(p.into_foreign().cast_mut()), PhantomData)
+    }
+
+    /// Dereferences the protected object.
+    ///
+    /// Returns `Some(b)`, where `b` is a reference-like borrowed type, if the pointer is not
+    /// null; otherwise returns `None`.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// # use kernel::alloc::{flags, KBox};
+    /// use kernel::sync::rcu::{self, Rcu};
+    ///
+    /// let x = Rcu::new(KBox::new(100i32, flags::GFP_KERNEL)?);
+    ///
+    /// let g = rcu::read_lock();
+    /// // Read under RCU read lock protection.
+    /// let v = x.dereference(&g);
+    ///
+    /// assert_eq!(v, Some(&100i32));
+    ///
+    /// # Ok::<(), Error>(())
+    /// ```
+    ///
+    /// Note the borrowed access can outlive the reference of the [`Rcu<P>`]; this is
+    /// because as long as the RCU read lock is held, the pointed object should remain valid.
+    ///
+    /// In the following case, the main thread is responsible for the ownership of `shared`,
+    /// i.e. it will drop it eventually, and a work item can temporarily access `shared` via
+    /// `cloned`, but the use of the dereferenced object doesn't depend on `cloned`'s existence.
+    ///
+    /// ```rust
+    /// # use kernel::alloc::{flags, KBox};
+    /// # use kernel::workqueue::system;
+    /// # use kernel::sync::{Arc, atomic::{Atomic, Acquire, Release}};
+    /// use kernel::sync::rcu::{self, Rcu};
+    ///
+    /// struct Config {
+    ///     a: i32,
+    ///     b: i32,
+    ///     c: i32,
+    /// }
+    ///
+    /// let config = KBox::new(Config { a: 1, b: 2, c: 3 }, flags::GFP_KERNEL)?;
+    ///
+    /// let shared = Arc::new(Rcu::new(config), flags::GFP_KERNEL)?;
+    /// let cloned = shared.clone();
+    ///
+    /// // Use an atomic to simulate a special refcounting.
+    /// static FLAG: Atomic<i32> = Atomic::new(0);
+    ///
+    /// system().try_spawn(flags::GFP_KERNEL, move || {
+    ///     let g = rcu::read_lock();
+    ///     let v = cloned.dereference(&g).unwrap();
+    ///     drop(cloned); // Release the reference to `shared`.
+    ///     FLAG.store(1, Release);
+    ///
+    ///     // ... but still need to access `v`.
+    ///     assert_eq!(v.a, 1);
+    ///     drop(g);
+    /// });
+    ///
+    /// // Wait until `cloned` is dropped.
+    /// while FLAG.load(Acquire) == 0 {
+    ///     // SAFETY: Sleeping here should be safe.
+    ///     unsafe { kernel::bindings::schedule(); }
+    /// }
+    ///
+    /// drop(shared);
+    ///
+    /// # Ok::<(), Error>(())
+    /// ```
+    pub fn dereference<'rcu>(&self, _rcu_guard: &'rcu Guard) -> Option<P::Borrowed<'rcu>> {
+        // Ordering: The address dependency pairs with the `store(Release)` in
+        // `read_copy_update()`.
+        let ptr = self.0.load(Relaxed);
+
+        if !ptr.is_null() {
+            // SAFETY:
+            // - Since `ptr` is not null, it has to be a return value of `P::into_foreign()`.
+            // - The returned `Borrowed<'rcu>` cannot outlive the RCU Guard; this guarantees the
+            //   return value will only be used under the RCU read lock, and the RCU read lock
+            //   prevents the passing of a grace period that the drop of `RcuOld` or `Rcu` is
+            //   waiting for, therefore no `from_foreign()` will be called for `ptr` as long as
+            //   a `Borrowed` exists. For example:
+            //
+            //    CPU 0                                    CPU 1
+            //    =====                                    =====
+            //    { `x` is a reference to a Rcu<KBox<i32>> }
+            //                                             let g = rcu::read_lock();
+            //
+            //                                             if let Some(b) = x.dereference(&g) {
+            //                                                 // drop(g); cannot be done, since
+            //                                                 // `b` is still alive.
+            //
+            //    if let Some(old) = x.replace(...) {
+            //        // `x` is null now.
+            //                                                 println!("{}", b);
+            //                                             }
+            //    drop(old):
+            //        synchronize_rcu();
+            //                                             drop(g);
+            //        // a grace period passed.
+            //        // No `Borrowed` exists now.
+            //        from_foreign(...);
+            //    }
+            Some(unsafe { P::borrow(ptr) })
+        } else {
+            None
+        }
+    }
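+
+    // Publication ordering, informally: `read_copy_update()` publishes a fully
+    // initialized object with `store(Release)`, while `dereference()` loads
+    // the pointer with `load(Relaxed)` and then accesses it via `P::borrow()`;
+    // the address dependency from the load to the access guarantees a reader
+    // that sees the new pointer also sees the object's initialized contents.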
+
+    /// Read, copy and update the pointer with a new value.
+    ///
+    /// Returns `None` if the pointer's old value is null; otherwise returns `Some(old)`, where
+    /// `old` is a [`RcuOld`] which can be used to free the old object eventually.
+    ///
+    /// The `Pin<&mut Self>` is needed because this function needs exclusive access to the
+    /// [`Rcu<P>`]; otherwise two `read_copy_update()`s may get the same old object and
+    /// double-free it. Using `Pin<&mut Self>` provides the exclusive access that the C side
+    /// requires, with type system checking.
+    ///
+    /// This also has to be `Pin` because a `&mut Self` would allow users to `swap()` safely,
+    /// which would break the atomicity. A [`Rcu<P>`] should be structurally pinned in the
+    /// struct that contains it.
+    ///
+    /// Note that `Pin<&mut Self>` cannot assume noalias here, because [`Atomic`] is an
+    /// [`Opaque`], which has the same effect on aliasing rules as [`UnsafePinned`].
+    ///
+    /// [`UnsafePinned`]: https://rust-lang.github.io/rfcs/3467-unsafe-pinned.html
+    pub fn read_copy_update<F>(self: Pin<&mut Self>, f: F) -> Option<RcuOld<P>>
+    where
+        F: FnOnce(Option<P::Borrowed<'_>>) -> Option<P>,
, + { + // step 1: READ. + // Ordering: Address dependency pairs with the `store(Release)` in read_copy_update(). + let old_ptr = NonNull::new(self.0.load(Relaxed)); + + let old = old_ptr.map(|nonnull| { + // SAFETY: Per type invariants `old_ptr` has to be a value return by a previous + // `into_foreign()`, and the exclusive reference `self` guarantees that `from_foreign()` + // has not been called. + unsafe { P::borrow(nonnull.as_ptr()) } + }); + + // step 2: COPY, or more generally, initializing `new` based on `old`. + let new = f(old); + + // step 3: UPDATE. + if let Some(new) = new { + let new_ptr = new.into_foreign().cast_mut(); + // Ordering: Pairs with the address dependency in `dereference()` and + // `read_copy_update()`. + // INVARIANTS: `new.into_foreign()` is directly store into the atomic variable. + self.0.store(new_ptr, Release); + } else { + // Ordering: Setting to a null pointer doesn't need to be Release. + // INVARIANTS: The atomic variable is set to be null. + self.0.store(core::ptr::null_mut(), Relaxed); + } + + // INVARIANTS: The exclusive reference guarantess that the ownership of a previous + // `into_foreign()` transferred to the `RcuOld`. + Some(RcuOld(old_ptr?, PhantomData)) + } + + /// Replaces the pointer with new value. + /// + /// Returns `None` if the pointer's old value is null, otherwise returns `Some(old)`, where old + /// is a [`RcuOld`] which can be used to free the old object eventually. + /// + /// # Examples + /// + /// ```rust + /// use core::pin::pin; + /// # use kernel::alloc::{flags, KBox}; + /// use kernel::sync::rcu::{self, Rcu}; + /// + /// let mut x = pin!(Rcu::new(KBox::new(100i32, flags::GFP_KERNEL)?)); + /// let q = KBox::new(101i32, flags::GFP_KERNEL)?; + /// + /// // Read in under RCU read lock protection. + /// let g = rcu::read_lock(); + /// let v = x.dereference(&g); + /// + /// // Replace with a new object. + /// let old = x.as_mut().replace(q); + /// + /// assert!(old.is_some()); + /// + /// // `v` should still read the old value. + /// assert_eq!(v, Some(&100i32)); + /// + /// // New readers should get the new value. + /// assert_eq!(x.dereference(&g), Some(&101i32)); + /// + /// drop(g); + /// + /// // Can free the object outside the read-side critical section. + /// drop(old); + /// # Ok::<(), Error>(()) + /// ``` + pub fn replace(self: Pin<&mut Self>, new: P) -> Option> { + self.read_copy_update(|_| Some(new)) + } +} + +impl Drop for Rcu

+
+impl<P: ForeignOwnable> Drop for Rcu<P> {
+    fn drop(&mut self) {
+        let ptr = *self.0.get_mut();
+        if !ptr.is_null() {
+            // SAFETY: As long as this is called in a sleepable context, which should be checked
+            // by klint, `synchronize_rcu()` is safe to call.
+            unsafe {
+                bindings::synchronize_rcu();
+            }
+
+            // SAFETY: `self.0` is a return value of `P::into_foreign()`, so it's safe to call
+            // `from_foreign()` on it. Plus, the above `synchronize_rcu()` guarantees there is no
+            // existing `ForeignOwnable::borrow()` anymore.
+            drop(unsafe { P::from_foreign(ptr) });
+        }
+    }
+}
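For reference, a minimal sketch of a full `read_copy_update()` step, reusing
the `Config` type from the `dereference()` example above (doc-test style;
values are illustrative). Note that, per the implementation above, returning
`None` from the closure sets the pointer to null rather than keeping the old
object:

```rust
use core::pin::pin;
use kernel::alloc::{flags, KBox};
use kernel::sync::rcu::Rcu;

struct Config {
    a: i32,
    b: i32,
    c: i32,
}

let mut x = pin!(Rcu::new(KBox::new(Config { a: 1, b: 2, c: 3 }, flags::GFP_KERNEL)?));

// READ the current value, COPY it into a new allocation with `a` bumped, and
// UPDATE the pointer to publish the copy; `old` owns the unpublished object.
let old = x.as_mut().read_copy_update(|cur| {
    let a = cur.map(|c| c.a).unwrap_or(0);
    KBox::new(Config { a: a + 1, b: 2, c: 3 }, flags::GFP_KERNEL).ok()
});

// Dropping `old` waits for a grace period, then frees the old `Config`.
drop(old);
# Ok::<(), Error>(())
```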